Google is developing its own version of “stories”, similar to Instagram's and Snapchat's. Today, according to its research blog, the company is testing a new kind of tech to improve the feature: Mobile Real-time Video Segmentation.
The blog explains that video segmentation is a common technique among “movie directors and video content creators to separate the foreground of a scene from the background, and treat them as two different visual layers”. In short, a green screen. But the folks at Google are skipping the costly and time-consuming post-processing by using convolutional neural networks to separate a subject from its background, then letting the creator replace or alter the latter however they want, in real time.
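Once the network has produced a per-pixel foreground mask, swapping the background is a simple compositing step. Here is a minimal NumPy sketch of that idea; the `replace_background` function and the toy arrays are illustrative assumptions, not Google's actual code.

```python
import numpy as np

def replace_background(frame, mask, background):
    """Composite a subject onto a new background using a soft
    segmentation mask (values in [0, 1], as a CNN might output).
    All arrays share the same height and width; frame and
    background are HxWx3 RGB, mask is HxW."""
    mask = mask[..., np.newaxis]  # broadcast over the color channels
    blended = mask * frame + (1.0 - mask) * background
    return blended.astype(np.uint8)

# Toy 2x2 example: mask = 1 keeps the subject pixel,
# mask = 0 shows the new background pixel instead.
frame = np.full((2, 2, 3), 200, dtype=np.uint8)   # "subject" frame
background = np.zeros((2, 2, 3), dtype=np.uint8)  # black backdrop
mask = np.array([[1.0, 0.0],
                 [0.0, 1.0]])                     # hypothetical CNN output

out = replace_background(frame, mask, background)
print(out[0, 0])  # subject kept    -> [200 200 200]
print(out[0, 1])  # background used -> [0 0 0]
```

The soft (fractional) mask values are what let edges blend smoothly; hard 0/1 masks are exactly what produce the blurred, jagged borders the beta results still show.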
Google, via YouTube, is already letting some of its content creators try out the new tech in a beta program. The results still leave room for improvement, with some blurred borders between subjects and backgrounds, but this work in progress is definitely worth keeping an eye on.