The new tool, called Mobile Real-time Video Segmentation, is based on machine learning. The AI-based tech enables vloggers to modify the background in real time however they like. YouTube is reportedly testing the feature, which can also be used as a filter on still images.
Google offered more details on the new tech on the company’s Research Blog. Setting the technical jargon aside, the post sums up the new segmentation technology as one that enables content creators to alter the background “effortlessly,” boosting the quality of their videos without the need for specialized equipment.
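Google has not published the implementation details used here, but the core idea behind background replacement is straightforward once a segmentation model has produced a per-pixel mask separating the person from the scene. As a minimal illustrative sketch (not Google’s actual pipeline), the compositing step can be written in a few lines of NumPy: keep the original pixels where the mask marks the person, and substitute the new background everywhere else.

```python
import numpy as np

def composite_background(frame, mask, background):
    """Blend a replacement background into a frame using a per-pixel
    segmentation mask (1.0 = person, 0.0 = background)."""
    mask = mask[..., np.newaxis]  # add a channel axis so it broadcasts over RGB
    return mask * frame + (1.0 - mask) * background

# Tiny 2x2 RGB example: "person" pixels keep the original frame,
# background pixels take the replacement color.
frame = np.full((2, 2, 3), 200.0)       # bright foreground frame
background = np.zeros((2, 2, 3))        # black replacement background
mask = np.array([[1.0, 0.0],
                 [0.0, 1.0]])           # diagonal pixels belong to the person
out = composite_background(frame, mask, background)
print(out[0, 0, 0], out[0, 1, 0])  # → 200.0 0.0
```

In a real-time system the expensive part is producing `mask` itself on every frame with a neural network; the blend above is cheap by comparison, and a soft (fractional) mask gives smoother edges than a hard binary one.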
New Feature Is in Beta Testing
Some content creators will be able to test the feature in the YouTube app for a limited period of time. For now, the new tool has rolled out only for the site’s new “stories” video format, which mimics similar features on Instagram and Snapchat.
Vloggers who tested the segmentation tool noted that the feature is not perfect, but what it can do in beta is quite impressive. They believe that it can match the quality of similar tools that create facial overlays and masks for photographs.
The company announced that the technology will initially be tested with a select set of effects. Google plans to integrate the tool into its line of augmented reality services, and the Pixel 2 already sports an AI-based portrait mode for still images. So the rollout of the new feature for mobile video should come as no surprise.
Meanwhile, YouTube continues its campaign against misinformation. Last week, it disciplined some far-right vloggers for peddling conspiracy theories about the Florida school shooting. One of the content creators who saw his content banned had claimed that one of the shooting survivors, student David Hogg, was an actor.
Image Source: Flickr