Google’s video platform, YouTube, has said it will start removing AI-generated video content that mimics identifiable individuals beginning next year.
YouTube announced this in a policy update released on Tuesday. In particular, the company is seeking to protect music artists whose songs are being recreated with AI. However, removal will not be automatic, as YouTube said affected individuals or artists will have to request it.
The company said the new policy is based on feedback from its community, including creators, viewers, and artists, about the ways in which emerging technologies have been impacting them. It said this is especially true in cases where someone’s face or voice could be digitally generated without their permission or used to misrepresent their point of view.
Removals to begin in coming months
While YouTube did not specify the date the policy will come into effect, it indicated in the policy update that it will be implemented in the “coming months”, which suggests it could take effect at any time next year.
“So in the coming months, we’ll make it possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice, using our privacy request process.
“Not all content will be removed from YouTube, and we’ll consider a variety of factors when evaluating these requests. This could include whether the content is parody or satire, whether the person making the request can be uniquely identified, or whether it features a public official or well-known individual, in which case there may be a higher bar,” it said.
The video platform said it would also introduce the ability for its music partners to request the removal of AI-generated music content that mimics an artist’s unique singing or rapping voice.
“In determining whether to grant a removal request, we’ll consider factors such as whether content is the subject of news reporting, analysis or critique of the synthetic vocals. These removal requests will be available to labels or distributors who represent artists participating in YouTube’s early AI music experiments. We’ll continue to expand access to additional labels and distributors over the coming months,” the company said.
AI disclosure
Meanwhile, YouTube said it would soon require video makers to disclose when they have uploaded manipulated or synthetic content that looks realistic, including video created using artificial intelligence tools.
The policy update, which will also go into effect sometime in the new year, could apply to videos that use generative AI tools to realistically depict events that never happened, or show people saying or doing something they didn’t actually do.
“This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials,” Jennifer Flannery O’Connor and Emily Moxley, YouTube vice presidents of product management, said in the company’s blog post Tuesday.