Meta announced that beginning in May, it would label photos and videos made with AI across its platforms, making it easier to see which content is AI-generated and how it was created.
Meta will only delete content that was “created or altered by AI to make a person appear to say something they didn’t say,” as stated in a blog post by Meta’s VP of Content Policy, Monika Bickert.
The company says it worked with its Oversight Board to update its rules on AI-generated content. The old rules were written in 2020, before the current wave of AI-generated media.
“We agree with the Oversight Board’s recommendation that providing transparency and additional context is now the better way to address manipulated media and avoid the risk of unnecessarily restricting freedom of speech, so we’ll keep this content on our platforms so we can add labels and context,” Bickert said.
Instead of seeing less AI-generated content, we may start to see more on Threads, Instagram, and Facebook. Meta hopes these labels will provide more clarity and “additional context.” It says it may add “more prominent labels” if “transformed images, video, or audio create a particularly high risk of materially deceiving the public on a matter of importance.”
Even so, the company will still take down material that breaks any of its other rules, such as those covering voter fraud, bullying, harassment, violence, or incitement to violence. Content that its fact-checkers rate as false or altered will still be labeled and demoted in your feed.
According to Meta, it will begin labeling AI-generated content in May 2024 and, as of July, will stop removing content solely on the basis that it is AI-generated. Meta says this timeline also gives users time to understand the disclosure process before removals end.