What to Know: Soon, you may be able to tell at a glance whether a picture, video, or audio clip was generated by AI.
Distinguishing AI-generated content from human-made content is getting harder. Meta wants to make it easier to spot AI content with just a glance.
On Tuesday, Meta announced that it would begin adding “visible markers” and metadata to AI-generated content that appears on Facebook, Instagram, and Threads. This includes labeling images generated by AI tools from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock. For now, though, those labels will only appear on images; video and audio are a harder problem.
“While companies are starting to include signals in their image generators, they haven’t started including them in AI tools that generate audio and video at the same scale, so we can’t yet detect those signals and label this content from other companies,” the company explained in a blog post.
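The “signals” Meta refers to include industry-standard provenance metadata embedded in image files, such as the IPTC digital-source-type value for media produced by a generative model. Meta has not published its actual detection logic, so the following is only an illustrative sketch of how such a metadata marker could be spotted in a file's raw bytes; the function name and approach are assumptions for the example.

```python
# Hedged sketch: check whether an image file carries the IPTC
# "trainedAlgorithmicMedia" digital-source-type marker, which compliant
# AI image generators can embed (typically inside an XMP metadata packet).
# This is NOT Meta's implementation -- just a simple byte-level scan.

# Real IPTC NewsCodes URI identifying media created by a generative model.
AI_SOURCE_URI = (
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def has_ai_metadata_marker(image_bytes: bytes) -> bool:
    """Return True if the raw file bytes contain the IPTC AI-source URI."""
    return AI_SOURCE_URI in image_bytes

# Usage (hypothetical file):
# with open("photo.jpg", "rb") as f:
#     print(has_ai_metadata_marker(f.read()))
```

Note that a scan like this only works while the metadata is present; as the article goes on to say, such markers can be stripped, which is why Meta is also pursuing detection methods that do not rely on them.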
In an effort to combat this, Meta intends to require creators to label audio and video as AI-generated; failure to do so could result in “penalties.”
“If we determine that digitally created or altered image, video, or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context,” Meta wrote, describing those penalties.
Meta also says it is working on ways to automatically detect AI-generated content even when the creator has stripped out the invisible markers that identify it, as well as on ways to make those markers harder to remove.
Meta says these changes will roll out “in the coming months,” but it did not give a specific date.