With generative AI use on the rise, Meta is working to establish new standards for disclosing artificial intelligence in its apps. The new policies place more responsibility on users to declare when their content was made with AI, and introduce new systems for detecting AI use through technical means.
“We are creating industry-leading tools capable of identifying invisible markers at scale – specifically, the ‘AI-generated’ information from the C2PA and IPTC technical standards – to be able to label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans to add metadata to the images created by their tools,” says Nick Clegg, Meta’s Vice President of Global Affairs, in a blog post.
In theory, these technical detection measures will allow Meta and other platforms to label content created with generative AI wherever it appears so that all users are better informed about content created by artificial intelligence.
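As a rough illustration of what this kind of metadata-based detection can look like, the sketch below checks an image's XMP packet for the IPTC "Digital Source Type" marker that identifies fully AI-generated media. The IPTC vocabulary URI is a real published term; the sample XMP packet and function name are hypothetical, and production systems would parse metadata from actual image files rather than a string.

```python
# Minimal sketch: detect the IPTC "AI-generated" marker in an XMP metadata packet.
# The URI below is IPTC's published Digital Source Type term for media generated
# purely by a trained algorithmic model; everything else here is illustrative.
AI_GENERATED = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def is_labeled_ai_generated(xmp_packet: str) -> bool:
    """Return True if the XMP packet carries the IPTC AI-generated marker."""
    return AI_GENERATED in xmp_packet

# Hypothetical XMP packet, as an image generator might embed it:
sample_xmp = (
    '<x:xmpmeta xmlns:x="adobe:ns:meta/">'
    '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">'
    '<rdf:Description'
    ' xmlns:Iptc4xmpExt="http://iptc.org/std/Iptc4xmpExt/2008-02-29/"'
    ' Iptc4xmpExt:DigitalSourceType='
    '"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"/>'
    "</rdf:RDF></x:xmpmeta>"
)

print(is_labeled_ai_generated(sample_xmp))  # True
```

A check like this only works when the generating tool actually writes the marker, which is why the announcement hinges on Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock implementing their metadata plans.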
Among other things, the new measures should help reduce the spread of AI-generated misinformation (text, images, video, or audio), which could prevent, or at least make more difficult, situations like the one singer Taylor Swift experienced.