Meta Announces Major Policy Changes on Digitally Altered Media Ahead of U.S. Elections
Meta, the parent company of social media giants Facebook and Instagram, has announced major changes to its policies on digitally created and altered media ahead of the upcoming U.S. elections. The company will now start applying “Made with AI” labels to AI-generated videos, images, and audio posted on its platforms, expanding its previous policy that only addressed a narrow slice of doctored content.
In a blog post, Monika Bickert, Meta’s Vice President of Content Policy, said the company will also apply separate, more prominent labels to digitally altered media that poses a high risk of deceiving the public on important matters, regardless of whether AI was used to create it. The change shifts Meta’s approach from removing manipulated content to leaving it up while giving viewers information about how it was made.
The new labeling approach applies to content posted on Meta’s Facebook, Instagram, and Threads services; other services, such as WhatsApp and the Quest virtual reality headsets, are covered by different rules. The more prominent “high-risk” labels will be applied immediately.
These changes come as tech researchers warn of the potential impact of new generative AI technologies on the upcoming U.S. presidential election in November. Political campaigns have already begun using AI tools in countries like Indonesia, pushing the boundaries of guidelines set by companies like Meta and OpenAI, the leading provider of generative AI technology.
In February, Meta’s oversight board called the company’s existing rules on manipulated media “incoherent” after reviewing a video of U.S. President Joe Biden that had been altered to suggest inappropriate behavior. The board recommended extending the policy to non-AI content, audio-only content, and videos depicting actions that never occurred, since these can be just as misleading as AI-generated material.
With these policy changes, Meta aims to address the growing challenge of deceptive content on its platforms and provide users with more transparency about the origin of digitally altered media. As the use of AI tools continues to evolve, tech companies like Meta will need to adapt their policies to ensure the integrity of information shared on their platforms.