Google Play tightens restrictions on AI apps following the spread of apps creating deepfake nude images

Google Issues New Guidelines for AI Apps on Google Play to Prevent Inappropriate Content

Google is taking a stand against inappropriate AI content on its platform, issuing new guidelines for developers building AI apps distributed through Google Play. The company is cracking down on apps that generate restricted content, such as sexual content and violence, and is requiring developers to offer a way for users to flag offensive content.

In addition, Google is clamping down on apps with marketing materials that promote inappropriate use cases, such as apps that undress people or create nonconsensual nude images. Apps that advertise these capabilities may be banned from Google Play, regardless of whether they can actually perform these actions.

The move comes in response to a surge in AI undressing apps that have been circulating on social media, with reports of students using AI deepfake nudes for bullying and harassment. Google’s new policies aim to keep out apps featuring harmful AI-generated content and provide a safer experience for users.

Developers are urged to rigorously test their AI tools and models to ensure user safety and privacy, and to adhere to Google’s App Promotion requirements. The company is also offering resources and best practices, such as the People + AI Guidebook, to support developers in building responsible AI apps.

Overall, Google’s efforts to curb inappropriate AI content on its platform demonstrate a commitment to user safety and privacy in the rapidly evolving world of artificial intelligence.

LEAVE A REPLY

Please enter your comment!
Please enter your name here