Meta’s Facebook and Instagram ads promoting AI apps for creating sexual “deepfake” images without consent still targeting Australians
Facebook and Instagram advertisements promoting AI apps that generate sexual “deepfake” images of people without their consent are still running in Australia, despite repeated concerns being raised with Meta.
As the Senate prepares to consider a law criminalizing the creation of digitally generated sexually explicit content without consent, Meta, the parent company of Facebook and Instagram, is profiting from advertisements for apps designed to do just that.
The company’s Ad Library contains numerous examples of advertisers promoting apps with copy and imagery suggesting they can be used to generate images of people without their clothes, or to face-swap people into not-safe-for-work (NSFW) content.
In some instances, advertisers have attempted to evade detection by disguising the true nature of their apps. For example, an advertisement’s image may tout the app’s ability to “remove clothes from photos” while its caption promotes it as a “writing assistant.” Despite these tactics, the ads can still be easily found by searching for specific terms.
These advertisements, placed from overseas but targeted at Australian users, direct people to Apple iOS App Store listings that do not overtly promote the apps’ use for creating non-consensual sexual content, likely to avoid breaching the App Store’s guidelines and being removed.
While Meta’s policies prohibit posts or ads that solicit sexual content or promote non-consensual sexual imagery, the company has repeatedly allowed its advertising platform to promote these controversial applications to users.
Despite previous reports highlighting these issues, Meta continues to struggle with effectively moderating its advertising platform. A recent investigation by Crikey also uncovered advertisements selling drugs, guns, and even a monkey to Australian users.
The prevalence of deepfake images has become a growing concern, with the eSafety commissioner noting that her office commonly receives reports of such content. With thousands of these applications now available, the risk of deepfake images being created and distributed without consent continues to grow.
Meta declined to comment, leaving many to question the company’s commitment to addressing these issues. As the debate around regulating deepfake technology intensifies, the need for effective oversight and accountability from tech companies like Meta becomes increasingly urgent.