New Google Play guidelines are putting the cuffs on generative AI apps offering dubious tools, such as deepfake “undressing” apps and those producing graphic content.
The updated app store policy, announced Thursday, instructs developers of generative AI apps to build in precautions against offensive content, “including prohibited content listed under Google Play’s Inappropriate Content policies, content that may exploit or abuse children, and content that can deceive users or enable dishonest behaviors.”
Developers must also offer in-app flagging and reporting mechanisms for users stumbling across inappropriate content and “rigorously test” their AI models, TechCrunch reported.
The rules apply to apps that produce AI-generated content in “any combination of text, voice, and image prompt input.” This includes chatbots, image generators, and audio-spoofing apps using generative AI. The policies do not apply to apps that “merely host” AI content or those with AI “productivity tools,” such as summarizing features.
In May, Google announced it was devaluing AI-generated (or “synthetic”) porn results in its internal search rankings, attempting to address a growing problem of nonconsensual deepfake pornography. The company also banned advertising for websites that create, endorse, or compare deepfake pornography.
The move came after a wave of viral, celebrity-centric deepfakes circulated on X and Meta platforms, including graphic advertisements for an AI-powered undressing app that featured underage photos of actor Jenna Ortega. Google was already fielding thousands of complaints from victims of nonconsensual, sexualized deepfakes, many of whom filed Digital Millennium Copyright Act (DMCA) claims against websites hosting their likenesses.
AI industry insiders have issued multiple warnings about the threat of misinformation and the nonconsensual use of people’s likenesses, including a recent open letter penned by OpenAI and Google DeepMind employees. The group noted the potential risk of “manipulation and misinformation” should AI advancements continue without regulation.
Google’s app store regulations follow a White House AI directive issued to tech companies last month. The announcement called on industry leaders to do more to prohibit the spread of deepfakes, with Google specifically heeding a call to curb apps that “create, facilitate, monetize, or disseminate image-based sexual abuse.” If you have been a victim of deepfake abuse, there are steps you can take; read more about how to get support.