Google is getting ahead of potential problems with generative AI apps by introducing a new policy that takes effect early next year.
The policy requires Android developers who publish apps on the Play Store to include features that let users report or flag offensive AI-generated content.
Google says the reporting and flagging flow should be accessible directly within the app, and it encourages developers to use these reports to inform their content filtering and moderation practices.
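The policy doesn't prescribe how the reporting feature must be built. As a rough illustration only, here is a minimal Kotlin sketch of what an in-app flagging flow might look like; every name in it (AiContentReport, ModerationBackend, ReportFlow) is hypothetical, not a Google-provided API.

```kotlin
// Hypothetical sketch of an in-app reporting flow for AI-generated content.
// The Play policy does not prescribe an API; all names here are illustrative.

data class AiContentReport(
    val contentId: String,    // identifier for the generated output being flagged
    val reason: String,       // e.g. "offensive", "sexual", "misleading"
    val userComment: String?  // optional free-text detail from the user
)

interface ModerationBackend {
    // Developers would route reports into their own moderation pipeline.
    suspend fun submit(report: AiContentReport)
}

class ReportFlow(private val backend: ModerationBackend) {
    // Called when the user taps a "Report" control next to a piece of AI output.
    suspend fun flagContent(contentId: String, reason: String, comment: String? = null) {
        backend.submit(AiContentReport(contentId, reason, comment))
        // Reports like these could then feed back into prompt filters and
        // moderation rules, as the policy encourages.
    }
}
```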
The update comes in response to a surge of generative AI apps, some of which users have manipulated to create not-safe-for-work (NSFW) content, as happened with the Lensa app last year.
There are subtler problems, too. The Remini app, which went viral this summer for its AI headshots, was found to be altering women's appearance, enlarging their breasts or cleavage and slimming their figures.
More recently, users found ways around the safeguards in AI tools from Microsoft and Meta, producing images such as a pregnant Sonic the Hedgehog or fictional characters depicted in the 9/11 attacks.
Beyond these issues, there are grave concerns regarding the use of AI image generators, especially after pedophiles were found using open-source AI tools to mass-produce child sexual abuse material (CSAM).
With elections on the horizon, there are also fears that AI could be used to create deepfake images designed to mislead and manipulate voters.
The new policy explicitly notes that AI-generated content can range from “text-to-text conversational generative AI chatbots,” which would include apps similar to ChatGPT, to apps that generate images from text, image, or voice prompts.