Snap, the company behind Snapchat, has come under scrutiny from the UK’s data protection authority over children’s privacy risks associated with its AI chatbot.
The Information Commissioner’s Office (ICO) has issued a preliminary enforcement notice to Snap, addressing what it deems a “potential failure” to adequately evaluate the privacy dangers of its AI chatbot “My AI”.
While the ICO’s action does not establish a violation, it signals that the regulator has concerns about Snap’s adherence to data protection rules, in particular the Children’s Code (also known as the Age Appropriate Design Code), which came into force in 2021.
The ICO’s preliminary findings suggest that Snap’s risk assessment, conducted prior to launching “My AI”, insufficiently evaluated the data protection hazards presented by the generative AI technology, especially for children aged 13 to 17.
Snap, which launched the chatbot (powered by OpenAI’s ChatGPT) in February and made it available in the UK in April, will have the opportunity to address the ICO’s concerns before a final decision is made regarding compliance.
“My AI” was initially exclusive to Snapchat+ subscribers but was subsequently rolled out to free users, with the AI even able to send snaps back in reply.
Snap asserts that the chatbot incorporates moderation and safeguarding features, including taking a user’s age into account by default, so that generated content remains age-appropriate and avoids violent, hateful, sexually explicit, or otherwise offensive responses.
Additionally, Snap offers parental safeguarding tools through its Family Center feature, which lets parents see whether their child has been interacting with the bot.