
Snap AI Chatbot Facing Potential Enforcement Action After Investigation of Children’s Privacy Complaints

The UK ICO has wrapped up a preliminary investigation into Snap’s AI chatbot and has indicated that Snap is failing to adequately address children’s privacy risks. The provisional findings do not include an enforcement action, but one could follow if Snap is unresponsive to the preliminary enforcement notice and the agency finds that data protection law has been breached.

Snap conducted a required risk assessment before launching its “My AI” chatbot, but the UK ICO found that the assessment did not fully address children’s privacy concerns. The new feature first launched in February of this year but was rolled out in the UK in April, where it is expected to reach about 21 million users in total.

Complaints indicate Snap’s AI chatbot does not stay within safety guardrails

Snap’s AI chatbot is focused on answering questions and offering advice to users about everyday situations, things like gift suggestions and ideas for food pairings. Users can also send photos to the bot as they would to other people on Snapchat, and it can use AI to generate images to send back in response. Though initially only available to Snapchat+ premium subscribers, the AI chatbot now sits at the top of the feeds of all users.

All of this is backed by OpenAI’s ChatGPT and GPT-4, and should be subject to the same sort of extensive “safety guardrails” that limit prompts and responses to filter out potentially harmful material and protect children’s privacy. A series of complaints from UK users indicates that this is not the case. The exact nature of these complaints remains unclear, but previous reporting has found that the AI chatbot would advise users on sexual topics and on how to mask the smell of alcohol even after they had identified themselves as minors.

Users who are determined to have fun with the AI chatbot have also managed to convince it to accept and repeat things that are not true, to adopt submissive or sexualized personas and interact with the user in that way, and to make violent suggestions. The chatbot has also been drawn into a broader cultural debate about gender-affirming care and children, with some users noting that it will advise minors to seek out hormone therapy or gender-altering surgery when asked questions of that nature.

While Snapchat faces no enforcement action at this time, follow-up action could go as far as a ban on operating in the UK if the concerns about the AI chatbot and children’s privacy were to remain unaddressed. Snapchat is already facing other issues involving underage users in the country, as it is currently under investigation by UK regulators over its more general practices for screening out underage users and removing them from the platform. Snapchat requires that users be at least 13 to have an account, but there is almost nothing stopping underage users from getting on the platform. UK regulator Ofcom estimates that there are thousands of users under the age of 13, but that Snapchat has thus far removed only “several dozen.”

Snap has responded to the notice by saying that the AI chatbot was subject to a “robust” legal and privacy review process before deployment and that it is committed to protecting user privacy.

Children’s privacy taking center stage in early crackdown on AI tools

Children’s privacy rights have been given a boost in recent years with the introduction of the Age Appropriate Design Code (also known as the Children’s Code) to UK law, a measure that came into force in 2021 and establishes 15 standards that online platforms and services must meet. These include documenting the personal data collected from underage users, ensuring that geolocation is turned off by default for these users, and not “nudging” children to provide optional personal information.

Snapchat has already made some voluntary privacy tweaks in response to abuse of its AI chatbot. It has added an age filter and parental controls to My AI, including the ability for parents to be notified when a child uses it. However, it still stores all conversations unless the user manually deletes them; the only change to that policy thus far has been a more prominent warning to users about the practice.

There are numerous unresolved concerns about AI chatbots, but children’s privacy seems to have driven much of the early action from regulators. Concern about minors was at the center of Italy’s temporary ban of ChatGPT, and the country also slapped data processing restrictions on the chatbot Replika for similar reasons. The ChatGPT ban also kicked off numerous similar investigations across the EU bloc.

All of this comes amidst an explosion in AI chatbots; Meta alone is slated to release dozens in the near future, each built around a licensed celebrity likeness or some sort of theming.