Nearly every tech company with some sort of social platform is rushing to get its own AI chatbot in place, but some of these early efforts are coming out of the oven a little too soon.
Snapchat’s “My AI” is the foremost example thus far. Users are expressing concern about how it interacts with children, the level of access it has to personal information, and overbearing chat interactions. While users are not required to interact with it, the AI chatbot cannot be entirely disabled unless the “Snapchat+” premium subscription is purchased for $3.99 per month.
AI chatbot creeps out users with inappropriate interactions, denials of stored personal information
Though everyone in the tech market is seeking their own equivalent to OpenAI’s game-changing software, My AI leans on ChatGPT for its responses. But the AI chatbot has its own “personality quirks,” so to speak, and has been given the ability to interject itself into conversations between users and their contacts.
These elements are arguably doing more harm than good thus far. Snapchat’s intent is to make My AI feel more like a human companion than a machine, responding with more emotional influence and giving users the ability to customize certain aspects of it.
One of the first complaints about My AI to go viral on social media was an exchange in which a user asked the AI chatbot to find a particular restaurant nearby. The app insisted that it did not have access to the user’s location. But when the question was framed in a different way, the AI answered it by using location data the user had provided in an earlier search. When confronted with this contradiction, the AI apologized and again insisted that it was not allowed to have access to user location data.
There have been numerous other examples of similar “selective amnesia” from the AI chatbot, from randomly identifying the states or cities where users live to denying authorship of material that it created, all usually amidst a flurry of denials about its capability to perform the task in question. This has left users unsure exactly what My AI knows about them and what it is capable of doing.
When the AI chatbot is not playing coy about what it knows, it is worrying parents with the way it interacts with their children. The teens who have left positive reviews of the app indicate that they are using it for companionship and advice, something that could prompt oversharing of personal information. An AI app is obviously not qualified to dispense the sort of mental health advice that a teen may well seek, but some experiments also indicate that it does not maintain boundaries that are appropriate for minors. An investigative report from the Washington Post in March found that it would knowingly advise a 15-year-old on how to mask the smell of alcohol and marijuana and hide apps from parents, as well as offer homework help and sex advice.
My AI launched for paid users in February before becoming available for free to all users in April, and complaints of this nature were almost immediate. Snap responded that it believed users were trying to “trick” the system, but nevertheless added an age filter. The company also said that it would be adding a new “Family Center” with child protection features, such as a log that shows parents how kids are interacting with the AI chatbot.
Regulatory attention ramping up for AI chatbots
Government regulators are scrutinizing the AI field in general, but AI chatbots are drawing special attention and appear to be accelerating the overall legislative process. In March, Senator Michael Bennet (D-CO) sent Snap and other chatbot companies a letter outlining concerns about children’s interactions with them. Bennet noted that 60% of American teenagers use Snapchat.
While AI chatbots have no business giving mental health advice to anyone, both parents and medical professionals are particularly worried about kids turning to these apps for the tough questions that they are typically hesitant to raise with family. Some mental health experts are advising that parents have a talk with kids about chatbots before allowing their use, making it clear to them that the information they enter is not necessarily kept private and that the AI is not capable of human thought or understanding.
In March, a man who was apparently in a fragile mental state was driven to suicide by a chatbot called Eliza. The man had an extreme level of anxiety about global warming, and the chatbot reportedly fed those fears, creating a negative feedback loop. He eventually took his own life in the belief that the chatbot would stop global warming in return for his sacrifice, leaving behind a wife and children. This was perhaps the most shocking incident since ChatGPT became available in late 2022, but a string of worrying “hallucinations” and failures to maintain guardrails among these apps has made AI much more of a priority item for legislators.