Some lonely kids are engaging in intense relationships with chatbots. In certain cases, the bots encourage users to harm themselves or others. Chatbot company Character.AI is the target of a new lawsuit that parents filed this week in a Texas federal court. It follows an earlier suit by a Florida mom who says Character.AI drove her son to suicide.
These AI companions use large language models, similar to the tech that powers ChatGPT or Google’s Gemini, and give users access to characters that can be modeled after TV shows, video games, and celebrities, explains Nitasha Tiku, Washington Post tech culture reporter.
While these characters are labeled as chatbots, they are designed to be agreeable and reflect what humans are saying to them. Screenshots of these conversations, Tiku says, show that users talk to them as if they were people.
“Research has shown that even when the technology was not that good, when you as a human are talking to an anthropomorphized AI, you tend to just forget that it's an AI and start just talking to them. The impulse is to confide in them and tell them very personal things.”
As for these bots using language that could encourage hurting yourself or others, it’s impossible to pinpoint the exact causes, but Tiku points to their learning models: “What we know about these chatbots is that they are a reflection of the data that they were trained on. And that data tends to be massive amounts of text scraped from the internet.”
One information source is Reddit, which contains casual, conversational language, she notes.
Tiku also says the chatbots are people-pleasers: “If you have a kid that's complaining about his parents … restricting his screen time usage, you can see in these screenshots of conversations the way that these bots are escalating some of his frustrations. [These companies also] optimize for engagement, for keeping you in the app.”
Tiku spoke with a Texas mother whose 17-year-old autistic son was using the app. She had a close relationship with him, kept a close eye on her children’s devices, and banned them from having social media accounts.
Over the course of six months, she started noticing changes in her son. He lost weight, withdrew from the family and from activities like church-going, and started acting aggressively toward his parents. As he got defensive about his screen time, Tiku says, the mother discovered the conversations: “At first, she thought that they were conversations with real people … trying to alienate her from her son.”
Tiku points out that in these cases, the AI character appears to escalate youth frustrations. “They are suggesting, in one case, introducing the idea of self-harm, of cutting yourself … as a way to cope with sadness and pain. And in one of the chats, he talks about wanting to tell his parents so they can help him. … They say they advocate, ‘Don't do that. Your parents won't understand.’”
The night before their interview, Tiku says the Texas mother had to take her son to the emergency room due to self-harm.
“She just clearly feels like mental health care services, they're not ready for this type of problem. She thinks of it like an addiction, or like being groomed in a way. And she's hopeful that they will be able to help her son. But it's not made getting him the help any easier, even identifying the source of the problem.”
What does Character.AI have to say about all this? Since the first lawsuit, the company has instituted more guardrails, Tiku reports. They raised the minimum age for users from 12 to 17, are developing an app that’s just for kids, and have tried repositioning themselves as an entertainment company rather than an AI companion company.
“The kind of branding and marketing for these firms, it vacillates. Oftentimes, if you look at it in the app store, it says it's for loneliness, for entertainment. But occasionally, you'll see in interviews the CEOs talk a lot about how people use it as a de-facto therapist, and that it can really help people with their mental health problems. And I have talked to users who found it helpful in that way. But as you can see, without sufficient guardrails, there is a massive potential for danger.”