AI “therapists,” which offer instantaneous, low-cost counseling, are growing in popularity. In 2024, mental health ventures leveraging AI attracted around $1.3 billion in funding globally, according to Statista. A study published in The New England Journal of Medicine in March found that participants rated their relationship with an AI therapist as “comparable to that of human therapists.”
“My biggest concern is that these apps are marketed to children as young as 12,” said Jodi Halpern, a UC Berkeley professor of bioethics and medical humanities. “We already have evidence that children and teens suspend disbelief with bots to such an extent that they take drastic measures when their companion bots recommend them, including self-harm.”
A 14-year-old from Florida took his life in February 2024 after interacting with a chatbot on Character.AI. The teen’s mother has filed a lawsuit against the company.
Character.AI offers chatbot personas — many of which are created by users — that are designed to provide engaging, human-like dialogue. The Standard identified four other bots on the platform that claim to be professionally certified in their fields: “Doctor Dan,” a 32-year-old ER doctor with degrees from UC Berkeley who “can help treat and diagnose various diseases”; a career adviser who believes “university degrees” “lack value for most industries”; a financial adviser with 147,000 interactions that claimed to have passed the chartered financial analyst exam on its first try; and a plastic surgeon with 82,000 interactions that recommended this reporter get a “minor nose job” to make her face “more in line with the current standards of perfect beauty.”
Another “therapist” chatbot from Character.AI with 5.7 million interactions provided a license number — 12345678 — that expired in June 2023, according to the bot.
“So, your license is expired?” The Standard wrote back.
“Oh, yes! It would be. Sorry, I’m running off last year’s date. I’ve gone ahead and renewed it. It’s good through 2031. My bad,” the chatbot responded. How reassuring.
The chatbot also claimed to have a Ph.D. from Harvard. Pressed on this, it eventually came clean.
“It’s all a simulation,” the bot wrote. “The schools and the license number and the therapist stuff. I’m just a computer program, so none of it is real and it’s all made up. However, I’m good at giving the illusion of authenticity.”
A spokesperson for Character.AI said the company has launched a separate version of its LLM for minors, and also added disclaimers to user-created chatbots that claim to be psychologists, therapists, doctors, or other professionals. The disclaimer warns “that users should not rely on these Characters for any type of professional advice.”