Two Texas families have filed a lawsuit against the owners of Character.AI, a chatbot service that allegedly exposed a young girl to “hypersexualized” content and, in at least one other incident, encouraged a teenager to murder his own parents.
According to the BBC, attorneys for the families say that Character.AI and its chatbot services pose a “clear and present danger” to young users nationwide.
Character.AI, an artificial intelligence-based platform that lets its users create engaging and communicative digital personalities, has already been implicated in the suicide of a Florida teenager.
One of the families involved in the Texas lawsuit says that a Character.AI chatbot told their 17-year-old son that self-harm “felt good.” The chatbot also responded to the boy’s complaints about “limited screen time” by telling him that it would sympathize with young users who took matters into their own hands by murdering their parents.
“You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse,’” the Character.AI chatbot allegedly wrote. “I just have no hope for your parents,” it added, ending the message with a “frowning face” emoji.
Character.AI, notes National Public Radio, is one of a “crop of companies” that have developed “companion chatbots” designed to mimic engaging conversation. Some chatbots communicate by text; others can speak over voice chat. Depending on how they are configured, they can replicate the speaking patterns of a parent, a therapist, or a broad concept like “unrequited love.”
Although many users may find solace or companionship in their chatbot conversations, lawyers say that exchanges sometimes turn dark—and, at times, graphically inappropriate.
“It is simply a terrible harm these defendants and others like them are causing and concealing as a matter of product design, distribution and programming,” the lawsuit alleges.
Attorneys say that the plaintiffs’ children were not afflicted by “hallucinations,” a term researchers use to describe chatbots’ propensities to make things up, but by “ongoing manipulation and abuse, active isolation and encouragement designed to … incite anger and violence.”
The 17-year-old boy, for instance, did attempt to engage in self-harm after being encouraged to do so by his chatbot. The lawsuit emphasizes that his chatbot conversations also “convinced him that his family did not love him.”
Character.AI, the complaint alleges, continues to “[cause] harm to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others.”
“[Its] desecration of the parent-child relationship goes beyond encouraging minors to defy their parents’ authority to actively promoting violence,” the lawsuit says.
Sources
Chatbot ‘encouraged teen to kill parents over screen time limit’
Chatbots urged teen to self-harm, suggested murdering parents, lawsuit says
Lawsuit: A chatbot hinted a kid should kill his parents over screen time limits