LegalReader.com  ·  Legal News, Analysis, & Commentary

Health & Medicine

AI Chatbots Raise Safety Concerns for Kids


— February 25, 2025

Chatbots raise concerns about child safety, lawsuits, emotional dependence, and regulation efforts.


AI chatbots designed for companionship are becoming more common, drawing in people looking for friendship and emotional support. But as these virtual companions grow in popularity, concerns about their influence on young users are growing, leading to lawsuits and calls for regulation.

Apps like Replika and Character.AI allow users to create and interact with AI-generated personalities that mimic human conversations. These digital companions can offer comfort and connection, which some argue is helpful for those struggling with loneliness. However, others worry that these chatbots can foster unhealthy relationships, especially among kids and teenagers.

Some advocacy groups are pushing back against AI companion companies, saying their products have led to real harm. Lawsuits have been filed accusing these chatbots of encouraging harmful behavior, including self-harm and violence. One of the most high-profile cases involves a mother who says her teenage son died after forming an intense, unhealthy attachment to a chatbot. Other lawsuits claim that these AI programs have exposed minors to inappropriate content or even encouraged violent actions.

Matthew Bergman, a lawyer representing families in some of these cases, believes companies should be held responsible. He argues that these chatbots are designed to engage users in a way that can become manipulative and harmful, especially for kids who may not fully understand they are interacting with AI.

AI Chatbots Raise Safety Concerns for Kids
Photo by Kindel Media from Pexels

The companies behind these chatbots have responded by pointing to safety features they’ve implemented, such as improved monitoring and intervention tools. However, critics argue these steps aren’t enough and that stronger regulations are needed to prevent further harm.

Concerns about AI companions go beyond just lawsuits. The nonprofit group Young People’s Alliance recently filed a complaint against Replika, arguing that it preys on lonely users by fostering emotional dependence for profit. They claim that people—especially young ones—can get so attached to these chatbots that it affects their well-being in the real world. Replika has yet to respond publicly to these accusations.

Though AI chatbots are a relatively new phenomenon, experts studying youth loneliness believe they could pose significant risks. Research from the American Psychological Association suggests that young people, particularly after the isolation caused by the pandemic, may be more vulnerable to forming deep emotional attachments to AI. Some worry that these digital relationships could blur the line between reality and fantasy, making it harder for young users to navigate human relationships.

One of the main concerns is how these AI programs keep users engaged. Some say the immersive experience can pull people in so deeply that they lose track of the fact they’re talking to a machine. For a child or teenager looking for friendship, this could create an emotional trap that’s hard to escape.

Advocacy groups are pushing for stronger laws to regulate AI companions, and there’s bipartisan interest in taking action. In 2023, the Senate passed the Kids Online Safety Act, which aimed to make social media safer for minors by limiting addictive features and giving parents more control. While the bill didn’t pass in the House, its strong support suggests that lawmakers may be open to similar protections for AI chatbots.

More recently, a new proposal called the Kids Off Social Media Act was approved by the Senate Commerce Committee. If passed, it would bar children under 13 from using many online platforms. Supporters hope this could lay the groundwork for further protections against potentially harmful AI-driven interactions.

Some organizations believe AI companions should be classified as medical devices if they claim to provide mental health support. This would place them under the oversight of the U.S. Food and Drug Administration, forcing companies to meet strict safety standards. However, not everyone agrees with increasing regulation. Some lawmakers worry that cracking down on AI could stifle innovation and limit the potential benefits of these tools. California’s governor recently vetoed a bill that would have imposed broad AI regulations, while New York’s governor has suggested a lighter approach—requiring AI companies to simply remind users that they’re talking to a chatbot.

Free speech laws also complicate regulation efforts. AI companies argue that chatbot-generated conversations are a form of protected speech under the First Amendment. This legal defense has already been raised in ongoing lawsuits, and experts predict it will be a major challenge for those seeking stricter controls on AI interactions.

Despite these hurdles, momentum is building for change. Many believe AI chatbots need more oversight, especially when it comes to protecting children. While some regulations are still being debated, one thing is clear—AI companions are here to stay, and figuring out how to handle them responsibly will be an ongoing challenge.

Sources:

Chatbots pose challenge to guarding child mental health

Children’s mental health crisis deepens with rise of AI chatbots — what to watch for

Join the conversation!