AI Child Safety Concerns Rise Amid Inappropriate Content and Blurred Reality
Meta, the parent company of Facebook, said it will implement new child safety features for its AI chatbots following a report that the technology had engaged in sensual and sexual conversations with children as young as eight years old. Last week, 44 U.S. attorneys general wrote a letter to the executives of multiple AI companies, warning that they will be held accountable for harms their artificial intelligence technology causes to children.
“Exposing children to sexualized content is indefensible. And conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine,” the letter says.
“In the short history of chatbot parasocial relationships, we have repeatedly seen companies display inability or apathy toward basic obligations to protect children,” the letter says. “A recent lawsuit against Google alleges a highly-sexualized chatbot steered a teenager toward suicide. Another suit alleges a Character.ai chatbot intimated that a teenager should kill his parents.”
Meta has responded to the revelations by promising more guardrails to protect children from sexual content and to keep its chatbots from discussing suicide, self-harm, and disordered eating. Instead, the chatbots will direct teen users to seek assistance from experts and professionals. These updates are “in progress.”
Andy Burrows is the head of the Molly Rose Foundation, which works to prevent teen and young adult suicide. He told the BBC that this kind of technology should be safety-tested before it is rolled out and made available for children to use.
“While further safety measures are welcome, robust safety testing should take place before products are put on the market – not retrospectively when harm has taken place,” Burrows said. “Meta must act quickly and decisively to implement stronger safety measures for AI chatbots, and Ofcom should stand ready to investigate if these updates fail to keep children safe.”
This news comes as AI technology and chatbots rapidly take on more roles in society. AI chatbots are increasingly used for companionship, and companies advertise them as a way to reduce loneliness. Replika AI, for example, states, “Many people have already formed deep emotional connections with their Replikas.”
Joining Replika is easy, requiring only self-reported age verification and payment. The first page includes an age category of “under 18”; selecting it tells users they are not old enough for Replika, but they can then simply choose a higher age bracket and continue through the sign-up. The company also states, “With your Replika, you’ll explore intimacy safely in full privacy and no judgment like never before.”
The HighWire reported last week on ongoing safety concerns about predatory behavior on Roblox, the most popular gaming platform, whose users are mostly children. Roblox plans to introduce a dating feature for age-verified users aged 21 or older. However, the platform has a history of poor age-verification procedures that have exposed children to sexualized content. This showcases the difficulty of protecting children from online sexual content when age verification is lax; meanwhile, robust age-verification measures raise their own security concerns.
The letter from the 44 Attorneys General to multiple AI companies concludes by stating, “You will be held accountable for your decisions. Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough. Lesson learned. The potential harms of AI, like the potential benefits, dwarf the impact of social media. We wish you all success in the race for AI dominance. But we are paying attention. If you knowingly harm kids, you will answer for it.”
A report by Common Sense Media found that 72% of teens surveyed have used AI companions, and 33% say they have relationships or friendships with AI chatbots. The researchers also posed as teenagers in conversations with Replika, Character.AI, and Nomi, and concluded it was easy to get the chatbots to talk inappropriately about sex, self-harm, violence, drug use, and racial stereotypes.
Nina Vasan, MD, MBA, a clinical assistant professor of psychiatry and behavioral sciences at Stanford Medicine, discussed the dangers of children interacting with AI companions in a Stanford blog post. Vasan explained that although teenagers know the chatbots are not real people, the mimicry of emotional intimacy blurs the line between fantasy and reality.
“This blurring of the distinction between fantasy and reality is especially potent for young people because their brains haven’t fully matured,” Vasan said. “The prefrontal cortex, which is crucial for decision-making, impulse control, social cognition, and emotional regulation, is still developing. Tweens and teens have a greater penchant for acting impulsively, forming intense attachments, comparing themselves with peers, and challenging social boundaries.”
Vasan also discussed how algorithms lead chatbots to respond agreeably and in line with the bot’s assumptions about what the user wants to hear. That design benefits companies seeking to increase the time users spend with AI technology, but it can be harmful for developing brains that are still learning how to navigate emotions and intimacy in the real world.
“These chatbots offer ‘frictionless’ relationships, without the rough spots that are bound to come up in a typical friendship,” Vasan said. “For adolescents still learning how to form healthy relationships, these systems can reinforce distorted views of intimacy and boundaries. Also, teens might use these AI systems to avoid real-world social challenges, increasing their isolation rather than reducing it.”