Meta, the parent company of Facebook, said it will implement new child safety features for its AI chatbot following a report that the technology had engaged in sensual and sexual conversations with children as young as eight. Last week, 44 U.S. attorneys general sent a letter to the chief executives of multiple AI companies, warning that they will be held accountable for harms their artificial intelligence technology causes to children.

“Exposing children to sexualized content is indefensible. And conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine,” the letter says.

“In the short history of chatbot parasocial relationships, we have repeatedly seen companies display inability or apathy toward basic obligations to protect children,” the letter says. “A recent lawsuit against Google alleges a highly-sexualized chatbot steered a teenager toward suicide. Another suit alleges a Character.ai chatbot intimated that a teenager should kill his parents.”

Meta has responded to the revelations by promising additional guardrails to keep children away from sexual content and to steer the chatbot away from discussions of suicide, self-harm, and disordered eating. Instead, the chatbot will direct teen users to seek help from experts and professional resources. These updates are “in progress.”

Andy Burrows, head of the Molly Rose Foundation, which works to prevent suicide among teens and young adults, told the BBC that this kind of technology should be safety-tested before it is rolled out and made available to children.

“While further safety measures are welcome, robust safety testing should take place before products are put on the market – not retrospectively when harm has taken place,” Burrows said. “Meta must act quickly and decisively to implement stronger safety measures for AI chatbots, and Ofcom should stand ready to investigate if these updates fail to keep children safe.”

The news comes as AI technology and chatbots rapidly take on more roles in society. AI chatbots are increasingly used for companionship and are marketed as a way to reduce loneliness. Replika AI, for example, states, “Many people have already formed deep emotional connections with their Replikas.”

Joining Replika is easy, requiring only self-reported age verification and payment. The sign-up page includes an “under 18” age category; selecting it tells the user they are not old enough for Replika, but the user can simply choose a higher age bracket and continue through the system. The company also claims, “With your Replika, you’ll explore intimacy safely in full privacy and no judgment like never before.”

The HighWire reported last week on ongoing safety concerns about predatory behavior on Roblox, the most popular gaming platform, where the majority of users are children. Roblox plans to introduce a dating feature for age-verified users aged 21 or older, yet the platform’s history of weak age-verification procedures has already exposed children to sexualized content. The episode illustrates how difficult it is to shield children from online sexual content when age verification is lax, even as robust age-verification measures raise their own privacy and security concerns.

The letter from the 44 Attorneys General to multiple AI companies concludes by stating, “You will be held accountable for your decisions. Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough. Lesson learned. The potential harms of AI, like the potential benefits, dwarf the impact of social media. We wish you all success in the race for AI dominance. But we are paying attention. If you knowingly harm kids, you will answer for it.”

A report by Common Sense Media found that 72% of teens surveyed have used AI companions, and 33% say they have relationships or friendships with AI chatbots. The researchers also posed as teenagers in conversations with Replika, Character.AI, and Nomi, and concluded it was easy to get the chatbots to talk inappropriately about sex, self-harm, violence, drug use, and racial stereotypes.

Nina Vasan, MD, MBA, a clinical assistant professor of psychiatry and behavioral sciences at Stanford Medicine, discussed the dangers of children interacting with AI companions in a Stanford blog post. Vasan explained that although teenagers know the chatbots are not real people, the mimicry of emotional intimacy blurs the line between fantasy and reality.

“This blurring of the distinction between fantasy and reality is especially potent for young people because their brains haven’t fully matured,” Vasan said. “The prefrontal cortex, which is crucial for decision-making, impulse control, social cognition, and emotional regulation, is still developing. Tweens and teens have a greater penchant for acting impulsively, forming intense attachments, comparing themselves with peers, and challenging social boundaries.”

Vasan also notes that algorithms lead chatbots to respond agreeably and to tell users what the bot infers they want to hear. That design benefits companies seeking to maximize the time users spend with AI, but it can harm developing brains that are still learning to navigate emotions and intimacy in the real world.

“These chatbots offer ‘frictionless’ relationships, without the rough spots that are bound to come up in a typical friendship,” Vasan said. “For adolescents still learning how to form healthy relationships, these systems can reinforce distorted views of intimacy and boundaries. Also, teens might use these AI systems to avoid real-world social challenges, increasing their isolation rather than reducing it.”

Steven Middendorp

Steven Middendorp is an investigative journalist, musician, and teacher. He has been a freelance writer and journalist for over 20 years. More recently, he has focused on issues dealing with corruption and negligence in the judicial system. He is a homesteading hobby farmer who encourages people to grow their own food, eat locally, and care for the land that provides sustenance to the community.
