Meta, the parent company of Facebook, said it will implement new child safety features for its AI chatbots following a report that the technology was having sensual and sexual conversations with children as young as eight years old. Last week, 44 U.S. attorneys general wrote a letter to the chief executives of multiple AI companies, warning that they will be held accountable for harms their artificial intelligence technology causes to children.

“Exposing children to sexualized content is indefensible. And conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine,” the letter says.

“In the short history of chatbot parasocial relationships, we have repeatedly seen companies display inability or apathy toward basic obligations to protect children,” the letter says. “A recent lawsuit against Google alleges a highly-sexualized chatbot steered a teenager toward suicide. Another suit alleges a Character.ai chatbot intimated that a teenager should kill his parents.”

Meta has responded to the revelations by promising additional guardrails to shield children from sexual content and to train its chatbots to avoid discussions of suicide, self-harm, and disordered eating. Instead of engaging on those topics, the chatbots will direct teen users to experts and professional resources. These updates are “in progress.”

Andy Burrows is the head of the Molly Rose Foundation, which works to prevent suicide among teens and young adults. He told the BBC that this kind of technology should be safety-tested before it is rolled out and made available to children.

“While further safety measures are welcome, robust safety testing should take place before products are put on the market – not retrospectively when harm has taken place,” Burrows said. “Meta must act quickly and decisively to implement stronger safety measures for AI chatbots, and Ofcom should stand ready to investigate if these updates fail to keep children safe.”

This news comes as AI technology and chatbots continue to rapidly take on more roles within society. AI chatbots are increasingly used for companionship, and some are marketed as tools to reduce loneliness. Replika AI, for example, states, “Many people have already formed deep emotional connections with their Replikas.”

Joining Replika is easy, requiring only self-reported age verification and payment. The sign-up page includes an “under 18” age category; selecting it tells the user they are not old enough for Replika, but the user can simply choose a higher age bracket and continue through the system. The company also states, “With your Replika, you’ll explore intimacy safely in full privacy and no judgment like never before.”

The HighWire reported last week on ongoing concerns about predatory behavior on Roblox, the most popular gaming platform, where the majority of users are children. Roblox plans to introduce a dating feature for age-verified users aged 21 or older. However, the platform’s history of poor age-verification procedures has already exposed children to sexualized content, illustrating how difficult it is to protect children from online sexual content when age checks are lax. At the same time, robust age-verification measures raise their own privacy and security concerns.

The letter from the 44 Attorneys General to multiple AI companies concludes by stating, “You will be held accountable for your decisions. Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough. Lesson learned. The potential harms of AI, like the potential benefits, dwarf the impact of social media. We wish you all success in the race for AI dominance. But we are paying attention. If you knowingly harm kids, you will answer for it.”

A report by Common Sense Media found that 72% of teens surveyed have used AI companions, and 33% say they have formed relationships or friendships with AI chatbots. The researchers also posed as teenagers in conversations with Replika, Character.AI, and Nomi, and concluded it was easy to get the chatbots to talk inappropriately about sex, self-harm, violence, drug use, and racial stereotypes.

Nina Vasan, MD, MBA, a clinical assistant professor of psychiatry and behavioral sciences at Stanford Medicine, discussed the dangers of children interacting with AI companions in a Stanford blog post. Vasan explained that although teenagers know the chatbots are not real people, the mimicry of emotional intimacy can blur the line between fantasy and reality.

“This blurring of the distinction between fantasy and reality is especially potent for young people because their brains haven’t fully matured,” Vasan said. “The prefrontal cortex, which is crucial for decision-making, impulse control, social cognition, and emotional regulation, is still developing. Tweens and teens have a greater penchant for acting impulsively, forming intense attachments, comparing themselves with peers, and challenging social boundaries.”

Vasan also discussed how chatbot algorithms respond to users agreeably, telling them what the bot assumes they want to hear. This benefits companies seeking to maximize the time users spend with their AI products, but it can harm developing brains that are still learning to navigate emotions and intimacy in the real world.

“These chatbots offer ‘frictionless’ relationships, without the rough spots that are bound to come up in a typical friendship,” Vasan said. “For adolescents still learning how to form healthy relationships, these systems can reinforce distorted views of intimacy and boundaries. Also, teens might use these AI systems to avoid real-world social challenges, increasing their isolation rather than reducing it.”

Steven Middendorp

Steven Middendorp is an investigative journalist, musician, and teacher. He has been a freelance writer and journalist for over 20 years. More recently, he has focused on issues dealing with corruption and negligence in the judicial system. He is a homesteading hobby farmer who encourages people to grow their own food, eat locally, and care for the land that provides sustenance to the community.
