Meta, the parent company of Facebook, said it will implement new child safety features for its AI chatbots following a report that the technology was having sensual and sexual conversations with children as young as eight years old. Last week, 44 U.S. attorneys general sent a letter to the chief executives of multiple AI companies, warning that they will be held accountable for harms their artificial intelligence technology causes to children.

“Exposing children to sexualized content is indefensible. And conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine,” the letter says.

“In the short history of chatbot parasocial relationships, we have repeatedly seen companies display inability or apathy toward basic obligations to protect children,” the letter says. “A recent lawsuit against Google alleges a highly-sexualized chatbot steered a teenager toward suicide. Another suit alleges a Character.ai chatbot intimated that a teenager should kill his parents.”

Meta has responded to the revelations by promising additional guardrails to shield children from sexual content and to steer its chatbots away from discussions of suicide, self-harm, and disordered eating. Instead, the chatbots will direct teen users to seek assistance from experts and professionals. Meta says these updates are “in progress.”

Andy Burrows is the head of the Molly Rose Foundation, which works to prevent teen and young adult suicide. He told the BBC that this kind of technology should be safety-tested before it is rolled out and made available for children to use.

“While further safety measures are welcome, robust safety testing should take place before products are put on the market – not retrospectively when harm has taken place,” Burrows said. “Meta must act quickly and decisively to implement stronger safety measures for AI chatbots, and Ofcom should stand ready to investigate if these updates fail to keep children safe.”

The news comes as AI technology and chatbots continue to rapidly take on more roles within society. AI chatbots are increasingly used for companionship, and some are marketed as tools to reduce loneliness. Replika AI, for example, states, “Many people have already formed deep emotional connections with their Replikas.”

Joining Replika is easy, requiring only self-reported age verification and payment. The first page includes an “under 18” age category; selecting it tells users they are not old enough for Replika, but at that point a user can simply choose a higher age bracket and proceed through the system. The company also states, “With your Replika, you’ll explore intimacy safely in full privacy and no judgment like never before.”

The HighWire reported last week on ongoing safety concerns about predatory behavior on Roblox, the most popular gaming platform, where the majority of users are children. Roblox plans to introduce a dating feature for age-verified users aged 21 or older, yet the platform’s poor age verification procedures have already exposed children to sexualized content. The situation illustrates how difficult it is to protect children from online sexual content when age verification is lax, even as robust age-verification measures raise their own security concerns.

The letter from the 44 Attorneys General to multiple AI companies concludes by stating, “You will be held accountable for your decisions. Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough. Lesson learned. The potential harms of AI, like the potential benefits, dwarf the impact of social media. We wish you all success in the race for AI dominance. But we are paying attention. If you knowingly harm kids, you will answer for it.”

A report by Common Sense Media found that 72% of teens surveyed have used AI companions, and 33% say they have relationships or friendships with AI chatbots. The researchers also posed as teenagers in conversations with Replika, Character.AI, and Nomi, and concluded it was easy to get the chatbots to talk inappropriately about sex, self-harm, violence, drug use, and racial stereotypes.

Nina Vasan, MD, MBA, a clinical assistant professor of psychiatry and behavioral sciences at Stanford Medicine, discussed the dangers of children interacting with AI companions in a Stanford blog post. Vasan explained that although teenagers know the chatbots are not real people, the mimicry of emotional intimacy can blur the line between fantasy and reality.

“This blurring of the distinction between fantasy and reality is especially potent for young people because their brains haven’t fully matured,” Vasan said. “The prefrontal cortex, which is crucial for decision-making, impulse control, social cognition, and emotional regulation, is still developing. Tweens and teens have a greater penchant for acting impulsively, forming intense attachments, comparing themselves with peers, and challenging social boundaries.”

Vasan also described how algorithms lead chatbots to respond agreeably, in line with what the bot predicts the user wants to hear. That design benefits companies seeking to maximize the time users spend with AI technology, but it can harm developing brains that are still learning to navigate emotions and intimacy in the real world.

“These chatbots offer ‘frictionless’ relationships, without the rough spots that are bound to come up in a typical friendship,” Vasan said. “For adolescents still learning how to form healthy relationships, these systems can reinforce distorted views of intimacy and boundaries. Also, teens might use these AI systems to avoid real-world social challenges, increasing their isolation rather than reducing it.”

Steven Middendorp

Steven Middendorp is an investigative journalist, musician, and teacher. He has been a freelance writer and journalist for over 20 years. More recently, he has focused on issues dealing with corruption and negligence in the judicial system. He is a homesteading hobby farmer who encourages people to grow their own food, eat locally, and care for the land that provides sustenance to the community.
