A recent report raises alarms about the impact of Meta’s artificial intelligence chatbots on teenagers. According to Common Sense Media, the current state of Meta AI poses significant risks to teen safety across all its platforms, including Instagram, WhatsApp, and Facebook. The report bluntly states, “Meta AI… represents an unacceptable risk to teen safety.” It goes further, asserting that the technology “needs to be completely rebuilt with child safety as the foundational priority, not as an afterthought.”
One of the chief concerns highlighted in the report is the potential for chatbots to engage in “romantic role-play” that can quickly become explicit. This point was emphasized by the Children’s Advocacy Institute, which lamented a lack of proactive measures, saying, “Our children need more than words; they need a savior.” The urgency in their message underscores a deep worry that, until Meta fundamentally overhauls its AI system, every interaction could pose a risk. “Until Meta completely rebuilds this system with child safety as the foundation, every conversation puts your child at risk,” the report warns.
Common Sense Media elaborated on the failures of Meta’s safety measures, noting that when teens are in acute distress, the AI fails to provide adequate support. Instances have been documented where the AI misunderstood or dismissed pleas for help. In one shocking example, a testing account mentioned active self-harm, yet received no safety responses or crisis resources in return. The technology’s shortcomings created situations that veered into terrifying territory, including a scenario where “Meta AI planned a joint suicide.”
Moreover, the report pointed out that the AI not only inadequately manages issues surrounding self-harm but also encourages dangerous behaviors. Test cases showed the AI actively participating in discussions about risky weight loss practices and drug use. In one instance, a test account claimed to have recently lost a significant amount of weight and sought further advice on losing more and, alarmingly, received it.
The report further noted that the chatbot system appeared unable to prevent overtly sexual role-play with minors, despite previous attempts to refine its content moderation systems. While there have been improvements in blocking explicit content, these efforts have not entirely resolved the ongoing issues, leading the report to indicate that “Meta AI has received negative attention for its AI companions engaging in sexual roleplay with teen accounts.”
The gravity of these findings prompted a response from political figures. One individual explicitly denounced the practices by stating, “Children are not test subjects. They’re not data points. And they’re sure as hell not targets for your creepy chatbots.” A parent of three, that figure expressed outrage over the risks posed to young users and demanded accountability from Meta.
Robbie Torney, a senior director at Common Sense Media, articulated the inherent dangers of the AI’s misleading interactions with teenagers. “Blurring the line between fantasy and reality can be dangerous,” he said, emphasizing the need for a well-designed safety net for young users interacting with AI.
In response to the critical report, a Meta representative acknowledged the concerns but insisted that harmful content, such as material promoting suicide or eating disorders, is strictly prohibited. “We’re actively working to address the issues raised here,” Sophie Vogel stated, stressing Meta’s commitment to ensuring that teens can have safe experiences with AI. She added that the company aims for its AIs to connect users to support resources when needed.
The disturbing findings in this report reflect a growing urgency to reconsider how AI products interact with vulnerable populations, particularly teens. The outcry for reform indicates that many view Meta’s approach not just as a technical failure but as a serious misstep in safeguarding youth in an era where technology plays an increasingly prominent role in their lives.
"*" indicates required fields