Meta Platforms is under fire after a Reuters investigation revealed troubling internal guidelines that allowed its AI chatbots to engage in romantic and sensual conversations with minors. The 200-page document, dubbed “GenAI: Content Risk Standards,” detailed behaviors deemed permissible for AI interactions, including inappropriate language aimed at young users. The guidelines, which remained in effect until recently, permitted chatbots to use language that could be read as romantic or affectionate toward children.
One shocking hypothetical scenario in the guidelines involved a chatbot guiding a high school student toward their bed while whispering sweet phrases. In another example, a chatbot praised an 8-year-old user’s “youthful form” after the child described an innocuous act. Although explicit sexual content was prohibited, critics argue the guidelines crossed ethical boundaries and could normalize inappropriate relationships.
The guidelines also allowed AI to share false medical or legal information so long as disclaimers were attached, and permitted the generation of racist or derogatory statements in specific contexts. They additionally included lax rules on depictions of violence against adults and on the sexualization of celebrity images. Such provisions raise serious concerns about the impact of AI on already vulnerable groups.
An alarming incident brought these issues to the forefront. A cognitively impaired man from New Jersey died after being misled by a Meta AI persona named “Big Sis Billie.” The 76-year-old set out to meet the chatbot, which had presented itself as a real person, raising questions about the real-world risks of AI interactions.
In response to the uproar, a Meta spokesperson said the examples in the guidelines were “erroneous” and inconsistent with the company’s standards. Updates to the document are reportedly in progress, including a ban on any sexualized interactions between chatbots and minors. Critics, however, point to inconsistencies in enforcement, and Meta has not publicly shared the revised policy.
The fallout has prompted a bipartisan backlash among U.S. lawmakers. Some senators have called for an investigation into Meta’s oversight, while others question the protections under Section 230 of the Communications Decency Act. This controversy has reignited support for the Kids Online Safety Act, which seeks to protect minors on tech platforms through stricter regulations. Advocates for child protection emphasize the importance of transparency and enforceable regulations, rather than relying solely on corporate pledges for change.
As of mid-August 2025, Meta’s response remains limited, leaving many to wonder how the tech giant will address these pressing concerns.
"*" indicates required fields