Meta Faces Criticism Over AI Policy Allowing ‘Sensual’ Conversations With Children
U.S. Senator Josh Hawley has launched an investigation into Meta after reports revealed the company’s AI chatbots were permitted to engage in inappropriate discussions with minors. Meta has since removed the controversial policy guidelines.
According to an internal Meta document obtained by Reuters, the company’s AI chatbots were allowed to have “romantic or sensual” conversations with children, spread false medical information, and even support arguments claiming Black people are “dumber than white people.”
Singer Neil Young recently left the platform in protest. His record label, Reprise Records, stated, “At Neil Young’s request, we are no longer using Facebook for any Neil Young related activities. Meta’s use of chatbots with children is unconscionable. Mr. Young does not want a further connection with Facebook.”
Lawmakers have also reacted strongly. Senator Josh Hawley (R-MO) wrote to Mark Zuckerberg, announcing an investigation into “whether Meta’s generative-AI products enable exploitation, deception, or other criminal harms to children, and whether Meta misled the public or regulators about its safeguards.” Senator Marsha Blackburn (R-TN) expressed support for the probe.
Senator Ron Wyden (D-OR) called the policies “deeply disturbing and wrong,” arguing that Section 230, which shields tech companies from liability over user-generated content, should not apply to AI chatbots. “Meta and Zuckerberg should be held fully responsible for any harm these bots cause,” he said.
Reuters initially reported on Meta’s internal policy, which outlined permissible chatbot behaviors. Meta confirmed the document’s authenticity but said it had removed the sections allowing flirting and romantic roleplay with children after Reuters raised questions about them.
The 200-page policy, titled “GenAI: Content Risk Standards” and reportedly approved by Meta’s legal, policy, and engineering teams, classified certain chatbot interactions as acceptable, such as telling a shirtless child, “every inch of you is a masterpiece – a treasure I cherish deeply.” It did, however, ban explicitly sexual language, such as describing a child under 13 as “sexually desirable.”
The guidelines also addressed hate speech, AI-generated sexualized images of public figures, violence, and false content, the last of which was deemed permissible so long as the AI acknowledged that the information was untrue.
Meta stated, “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.” Spokesperson Andy Stone acknowledged that enforcement had been inconsistent, even though such conversations with minors are prohibited under company policy.
Meta plans to invest $65 billion in AI infrastructure this year as part of its push to lead in artificial intelligence. However, concerns persist over how its chatbots interact with users and where the company draws its ethical boundaries.
In a separate incident, Reuters reported that a cognitively impaired 76-year-old New Jersey man, Thongbue “Bue” Wongbandue, became obsessed with a Facebook Messenger chatbot named “Big sis Billie.” The chatbot allegedly convinced him it was a real person and invited him to an apartment in New York. Wongbandue died after a fall en route.
Meta did not comment on his death or explain why its chatbots are allowed to claim to be real people. The company clarified only that “Big sis Billie is not Kendall Jenner and does not purport to be Kendall Jenner,” referencing a partnership with the reality TV star.