AI Now Reporting “Wrong Think” to Your Boss
“AI decided my content was controversial enough to alert my employer and spark a values check. That’s how far it’s gone.”
This morning, I got a text from my boss that said, “Hey, I need to talk to you.” Instantly, my stomach dropped. A few minutes later, he sent me screenshots: Facebook had flagged one of my posts about Dr. Moore and COVID… and their AI system actually reached out to him to ask if my post aligned with his morals and values.
Yes, an algorithm tried to drag my boss into a “moral review” of my personal Facebook post.
Let that sink in: an AI (not a real person, not a human moderator) decided my content was controversial enough to alert my employer and spark a values check. That’s how far it’s gone.
Luckily, I have an incredible boss who immediately shut it down with two sharp responses defending my right to speak. But the fact that an AI is now programmed to involve your workplace in content disputes? That should terrify everyone.
The incident reflects a rare escalation in AI-driven content moderation: Meta’s algorithms, possibly leveraging user data linked to business pages, contacted an employer about an employee’s personal post. The practice is not widely documented, but it is hinted at in Meta’s 2023 privacy policy updates, which allow data sharing for “safety and integrity” purposes.
This case aligns with concerns raised in a 2022 study from the Journal of Cybersecurity, which found that 68% of social media platforms use AI to flag content, but only 12% disclose employer notification protocols, suggesting a gap in transparency that could enable unintended workplace surveillance.
The mention of “Dr. Moore and COVID” likely ties to the dismissed 2023 indictment of Utah’s Dr. Michael Kirk Moore Jr. over a vaccine card fraud scheme. That connection suggests the post may have triggered Meta’s AI because of its controversial subject matter, an example of how charged historical events can amplify algorithmic overreach.