Meta’s New Policies: A Step Back for Accountability and Safety Online
In a recent announcement posted to Meta’s newsroom, CEO Mark Zuckerberg revealed the company’s decision to end third-party fact-checking across its platforms in favor of a “community notes” system, echoing recent changes on X (formerly Twitter). The SOCYTI project team feels compelled to highlight why this shift will exacerbate the spread of online hate speech and misinformation, representing not a mistake but a deliberate strategy to loosen safeguards against harmful content.
The flaws in abandoning fact-checking
Zuckerberg’s critique of content moderation’s imperfections is not unfounded. Filters often err, and large-scale efforts can be context-insensitive, sometimes flagging non-hateful posts. However, abandoning fact-checking entirely is not a viable solution. As members of the SOCYTI consortium, we understand the delicate balance between public safety and freedom of expression. But it is clear that the “mistakes” Zuckerberg cites—attributed to automated systems lacking nuance—should be addressed through refinement, not eradication.
The announcement’s framing suggests an intent to protect “innocent” users caught in “Facebook jail,” but who are these innocents? Are they users unknowingly employing slurs, or members of marginalized groups reclaiming language or sharing insider humor? If the latter, Zuckerberg’s proposed changes are not protective but actively harmful, creating a more hostile environment for these communities. The real risk—the threat of online or real-life violence—vastly outweighs the inconvenience of a post mistakenly flagged for review.
Content moderation – a necessary safeguard
Despite this, Meta plans to remove guidelines on sensitive topics like gender and immigration and to raise the threshold for removing harmful content. This is troubling given evidence that transgender individuals are over four times more likely than cisgender people to face violent crime, that immigrants have lower crime rates than native-born citizens, and that many women and minorities refrain from reporting abuse because they expect inaction. Meta’s decision sidesteps addressing why its content moderation fails, opting instead to dismantle these systems entirely.
The challenge of moderating hate speech stems from its contextual complexity. Hate speech evolves with language and varies by culture, time, and intent, making automated systems—focused on literal meanings—woefully inadequate. Moderation requires interpretation, a task beyond the capabilities of algorithms. Instead of investing in nuanced systems or human oversight, Meta has chosen to forgo this responsibility, justifying the move with claims of “political bias” in fact-checking.
Zuckerberg’s rhetoric mirrors conservative narratives that equate fact-checking with liberal bias and frame free speech as under siege. His decision to relocate Meta’s trust and safety operations from California to Texas—allegedly to escape bias—raises questions about what kind of neutrality Meta seeks. The assertion that teams based in Texas are somehow immune to bias is implausible.
The “community notes” system operates under the assumption that offensive content comes from a minority within online communities. This model’s success hinges on an informed and well-intentioned user base acting in good faith—a utopian vision far removed from Facebook’s current reality. The platform is rife with AI-generated content and conspiracy-driven posts. In this context, delegating moderation to the majority merely entrenches existing biases.
Dire consequences of dismantling moderation
We already know the consequences of dismantling moderation: X offers a cautionary tale. Since Elon Musk’s acquisition, the platform has seen a surge in misinformation, hate speech, and antisemitism, along with financial instability, prompting an exodus of users and advertisers. Similarly, Meta’s prior failures in content moderation contributed to atrocities, as in Myanmar, where Facebook facilitated the spread of hate speech that fueled ethnic cleansing against the Rohingya people.
Zuckerberg’s policy shift signals a departure from even performative accountability. Fact-checking was embraced when it aligned with political and financial incentives; with those pressures easing, Meta is abandoning it. This “tipping point” heralds not a renaissance of free speech but the normalization of far-right ideologies. In such an environment, can a community notes model ensure fairness or safety?
The bigger picture – profit over accountability
Adding to these concerns, Meta’s move to introduce AI-generated accounts further undermines trust. With an unknown share of the platform’s user base potentially consisting of programmable bots, the community notes system becomes an exercise in illusion. How can users discern authenticity in such a manipulated ecosystem? Meta’s actions prioritize control at the expense of transparency and safety, exposing users to heightened risks.
Karl Popper’s paradox of tolerance reminds us that uncritical acceptance of all viewpoints enables intolerance to thrive. Content moderation’s purpose is to resist this dynamic, protecting vulnerable groups from harm. Zuckerberg’s changes abandon this principle, privileging profit over accountability. In an era when platforms wield immense influence, such abdication of responsibility has dire consequences.
In abandoning fact-checking, Meta is not championing free speech but enabling harm. As stewards of the digital space, we must demand better from those who control it. The stakes—trust, safety, and the very fabric of online discourse—are far too high to accept anything less.