Wednesday, July 23, 2025

Games Without Frontiers

In 2021, Senator Ron Wyden (D-Oregon) made headlines by suggesting that Facebook CEO Mark Zuckerberg should face criminal accountability for the social harms his company enabled. “He hurt a lot of people,” Wyden said. At the time, legal experts called this unlikely under existing U.S. law, especially given the powerful shield of Section 230, which protects platforms from liability for content posted by users.
But times change, and so do courts. In the litigation that followed the Buffalo mass shooting, a New York court allowed lawsuits against Meta, YouTube, Reddit, and other platforms to proceed, rejecting their claim of blanket immunity under Section 230. Why? Because the plaintiffs argued that the platforms' recommendation algorithms were themselves defective products that had amplified extremism and violence.
Facebook tracked users searching for psychosis-related phrases like "hearing voices," "why am I seeing things," or "someone is following me," while simultaneously promoting groups that prey on exactly those distressed voice-hearers. Far from intervening to protect them, Meta amplified their exposure to highly destabilizing communities, such as those promoting the "Targeted Individual" (TI) delusion, Voice-to-Skull technology, or gangstalking. These are not fringe beliefs in a vacuum; they mimic clinical symptoms of schizophrenia and delusional disorder.

According to leaked internal memos cited in the Wall Street Journal's "Facebook Files," Meta knew that this kind of exposure could worsen psychosis. Mental health experts were clear: reinforcing such delusions online not only hinders treatment but increases risk of self-harm, of alienation, and, in extreme cases, of violence.

Which brings us full circle. If Meta's algorithm could radicalize a lonely young man in Buffalo by feeding him white supremacist "replacement theory" conspiracies, it could just as easily push a vulnerable psychosis sufferer deeper into paranoid delusion, through the same mechanisms. The only difference is the content type, not the harm model.

When platforms design systems that predictably exploit psychological vulnerabilities, whether for profit or engagement, they cross a line from neutral publisher to active participant in harm. What Wyden wanted, a real conversation about executive accountability for online harm, no longer lives only in the realm of political rhetoric. The legal shift unfolding now says: if you build the machine that knowingly radicalizes someone, you may be liable for what that person becomes.
