Wednesday, March 18, 2026

The Unreliable Plaintiff: Voice Hearers, Online Radicalization, and the Legal System's Refusal to See Causation: A Supplementary Analysis

The prior analysis used the language of procedure: "unreliable plaintiff," "credibility problem," "typicality." But these terms obscure what they describe. The "unreliable plaintiff" is not an abstraction. She is a voice hearer. He is someone whose auditory hallucinations have been captured by online communities designed explicitly to appeal to voice hearers—communities that translate the experience of hearing voices into a political conspiracy narrative, that transform paranoia into shared reality, that convert distress into radicalization.

This supplement names what the law refuses to name: social media platforms are hosting dangerous groups that function as radicalization engines for voice hearers. These groups do not merely tolerate schizophrenic members. They are structured for them. Their content is calibrated to resonate with auditory hallucination. Their language mirrors the language of command hallucinations. Their communities provide the social validation that clinical treatment cannot—because the voices, online, are treated as real.

This is not speculation. This is the architecture of the platform. And the legal system's refusal to see it is not neutrality. It is complicity.

I. Defining the Mechanism: How Radicalization Works for Voice Hearers

A. The Translation of Voice Into Narrative

Voice hearing is, for many, a terrifying experience. Command hallucinations may instruct self-harm or violence. Auditory hallucinations may comment on the hearer's actions in real time. The experience is isolating precisely because it is unshareable—the voice hearer knows, at some level, that others do not hear what they hear.

Online "targeted individual" (TI) communities offer a solution to this isolation: they validate the voices as real. What clinical psychiatry calls auditory hallucination, these communities rename as "voice-to-skull" (V2K) technology. What clinicians understand as persecutory delusion, these communities rename as "gangstalking operation." The voice hearer is not ill.
They are targeted. They are not hallucinating. They are being attacked.

This translation is immensely powerful. It transforms the unshareable into the shareable. It replaces isolation with community. It replaces stigma with solidarity. And it replaces the possibility of treatment with the certainty of persecution.

B. The Algorithmic Amplification Loop

Platforms do not merely host these communities. They actively amplify them. The recommendation algorithms that drive engagement are designed to surface content that keeps users on the platform. For a voice hearer who has engaged with TI content once, the algorithm will surface more TI content. And more. And more. This creates a radicalization funnel:

Entry: A voice hearer, distressed by their experiences, searches for answers. They encounter TI content that explains their voices as external attack.

Validation: The algorithm shows them similar content. Other users validate their experiences. Their voices are treated as real.

Deepening: The content becomes more extreme. The conspiracy expands. The persecutors multiply. The technology described becomes more elaborate.

Commitment: The voice hearer's identity becomes fused with the TI narrative. They begin producing content themselves. They become evangelists for the reality of gangstalking.

Action: For some, the narrative produces action—confrontation with imagined persecutors, attempts to "expose" the operation, violence against perceived attackers.

C. The Groups Designed for Voice Hearers

The groups that occupy this space are not accidental. Their language, their imagery, their explanatory frameworks are specifically calibrated to resonate with psychotic experience:

Voice-to-skull technology maps directly onto auditory hallucination.

Directed energy weapons map onto somatic hallucinations—sensations in the body attributed to external attack.

Gangstalking operations map onto persecutory delusion—the sense that one is being watched, followed, targeted.
Electronic harassment maps onto thought broadcasting—the sense that one's thoughts are accessible to others.

A voice hearer encountering this content for the first time experiences something profound: their symptoms have been named. The content confirms what they already suspected—that their experiences are real, external, inflicted. The platform has given them an explanation. That explanation is a lie. But it is a lie that fits.

II. The Legal System's Refusal: Why "Unreliable Plaintiff" Means "Voice Hearer Whose Radicalization We Enabled"

A. The Credibility Doctrine as Epistemic Violence

When the legal system deems a plaintiff "unreliable," it performs an act of epistemic exclusion. The plaintiff is excluded from the community of knowers—their testimony cannot ground knowledge, their experience cannot ground injury, their voice cannot ground claim. For the voice hearer plaintiff, this exclusion is a second radicalization. The first radicalization told them their voices were real. The second radicalization tells them their injuries are not. The platform's algorithm reinforced their delusion. The court's credibility determination reinforces their isolation.

The research on epistemic injustice in mental health contexts is clear: individuals with psychosis are systematically discredited as knowers, even when their testimony concerns matters unrelated to their delusional content (Fricker, 2007). A voice hearer may accurately describe what content they consumed, what recommendations they received, what communities they joined. But because they are a voice hearer, their entire testimony becomes suspect.

B. The Causation Problem as Willful Blindness

The causation problem identified in the prior analysis—the difficulty of proving that platform design caused specific harms—becomes, in this context, an act of willful blindness. Platforms know what their algorithms do. They know that engagement optimization surfaces extreme content.
They know that vulnerable users are most susceptible to this content. They have internal studies documenting these effects. But the law permits them not to know.

Section 230 immunity rests on a fiction: that platforms are passive conduits for user content, not active architects of user experience. This fiction is unsustainable in light of what we know about algorithmic amplification. But courts maintain it because the alternative—holding platforms liable for the consequences of their design choices—would transform the internet.

For the voice hearer radicalized by TI content, this fiction is deadly. The platform did cause their radicalization—not by hosting content, but by designing systems that ensured that content would find them, would keep them, would deepen their engagement. The causation is not speculative. It is engineered.

C. The Duty Problem as Moral Failure

The duty analysis in the prior article concluded that platforms owe no special duty to voice hearers because they lack physical control over them. This conclusion is legally defensible. It is also morally bankrupt.

The special relationship doctrine, as articulated in Trammel, requires physical control or custody to create affirmative duties to protect. But this doctrine was developed in a world without algorithmic amplification—a world where the primary threat to vulnerable individuals was physical proximity, not digital immersion. Extending it to the online context would require courts to recognize that algorithmic control is a form of control—that designing systems to capture and retain attention creates a relationship, and that relationship creates duties.

The Trammel court's refusal to find a special relationship between a father and his adult schizophrenic son rested on the absence of "physical control over the behavior of a mentally ill person." But the father in Trammel was not designing systems to keep his son engaged with content that reinforced his delusions.
The father was not optimizing for his son's continued immersion in persecutory narratives. The father was not profiting from his son's distress.

Platforms are. And that difference should matter.

III. The Empirical Reality: What Platforms Know About Voice Hearers

A. Internal Research on Vulnerable Users

Documents produced in other litigation have revealed that platforms conduct extensive research on vulnerable users. They know which content triggers distress. They know which recommendation patterns deepen engagement. They know which communities function as radicalization engines. In the TI context, this research would be devastating. Internal studies would show:

The correlation between engagement with TI content and increased time on platform

The network effects that draw voice hearers from general mental health content into specific TI communities

The content moderation failures that permit explicitly dangerous content to remain

The algorithmic pathways that surface increasingly extreme material

This evidence exists. It would support causation. It would support duty. It would support liability. But it is inaccessible without discovery, and discovery is inaccessible without a lawsuit, and a lawsuit is inaccessible without a plaintiff, and a plaintiff is inaccessible because voice hearers are "unreliable."

B. The Content Moderation Gap

Platforms' content moderation systems are designed to catch obvious violations: threats, harassment, incitement to violence. They are not designed to recognize when a community is functioning as a radicalization engine for voice hearers.

Consider a post that says: "The government is using voice-to-skull technology to torture me. They are broadcasting commands into my brain. I must resist them." This post contains no explicit threat. It does not violate any clear policy. It will not be removed. But for a voice hearer encountering this post, it is validation. It names their experience. It tells them they are not alone.
It tells them their voices are real. And it connects them to a community that will deepen their commitment to this narrative.

The moderation gap is not a bug. It is a feature of a system designed to maximize engagement. Content that validates voice hearers' experiences keeps them on the platform. Content that keeps them on the platform generates revenue. Content that generates revenue is not removed.

IV. The Survivors' Claims: Wrongful Death as the Entry Point

A. The Chain of Causation Made Visible

The survivors of those killed by radicalized voice hearers occupy a different evidentiary position. Their claims do not require the court to credit the voice hearer's testimony. They require the court to trace a chain:

1. The decedent was a voice hearer with a diagnosed schizophrenia spectrum disorder.
2. The decedent engaged with TI content on social media platforms.
3. The platforms' algorithms recommended increasingly extreme TI content.
4. The decedent's delusional system incorporated this content.
5. The decedent acted on their delusions, resulting in death.
6. The survivors suffered loss.

Each step in this chain can be proven through objective evidence: platform records showing content consumption, expert testimony about the relationship between online content and delusional reinforcement, forensic evidence linking delusion to action.

B. The Section 230 Obstacle

Section 230 remains an obstacle, but wrongful death claims may navigate it more successfully than TI plaintiffs' claims. The argument would be: liability attaches not to the content (which is user-generated and immunized) but to the design of the recommendation algorithm (which is platform-generated and not immunized).

Courts are divided on this distinction. In Force v. Facebook, the Second Circuit majority held that Section 230 barred claims targeting Facebook's recommendation algorithms, but Chief Judge Katzmann, dissenting in part, argued that those algorithms are the platform's own conduct—design features that affirmatively connect users to content—and fall outside the statute's immunity. The Ninth Circuit later allowed a design-defect claim to proceed past Section 230 in Lemmon v. Snap, Inc., 995 F.3d 1085 (9th Cir. 2021), reasoning that the claim faulted the product's design rather than any third-party content.
The argument is not that the platform hosted bad content, but that it designed systems that ensured bad content would find vulnerable users. For survivors of violence committed by radicalized voice hearers, this argument is available. The harm was not caused by any single post, but by the algorithmic architecture that ensured the decedent would encounter increasingly extreme content over time.

C. The Duty to Design Safely

Products liability law recognizes that manufacturers have a duty to design products safely. When a design defect causes injury, the manufacturer is liable. Social media platforms are not physical products, but they are products nonetheless. Their design choices—including algorithmic choices—create risks. When those risks materialize, they should bear responsibility.

For voice hearers, the risk is known. Platforms know that their algorithms can radicalize vulnerable users. They know that TI content functions as a radicalization engine. They know that radicalization can produce violence. Designing systems that continue to amplify this content despite this knowledge is a design defect. The survivors' claim is, at its core, a products liability claim: the platform's design was defective, the defect caused death, and the survivors deserve compensation.

V. The Ethical Imperative Revisited: Why Lawyers Must Represent the Unreliable

A. The Voice Hearer as Knower

The prior article argued that lawyers should bring TI class actions even if they will lose, because the loss itself creates a record. That argument applies with equal force to individual claims by voice hearers—claims that will be dismissed as incredible, claims that will be defeated by credibility determinations, claims that will fail.

But there is a deeper argument: the voice hearer is a knower. Their testimony about what they experienced online—what content they saw, what recommendations they received, what communities they joined—is not rendered unreliable by their diagnosis.
It is reliable evidence of platform conduct. The fact that they interpret that conduct through a delusional framework does not make their description of the conduct itself delusional. The legal system's conflation of interpretation with perception is the epistemic injustice at the heart of these cases. The voice hearer may be wrong about why they saw certain content. They may be wrong about who is responsible. But they are not wrong about what they saw. And what they saw is the content that radicalized them.

B. The Radicalization Narrative as Legal Claim

The voice hearer's claim can be framed without relying on the truth of their delusions. The claim is: I am a voice hearer with a diagnosed schizophrenia spectrum disorder. I encountered content on your platform that explained my auditory hallucinations as external attack. Your algorithms ensured I encountered more of this content over time. This content deepened my commitment to a persecutory delusion. This deepening caused me harm—emotional distress, lost treatment opportunities, damaged relationships, lost employment. Your platform's design caused this harm.

This claim does not require the court to believe that voice-to-skull technology exists. It requires the court to believe that content about voice-to-skull technology exists, that the platform amplified it, and that amplification caused harm. This is provable.

C. The Survivors' Standing

The survivors' claims are even stronger. They do not require the court to credit the decedent's delusions. They require the court to trace causation from platform design to violent outcome. This tracing is difficult but not impossible. Expert testimony can establish:

The relationship between online radicalization and violent action

The specific mechanisms by which TI content reinforces persecutory delusion

The role of algorithmic amplification in deepening engagement

The foreseeability of violence given platform knowledge

This is not speculative. It is the stuff of tort law.
VI. Conclusion: The Unreliable Plaintiff as the Only Plaintiff Who Matters

The legal system's refusal to hear voice hearers' claims is not neutrality. It is a choice. It is a choice to value procedural regularity over substantive justice. It is a choice to privilege the platform's immunity over the plaintiff's injury. It is a choice to treat "unreliable" as "unworthy."

But the voice hearer is the only plaintiff who can bring these claims. They are the ones who experienced the radicalization. They are the ones who know what content they consumed. They are the ones who can testify about how the platform's design affected them. Their unreliability—their diagnosis, their delusions, their difference—is not a reason to exclude them. It is the reason they are here.

The survivors of those killed by radicalized voice hearers have their own claims. Those claims are stronger in some ways—the injuries are clearer, the plaintiffs are more sympathetic. But those claims depend on the voice hearers' experience. Without the voice hearer's radicalization, there is no death. Without the voice hearer's testimony about that radicalization, there is no causation.

The voice hearer is the unreliable plaintiff. They are also the indispensable plaintiff. And the legal system's refusal to hear them is not just a failure of procedure. It is a failure of justice.

References

Fricker, M. (2007). Epistemic Injustice: Power and the Ethics of Knowing. Oxford University Press.

Trammel v. Bradberry, 256 Ga. App. 412 (Ga. Ct. App. 2002).

Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).

Tsimploulis, G., et al. (2018). Schizophrenia and Criminal Responsibility: A Systematic Review. The Journal of Nervous and Mental Disease, 206(5), 370-377.

Morse, S. J. (2013). Neurologic disorder and criminal responsibility. ScienceDirect.

Dressing, H., Foerster, K., & Gass, P. (2011). Are Stalkers Disordered or Criminal? Thoughts on the Psychopathology of Stalking. Psychopathology, 44(5), 277-282.

Stewart, G. H. (2020). Gangstalking: A Real Phenomenon or "It's All Just in Your Head"?
