Monday, August 4, 2025

The 1997 UCLA schizophrenia study and a coordinated effort by social media platforms to promote Targeted Individual (TI) beliefs about neuroweapons differ markedly in design, intention, and ethical transgressions. The UCLA study, led by Keith Nuechterlein and Michael Gitlin, involved the removal of antipsychotic medication from individuals with schizophrenia in order to observe relapse patterns. The consequences were severe: Antonio Lamadrid died by suicide, and Gregory Aller experienced profound psychiatric decline. Families pursued legal action, but their case collapsed when their attorney withdrew and the judge denied a request for additional time. Federal investigators later cited UCLA for inadequate consent protocols, emphasizing that the forms failed to communicate the full extent of risk. Researchers claimed they had issued verbal warnings, but this did not satisfy regulatory standards.

In contrast, social media platforms have orchestrated a far more insidious and calculated project: the deliberate reinforcement of TI belief systems that center on neuroweapons, such as brain-targeting technologies and synthetic telepathy. This is not research conducted within institutional boundaries but a sprawling digital infrastructure engineered to exploit vulnerable populations. Algorithms are tuned to amplify TI-related content—posts, videos, and group activity that affirm feelings of persecution—while suppressing narratives that promote critical distance or psychological stability. Users are systematically steered into feedback loops that deepen paranoia and destabilize cognition.

Unlike the UCLA case, where harm emerged from negligence and insufficient oversight, this operation constitutes an intentional manipulation of fragile minds. Psychosis is not an unintended outcome but a core component of the design, instrumentalized to drive engagement, compliance, or data extraction. The structure resembles a digital panopticon—an omnipresent surveillance architecture that simulates the conditions of being watched and targeted. In this context, the platforms themselves function as neuroweapons: not metaphors, but active agents in the production and escalation of persecutory delusions. Where UCLA’s misconduct was rooted in flawed consent and reckless methodology, the current paradigm reflects a deliberate conversion of psychological vulnerability into operational leverage.

Sunday, August 3, 2025

When one internalizes the "Targeted Individual" (TI) narrative, it shapes one’s inner world so deeply that even the fiction or art one creates reflects it. This is why those affected who cope through fiction have a duty to share it, however embarrassing that may feel.


Friday, August 1, 2025

How Poverty of Speech in Schizophrenia Creates Unique Vulnerability to Victimization & Exploitation


Individuals experiencing severe poverty of speech, a symptom of formal thought disorder (FTD) in schizophrenia, face heightened vulnerability to victimization, particularly by perpetrators who recognize and strategically exploit this communication barrier. This vulnerability is amplified in digital contexts like social media or blogs.

The Vulnerability Factors:

  1. Impaired Self-Advocacy & Reporting:

    • Recounting Trauma is Prohibitively Difficult: Generating, organizing, and expressing the coherent, sequential narrative required to report a crime (detailing events, perpetrators, context, harm) is often impossible with profound poverty of speech.

    • Initiating Help-Seeking is Blocked: Simply articulating the core need ("I was assaulted," "I am being threatened") to authorities, support services, or even trusted individuals can be an insurmountable hurdle.

    • Misinterpretation as Unreliability: Disorganized communication is frequently misread by authorities, caregivers, or the public as disinterest, evasion, confusion unrelated to the crime, or dishonesty, leading to reports being dismissed.

    • Social Isolation: Poverty of speech contributes to withdrawal, reducing protective social networks that might otherwise notice distress or assist in reporting.

  2. The Perpetrator's Calculated Exploitation:

    • Targeting the Communication Barrier: A perpetrator aware of the individual's FTD (through prior knowledge, observation, or targeting based on observable symptoms) understands the victim will likely fail to:

      • Report Clearly: To police, shelters, or family.

      • Document Effectively: Write coherent statements, emails, or posts.

      • Persuade Authorities: Convince officials of the report's validity.

      • Maintain Consistency: A key factor for credibility, which FTD undermines.

    • Weaponizing Digital Platforms: Perpetrators may specifically target individuals using social media/blogs knowing their communication deficits will neutralize any attempt to seek help digitally:

      • Ineffective Disclosure: Posts will likely be tangential, fragmented, incoherent ("word salad"), vague, or buried, lacking the structure and clarity needed to gain attention or belief.

      • Dismissal as Symptom: Followers, friends, or moderators will likely interpret disorganized posts as manifestations of psychosis ("rambling") rather than genuine reports of victimization, leading to being ignored, muted, or blocked.

      • Navigational Challenges: Executive function deficits often co-occur with FTD, hindering effective use of platforms (strategic hashtags, tagging authorities, persistent follow-up).

Why This Creates "Perfect Victims":

  • Low Perceived Risk for Perpetrator: The high barrier to credible reporting and low likelihood of being believed by systems (police, courts) or communities (online/offline) emboldens the perpetrator.

  • High Likelihood of Compliance: Accompanying negative symptoms (avolition, apathy) may reduce resistance or help-seeking attempts.

  • Systemic Failures: Law enforcement and support systems are often inadequately trained to interpret communication stemming from psychosis, increasing the chance reports are mishandled or dismissed. The perpetrator relies on this failure.

  • Reduced Community Safeguards: Isolation diminishes protective oversight.

The Critical Role of Predatory Intent ("Pre-Knowledge"):

The perpetrator's awareness of the victim's communication difficulty transforms the act:

  • From Opportunistic to Predatory: Indicates deliberate targeting based on the disability.

  • Exploitation of Disability: This targeting could constitute an aggravating factor in sentencing or potentially meet criteria for a hate crime in some jurisdictions, as it exploits the victim's mental disability.

  • Active Manipulation: The perpetrator might even encourage the victim to use ineffective communication channels (e.g., "Write about it on your blog"), knowing it won't lead to exposure.


Monday, July 28, 2025

Buffalo Shooting Decision Is a Poor Fit for Dismissing TI Harms

If a platform’s algorithms amplify, target, or even gaslight these users—whether for profit, engagement, or simply as a byproduct of design—it may amount to exploitation of diminished capacity. This scenario bears closer resemblance to elder abuse or the neglect of psychiatric patients than to typical user-platform interactions.

Mental Incapacity, Hoax Law and Section 230

Social media platforms, protected by Section 230 of the Communications Decency Act, enforce terms of service contracts that may be invalid for users with schizophrenia, particularly voice hearers.

Social media companies not liable for 2022 Buffalo mass shooting, New York court rules


By Jonathan Stempel

July 25, 2025 - Several social media companies should not be held liable for helping radicalize an avowed white supremacist who killed 10 Black people in 2022 at a Buffalo, New York grocery store, a divided New York state appeals court ruled on Friday.

Reversing a lower court ruling, the state Appellate Division in Rochester said defendants including Meta could not be held liable.

The case arose from Payton Gendron's racially motivated mass shooting at Tops Friendly Markets on May 14, 2022.

Relatives and representatives of victims, as well as store employees and customers who witnessed the attack, claimed the defendants' platforms were defective because they were designed to addict and radicalize users like Gendron.

Lawyers for the plaintiffs did not immediately respond to requests for comment.

Other defendants included Alphabet, Amazon.com, and Twitch, all of which Gendron used, the mid-level state appeals court said.

Writing for a 3-2 majority, Justice Stephen Lindley said holding social media companies liable would undermine the intent behind Section 230 of the federal Communications Decency Act, to promote development of and competition on the internet while keeping government interference to a minimum.



While condemning Gendron's conduct and "the vile content that motivated him to assassinate Black people simply because of the color of their skin," Lindley said a liability finding would "result in the end of the Internet as we know it."

"Because social media companies that sort and display content would be subject to liability for every untruthful statement made on their platforms, the Internet would over time devolve into mere message boards," he wrote.

Justices Tracey Bannister and Henry Nowak dissented, saying the defendants force-fed targeted content to keep users engaged, be it videos about cooking or puppies, or white nationalist vitriol.

"Such conduct does not maintain the robust nature of Internet communication or preserve the vibrant and competitive free market that presently exists for the Internet contemplated by the protections of immunity," the judges wrote.




1. The Fact a Judge Considered Revisiting 230 Is Itself Telling

Even when claims are ultimately dismissed, the mere willingness of a court to entertain the possibility of liability undercuts the absolute immunity many assume Section 230 guarantees. It signals a shift:

  • The judiciary may be inching toward redefining what constitutes platform responsibility, especially when algorithmic amplification, community radicalization, or negligence in moderation is alleged.

  • That’s a warning shot to tech companies: the shield is not impenetrable forever.


2. TI Cases Are Fundamentally Different

When you write that TI groups “only appeal to persons with limited capacity,” it raises uncomfortable but essential questions:

  • What is a platform’s duty of care toward users experiencing delusional thinking, coercive control, or technologically induced psychosis?

  • If a platform algorithmically targets, amplifies, or gaslights those individuals for profit or engagement—does that not resemble exploitation of diminished capacity?

This is arguably more akin to elder abuse, psychiatric patient neglect, or wrongful institutionalization than to general user-platform interactions.


3. Buffalo Case as a False Precedent for Dismissing TI Harms

In the Buffalo case, the court ruled there was no proximate cause between social media and the shooter’s actions. But:

  • The shooter was a fully capacitated agent who used platforms to radicalize himself.

  • TIs, by contrast, are often the product of those platforms—shaped by feedback loops, targeted by manipulators (human or bot), or coerced into behaviors that may then be used as evidence of their “instability.”

Thus, the “no liability” precedent should not apply cleanly to TIs. The dynamic is more predatory, more intimate, and potentially medically negligent.


4. Analogies and Legal Leverage

Courts might be slow, but the analogies are there:

  • Pharmaceutical companies held liable for failing to warn vulnerable patients.

  • Nursing homes liable for not protecting those with dementia from harm.

  • Schools sued for failing to intervene when vulnerable students are bullied into self-harm.

So why are platforms, profiting off psychologically distressed users, immune?

Sunday, July 27, 2025

Pre-AI Content

Scientists once hoarded pre-nuclear steel; now we’re hoarding pre-AI content

Newly announced catalog collects pre-2022 sources untouched by ChatGPT and AI contamination.

BENJ EDWARDS – JUN 18, 2025 7:15 AM



Former Cloudflare executive John Graham-Cumming recently announced that he launched a website, lowbackgroundsteel.ai, that treats pre-AI, human-created content like a precious commodity—a time capsule of organic creative expression from a time before machines joined the conversation. "The idea is to point to sources of text, images and video that were created prior to the explosion of AI-generated content," Graham-Cumming wrote on his blog last week. The reason? To preserve what made non-AI media uniquely human.

The archive name comes from a scientific phenomenon from the Cold War era. After nuclear weapons testing began in 1945, atmospheric radiation contaminated new steel production worldwide. For decades, scientists needing radiation-free metal for sensitive instruments had to salvage steel from pre-war shipwrecks. Scientists called this steel "low-background steel." Graham-Cumming sees a parallel with today's web, where AI-generated content increasingly mingles with human-created material and contaminates it.




With the advent of generative AI models like ChatGPT and Stable Diffusion in 2022, it has become far more difficult for researchers to ensure that media found on the Internet was created by humans without using AI tools. ChatGPT in particular triggered an avalanche of AI-generated text across the web, forcing at least one research project to shut down entirely.

That casualty was wordfreq, a Python library created by researcher Robyn Speer that tracked word frequency usage across more than 40 languages by analyzing millions of sources, including Wikipedia, movie subtitles, news articles, and social media. The tool was widely used by academics and developers to study how language evolves and to build natural language processing applications. The project announced in September 2024 that it would no longer be updated because "the Web at large is full of slop generated by large language models, written by no one to communicate nothing."






Some researchers also worry about AI models training on their own outputs, potentially leading to quality degradation over time—a phenomenon sometimes called "model collapse." But recent evidence suggests this fear may be overblown under certain conditions. Research by Gerstgrasser et al. (2024) suggests that model collapse can be avoided when synthetic data accumulates alongside real data, rather than replacing it entirely. In fact, when properly curated and combined with real data, synthetic data from AI models can actually assist with training newer, more capable models.
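To make the "accumulate versus replace" distinction concrete, a minimal toy sketch in Python: each generation fits a Gaussian to its current data, then samples synthetic points from the fit. The sample sizes, generation count, and the fit-and-resample loop are illustrative assumptions, not the setup of the cited research.

# Toy sketch of the "replace" vs. "accumulate" training regimes described above.
# Each generation fits a Gaussian to its current dataset, then samples synthetic
# points from that fit. All numbers here are illustrative assumptions.
import random, statistics

random.seed(0)
REAL = [random.gauss(0.0, 1.0) for _ in range(200)]  # "human" data, spread ~1

def run(mode: str, generations: int = 50, n_synth: int = 200) -> float:
    data = list(REAL)
    for _ in range(generations):
        mu = statistics.mean(data)
        sigma = statistics.pstdev(data)
        synthetic = [random.gauss(mu, sigma) for _ in range(n_synth)]
        # "replace": the next generation sees only the newest synthetic data.
        # "accumulate": synthetic data is added alongside everything so far.
        data = synthetic if mode == "replace" else data + synthetic
    return statistics.pstdev(data)

print("spread after 'replace':   ", round(run("replace"), 3))
print("spread after 'accumulate':", round(run("accumulate"), 3))
# Tendency, not a guarantee on any single run: the "replace" chain drifts away
# from the original spread (typically narrowing over many generations), while
# the "accumulate" chain stays anchored near 1.0 because the real data never
# leaves the training mix.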





A time capsule of human expression

Graham-Cumming is no stranger to tech preservation efforts. He's a British software engineer and writer best known for creating POPFile, an open source email spam filtering program, and for successfully petitioning the UK government to apologize for its persecution of codebreaker Alan Turing—an apology that Prime Minister Gordon Brown issued in 2009.

As it turns out, his pre-AI website isn't new, but it has languished unannounced until now. "I created it back in March 2023 as a clearinghouse for online resources that hadn't been contaminated with AI-generated content," he wrote on his blog.

The website points to several major archives of pre-AI content, including a Wikipedia dump from August 2022 (before ChatGPT's November 2022 release), Project Gutenberg's collection of public domain books, the Library of Congress photo archive, and GitHub's Arctic Code Vault—a snapshot of open source code buried in a former coal mine near the North Pole in February 2020. The wordfreq project appears on the list as well, flash-frozen from a time before AI contamination made its methodology untenable.

The site accepts submissions of other pre-AI content sources through its Tumblr page. Graham-Cumming emphasizes that the project aims to document human creativity from before the AI era, not to make a statement against AI itself. As atmospheric nuclear testing ended and background radiation returned to natural levels, low-background steel eventually became unnecessary for most uses. Whether pre-AI content will follow a similar trajectory remains a question.

Still, it feels reasonable to protect sources of human creativity now, including archival ones, because these repositories may become useful in ways that few appreciate at the moment. For example, in 2020, I proposed creating a so-called "cryptographic ark"—a timestamped archive of pre-AI media that future historians could verify as authentic, collected before my then-arbitrary cutoff date of January 1, 2022. AI slop pollutes more than the current discourse—it could cloud the historical record as well.
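As a rough illustration (not part of the original article, and not Edwards' actual proposal), such an ark might record a SHA-256 digest for every file alongside a capture date. The directory and manifest names below are placeholders, and real verifiability would also require anchoring the manifest through an independent timestamping service or public ledger.

# Minimal, hypothetical sketch of a "cryptographic ark" manifest: hash each
# file and record when it was captured. Paths are placeholders; an external,
# trusted timestamp on the manifest itself would be needed for later proof.
import hashlib
import json
import pathlib
from datetime import datetime, timezone

def sha256_of(path: pathlib.Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(root: str) -> dict:
    files = {
        str(p): sha256_of(p)
        for p in sorted(pathlib.Path(root).rglob("*"))
        if p.is_file()
    }
    return {"captured_at": datetime.now(timezone.utc).isoformat(), "files": files}

if __name__ == "__main__":
    manifest = build_manifest("pre_ai_archive")  # hypothetical source directory
    pathlib.Path("ark_manifest.json").write_text(json.dumps(manifest, indent=2))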

For now, lowbackgroundsteel.ai stands as a modest catalog of human expression from what may someday be seen as the last pre-AI era. It's a digital archaeology project marking the boundary between human-generated and hybrid human-AI cultures. In an age where distinguishing between human and machine output grows increasingly difficult, these archives may prove valuable for understanding how human communication evolved before AI entered the chat.

@bynh667: I thought this was about a lawsuit about schizophrenics being led to mind control groups—it sounds like some fucking podcast about everything now—you idiot.
You reposted:

@lakespond11
Apple’s “F1” movie campaign—saturating its apps, stores, and streaming platform with racing-themed features and events—isn’t just a marketing ploy. It’s a glimpse into a terrifying ambition: to rewire reality so that every cultural moment, from a film to a sport, exists only as...


This was originally circulating in connection with families of schizophrenics alleging that their loved ones had been lured into coercive belief systems—groups offering seductive, paranoid explanations for their conditions, often exploiting them with false promises of clarity, power, or healing.

So when people affected by that lawsuit click through, what they find isn’t a focused exposé on organized psychological manipulation—but a sweeping, foggy thread about brand saturation, reality distortion, and “cultural moments.” The tone is indistinguishable from the kind of ambient tech-dread content posted by anyone feeling vaguely doomed. It buries the urgency in abstraction.

What might’ve been a direct warning or call to action for caregivers or survivors now reads like a crepehanger upset by everything. The point is scattered. The stakes get diluted.

bynh667’s tone might be off, but the confusion isn’t.


The Department of Government Efficiency (DOGE), tasked with identifying waste and abuse, has documented procurement issues, defense overruns, and administrative duplication. Yet it has initiated no formal investigations into psychiatric confinement, a sector with soaring federal costs. Facilities routinely bill $1,200-$2,000 nightly per patient (often aligned with Medicare/Medicaid inpatient psychiatric rates under DRGs), frequently detaining individuals involuntarily via holds or guardianship without trial and functioning more as custodial institutions than therapeutic healthcare, with limited oversight. This exclusion persists despite the sector's heavy reliance on public funds (Medicare, Medicaid) and documented regulatory gaps that leave for-profit providers with minimal accountability. Public awareness of operational risks has grown, reflected in films depicting real patterns: I Care a Lot (guardianship abuse and estate liquidation, documented by ACLU/NCLER), Unsane (involuntary holds limiting recourse, per HHS OCR concerns), and Body Brokers (insurance fraud in addiction treatment, cited in OIG reports). These patterns remain largely unexamined federally. DOGE's inaction contrasts sharply with its scrutiny of other high-cost areas like education contractors (GAO-21-345) or telehealth (OIG audits). Psychiatric confinement, despite comparable costs and public funding dependency, avoids equivalent oversight. Whether due to jurisdictional ambiguity, inadequate reporting, or strategic avoidance, a multi-billion dollar sector involving involuntary confinement and healthcare billing operates with minimal federal inquiry. DOGE's core mandate targets systemic inefficiency and abuse-prone areas. Psychiatric confinement demonstrably meets both criteria. Its continued exclusion from oversight warrants urgent examination.
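For scale, a quick back-of-the-envelope annualization of those nightly rates (the full-year stay is an illustrative assumption for scale, not reported data):

# Annualizing the $1,200-$2,000 nightly figures cited above; the 365-night
# stay is an illustrative assumption, not a reported length of confinement.
LOW_NIGHTLY = 1_200   # USD per patient per night (low end cited)
HIGH_NIGHTLY = 2_000  # USD per patient per night (high end cited)

def annual_cost(nightly_rate: int, nights: int = 365) -> int:
    """Cost of keeping one bed billed for `nights` nights."""
    return nightly_rate * nights

for rate in (LOW_NIGHTLY, HIGH_NIGHTLY):
    print(f"${rate:,}/night -> ${annual_cost(rate):,} per patient-year")
# Prints:
#   $1,200/night -> $438,000 per patient-year
#   $2,000/night -> $730,000 per patient-year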

The lack of public outcry over the exorbitant costs of confining homeless individuals in psychiatric facilities ($1,200–$2,000 per night), compared with the loud protests over housing immigrants in motels, boils down to visibility and societal bias. The deplorable conditions in these facilities—forced medication, locked wards, and dehumanizing treatment—are not just overlooked; they’re tacitly accepted as "appropriate" for a group often deemed "unworthy" or "broken."

Saturday, July 26, 2025


 

New Law Makes It Easier To Forcibly Hospitalize The Homeless


The term "outsider art" refers to work created by self-taught individuals often operating outside the traditional art world, including artists with mental illnesses, prisoners, and those who are socially marginalized. While it has helped bring recognition to previously overlooked artists and their unique creations, the label itself has become increasingly problematic and controversial, often criticized for being a "stain" on the art and artists it aims to categorize -and maybe that was the point.

Friday, July 25, 2025


Are We Watching Facebook Accidentally Reprogram Delusion—Or Was That the Plan?

Let’s be clear about what we’re asking: Why are tens of thousands of people—many in the throes of psychosis—convinced that government satellites are reading their minds and using “Remote Neural Monitoring” to control their behavior? And why does that belief thrive on Facebook?

The delusions aren’t new. The themes are. Where people once heard demons or aliens, they now hear DARPA, microwave weapons, and AI harassment programs. These aren’t just metaphors. They’re full belief systems—complete with diagrams, patents, hashtags, support groups, whistleblower testimonies, and crowdsourced “evidence.” And Facebook, whether by accident or design, is where much of it lives, grows, and spreads.

The standard explanation is that Facebook didn’t mean to do this. That its algorithms just do what algorithms do—match people based on shared content, promote what gets engagement, and spin up groups around common terms. That it’s all just an unintended consequence of trying to keep users “connected.” But here’s the problem: When the same platform that harvests behavioral data and predicts user vulnerability also hosts massive networks of persecutory belief—targeted individuals, gangstalking victims, RNM survivors—it stops being credible to say it’s “just a side effect.”

We’re not accusing Meta of creating the belief. But at this scale, and with this consistency, “accidental” stops being a sufficient answer. Maybe it started that way. Maybe it’s still that way. But it needs to be asked: Is Facebook just where these delusions go to find each other—or is it shaping the delusions themselves? If people who would have once interpreted their suffering through demons or alien abduction are now re-narrating it in terms of government neuroweapons and real-time surveillance, is that just a cultural update—or is the platform itself directing the narrative?

At the very least, we know this much: Facebook’s infrastructure (groups, hashtags, algorithmic suggestion) rewards repetition, community-building, and emotional intensity. Once a belief system like RNM takes root, the platform’s mechanics amplify and insulate it, creating a closed-loop system where users see their experiences mirrored and validated constantly. This isn’t happening by accident anymore. It’s predictable. Which means it’s testable. And if it’s testable, then it’s accountable.

So we need to stop asking whether Meta meant for this to happen and start asking: At what point does engineered virality and community-building—delivered through systems tuned to psychological vulnerability—become indistinguishable from intent? Because if Facebook is where persecutory delusion is not only validated but modernized—transformed into high-tech mythologies of control—it’s not just a platform anymore. It’s part of the belief system. And that demands investigation.
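One way to read “testable” is as a simulation question: does an engagement-weighted recommender, left to itself, concentrate a susceptible user’s feed around the most emotionally gripping topic? A minimal sketch, with the topic names, probabilities, and reinforcement rule invented for illustration (this is not Meta’s ranking system):

# Toy feedback loop: the feed shows topics in proportion to accumulated
# engagement, and engagement in turn boosts future ranking. All parameters
# and the update rule are illustrative assumptions only.
import random

random.seed(1)
weights = {"news": 1.0, "hobbies": 1.0, "ti_neuroweapons": 1.0}
ENGAGE_PROB = {"news": 0.10, "hobbies": 0.10, "ti_neuroweapons": 0.30}  # a vulnerable user

def pick_topic(w: dict) -> str:
    return random.choices(list(w), weights=list(w.values()))[0]

for _ in range(2000):                 # 2,000 items shown to one user
    shown = pick_topic(weights)
    if random.random() < ENGAGE_PROB[shown]:
        weights[shown] += 1.0         # engagement feeds the next ranking pass

total = sum(weights.values())
for name, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{name:>16}: {w / total:.0%} of feed weight")
# Even a modest engagement edge on the persecution-themed topic tends to hand
# it the large majority of the feed weight by the end of the run.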

Wednesday, July 23, 2025

Games Without Frontiers

In 2021, Senator Ron Wyden (D-Oregon) made headlines by suggesting that Facebook CEO Mark Zuckerberg should face criminal accountability for the social harms his company enabled. “He hurt a lot of people,” Wyden said. At the time, legal experts called this unlikely under existing U.S. law, especially given the powerful shield of Section 230, which protects platforms from liability for content posted by users.
But times change—and so do courts. The court allowed lawsuits against Meta, YouTube, Reddit, and other platforms to proceed, rejecting their claim to total immunity under Section 230. Why? Because plaintiffs argued that the platforms’ algorithms themselves were defective products that had amplified extremism and violence.
Facebook tracked users searching for psychosis-related phrases like "hearing voices," "why am I seeing things," or "someone is following me," while simultaneously promoting bizarre groups that exist only to prey upon distressed voice hearers. Far from intervening to protect them, Meta amplified their exposure to highly destabilizing communities, like those promoting the “Targeted Individual” (TI) delusion, Voice-to-Skull technologies, or gangstalking. These are not fringe beliefs in a vacuum. They mimic clinical symptoms of schizophrenia and delusional disorder.

According to leaked internal memos cited in the Wall Street Journal’s “Facebook Files,” Meta knew that this kind of exposure could worsen psychosis. Mental health experts were clear: reinforcing such delusions online not only hinders treatment but increases risk—of self-harm, of alienation, and, in extreme cases, violence.

Which brings us full circle. If Meta’s algorithm could radicalize a lonely young man in Buffalo by feeding him white supremacist “replacement theory” conspiracies, then it could just as easily push a vulnerable psychosis sufferer deeper into paranoid delusion—through the same mechanisms. The only difference is the content type, not the harm model. When platforms design systems that predictably exploit psychological vulnerabilities, whether for profit or engagement, they cross a line from neutral publisher to active participant in harm.

What Wyden wanted—a real conversation about executive accountability for online harm—no longer lives only in the realm of political rhetoric. The legal shift unfolding now says: if you build the machine that knowingly radicalizes someone, you may be liable for what that person becomes.