Monday, July 28, 2025
Social media companies not liable for 2022 Buffalo mass shooting, New York court rules
July 25, 2025 – Several social media companies should not be held liable for helping radicalize an avowed white supremacist who killed 10 Black people in 2022 at a Buffalo, New York grocery store, a divided New York state appeals court ruled on Friday.
Reversing a lower court ruling, the state Appellate Division in Rochester said defendants including Meta were shielded from the plaintiffs' claims by Section 230 of the federal Communications Decency Act.
The case arose from Payton Gendron's racially motivated mass shooting at Tops Friendly Markets on May 14, 2022.
Relatives and representatives of victims, as well as store employees and customers who witnessed the attack, claimed the defendants' platforms were defective because they were designed to addict and radicalize users like Gendron.
Lawyers for the plaintiffs did not immediately respond to requests for comment.
Other defendants included Alphabet, Amazon.com and Twitch, all of which Gendron used, the mid-level state appeals court said.
Writing for a 3-2 majority, Justice Stephen Lindley said holding social media companies liable would undermine the intent behind Section 230: to promote development of, and competition on, the internet while keeping government interference to a minimum.
While condemning Gendron's conduct and "the vile content that motivated him to assassinate Black people simply because of the color of their skin," Lindley said a liability finding would "result in the end of the Internet as we know it."
"Because social media companies that sort and display content would be subject to liability for every untruthful statement made on their platforms, the Internet would over time devolve into mere message boards," he wrote.
Justices Tracey Bannister and Henry Nowak dissented, saying the defendants force-fed targeted content to keep users engaged, be it videos about cooking or puppies, or white nationalist vitriol.
"Such conduct does not maintain the robust nature of Internet communication or preserve the vibrant and competitive free market that presently exists for the Internet contemplated by the protections of immunity," the judges wrote.
1. The Fact a Judge Considered Revisiting Section 230 Is Itself Telling
Even when claims are ultimately dismissed, the mere willingness of a court to entertain the possibility of liability undercuts the absolute immunity many assume Section 230 guarantees. It signals a shift:
The judiciary may be inching toward redefining what constitutes platform responsibility, especially when algorithmic amplification, community radicalization, or negligence in moderation is alleged.
That’s a warning shot to tech companies: the shield is not impenetrable forever.
2. TI (Targeted Individual) Cases Are Fundamentally Different
What is a platform’s duty of care toward users experiencing delusional thinking, coercive control, or technologically induced psychosis?
If a platform algorithmically targets, amplifies, or gaslights those individuals for profit or engagement—does that not resemble exploitation of diminished capacity?
This is arguably more akin to elder abuse, psychiatric patient neglect, or wrongful institutionalization than to general user-platform interactions.
3. Buffalo Case as a False Precedent for Dismissing TI Harms
In the Buffalo case, the court found no proximate causal link between the platforms and the shooter's actions. But:
The shooter was a fully capacitated agent who used platforms to radicalize himself.
TIs, by contrast, are often the product of those platforms—shaped by feedback loops, targeted by manipulators (human or bot), or coerced into behaviors that may then be used as evidence of their “instability.”
Thus, the “no liability” precedent should not apply cleanly to TIs. The dynamic is more predatory, more intimate, and potentially medically negligent.
4. Analogies and Legal Leverage
Courts might be slow, but the analogies are there:
Pharmaceutical companies held liable for failing to warn vulnerable patients.
Nursing homes liable for not protecting those with dementia from harm.
Schools sued for failing to intervene when vulnerable students are bullied into self-harm.
So why are platforms, profiting off psychologically distressed users, immune?
Sunday, July 27, 2025
Pre-AI Content
Scientists once hoarded pre-nuclear steel; now we’re hoarding pre-AI content
Newly announced catalog collects pre-2022 sources untouched by ChatGPT and AI contamination.
BENJ EDWARDS – JUN 18, 2025 7:15 AM
Former Cloudflare executive John Graham-Cumming recently announced that he launched a website, lowbackgroundsteel.ai, that treats pre-AI, human-created content like a precious commodity—a time capsule of organic creative expression from a time before machines joined the conversation. "The idea is to point to sources of text, images and video that were created prior to the explosion of AI-generated content," Graham-Cumming wrote on his blog last week. The reason? To preserve what made non-AI media uniquely human.
The archive name comes from a scientific phenomenon from the Cold War era. After nuclear weapons testing began in 1945, atmospheric radiation contaminated new steel production worldwide. For decades, scientists needing radiation-free metal for sensitive instruments had to salvage steel from pre-war shipwrecks. Scientists called this steel "low-background steel." Graham-Cumming sees a parallel with today's web, where AI-generated content increasingly mingles with human-created material and contaminates it.
With the advent of generative AI models like ChatGPT and Stable Diffusion in 2022, it has become far more difficult for researchers to ensure that media found on the Internet was created by humans without using AI tools. ChatGPT in particular triggered an avalanche of AI-generated text across the web, forcing at least one research project to shut down entirely.
That casualty was wordfreq, a Python library created by researcher Robyn Speer that tracked word frequency usage across more than 40 languages by analyzing millions of sources, including Wikipedia, movie subtitles, news articles, and social media. The tool was widely used by academics and developers to study how language evolves and to build natural language processing applications. The project announced in September 2024 that it would no longer be updated because "the Web at large is full of slop generated by large language models, written by no one to communicate nothing."
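For a sense of what was lost, here is a minimal sketch of the kind of query wordfreq supported; the values in comments are illustrative, since exact numbers depend on the library version and wordlist:

```python
# Sketch of wordfreq's core API (pip install wordfreq); numbers in
# comments are approximate, for illustration only.
from wordfreq import word_frequency, zipf_frequency, top_n_list

# Frequency of a word as a fraction of all words in the aggregated corpus.
print(word_frequency("internet", "en"))

# Zipf scale: log10 of frequency per billion words; very common words
# like "the" land around 7.
print(zipf_frequency("the", "en"))

# The most frequent English words in the corpus.
print(top_n_list("en", 10))
```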
Some researchers also worry about AI models training on their own outputs, potentially leading to quality degradation over time—a phenomenon sometimes called "model collapse." But recent evidence suggests this fear may be overblown under certain conditions. Research by Gerstgrasser et al. (2024) suggests that model collapse can be avoided when synthetic data accumulates alongside real data, rather than replacing it entirely. In fact, when properly curated and combined with real data, synthetic data from AI models can actually assist with training newer, more capable models.
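A toy simulation makes the accumulate-versus-replace distinction concrete. This is not the experiment from the paper, just a minimal sketch under simplified assumptions: fit a Gaussian to data, sample a synthetic dataset from the fit, and repeat, either discarding or keeping the earlier data.

```python
# Toy model-collapse sketch (illustrative only, not Gerstgrasser et al.'s
# setup): estimation error compounds when each generation trains only on
# the previous generation's output, but stays bounded when synthetic data
# accumulates alongside the original "human" data.
import numpy as np

rng = np.random.default_rng(42)
real = rng.normal(loc=0.0, scale=1.0, size=100)  # original data: N(0, 1)

def final_std(mode: str, generations: int = 200) -> float:
    data = real.copy()
    for _ in range(generations):
        mu, sigma = data.mean(), data.std()          # fit a Gaussian
        synthetic = rng.normal(mu, sigma, size=100)  # sample from the fit
        if mode == "replace":
            data = synthetic                          # discard prior data
        else:
            data = np.concatenate([data, synthetic])  # accumulate
    return data.std()

# The replace regime typically drifts away from the true scale of 1.0
# over many generations; the accumulate regime typically stays close.
print("replace:   ", final_std("replace"))
print("accumulate:", final_std("accumulate"))
```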
A time capsule of human expression
Graham-Cumming is no stranger to tech preservation efforts. He's a British software engineer and writer best known for creating POPFile, an open source email spam filtering program, and for successfully petitioning the UK government to apologize for its persecution of codebreaker Alan Turing—an apology that Prime Minister Gordon Brown issued in 2009.
As it turns out, his pre-AI website isn't new, but it has languished unannounced until now. "I created it back in March 2023 as a clearinghouse for online resources that hadn't been contaminated with AI-generated content," he wrote on his blog.
The website points to several major archives of pre-AI content, including a Wikipedia dump from August 2022 (before ChatGPT's November 2022 release), Project Gutenberg's collection of public domain books, the Library of Congress photo archive, and GitHub's Arctic Code Vault—a snapshot of open source code buried in a decommissioned coal mine in Svalbard, Norway, in February 2020. The wordfreq project appears on the list as well, flash-frozen from a time before AI contamination made its methodology untenable.
The site accepts submissions of other pre-AI content sources through its Tumblr page. Graham-Cumming emphasizes that the project aims to document human creativity from before the AI era, not to make a statement against AI itself. As atmospheric nuclear testing ended and background radiation returned to natural levels, low-background steel eventually became unnecessary for most uses. Whether pre-AI content will follow a similar trajectory remains a question.
Still, it feels reasonable to protect sources of human creativity now, including archival ones, because these repositories may become useful in ways that few appreciate at the moment. For example, in 2020, I proposed creating a so-called "cryptographic ark"—a timestamped archive of pre-AI media that future historians could verify as authentic, collected before my then-arbitrary cutoff date of January 1, 2022. AI slop pollutes more than the current discourse—it could cloud the historical record as well.
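As a rough sketch of how such an ark could work in practice (the directory name and anchoring method here are hypothetical), one could hash every file into a manifest and publicly timestamp the manifest's digest, so a future historian can verify that the exact bytes existed before a given date:

```python
# Hypothetical "cryptographic ark" sketch: build a SHA-256 manifest of an
# archive, then timestamp the manifest's digest via any trusted mechanism
# (an RFC 3161 timestamping authority, a newspaper ad, a blockchain
# anchor). The directory name below is illustrative.
import hashlib
from pathlib import Path

def build_manifest(root: str) -> str:
    """One line per file: '<sha256 hex>  <relative path>'."""
    lines = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            lines.append(f"{digest}  {path.relative_to(root)}")
    return "\n".join(lines)

manifest = build_manifest("pre_ai_archive")  # hypothetical archive folder
ark_digest = hashlib.sha256(manifest.encode("utf-8")).hexdigest()
print("digest to timestamp:", ark_digest)
```

Verifying later means recomputing the manifest from the preserved files and checking that it still hashes to the timestamped digest.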
For now, lowbackgroundsteel.ai stands as a modest catalog of human expression from what may someday be seen as the last pre-AI era. It's a digital archaeology project marking the boundary between human-generated and hybrid human-AI cultures. In an age where distinguishing between human and machine output grows increasingly difficult, these archives may prove valuable for understanding how human communication evolved before AI entered the chat.
@bynh667: I thought this was about a lawsuit about schizophrenics being led to mind control groups—it sounds like some fucking podcast about everything now—you idiot.
You reposted:
@lakespond11
Apple’s “F1” movie campaign—saturating its apps, stores, and streaming platform with racing-themed features and events—isn’t just a marketing ploy. It’s a glimpse into a terrifying ambition: to rewire reality so that every cultural moment, from a film to a sport, exists only as...
This was originally circulating in connection with families of schizophrenics alleging that their loved ones had been lured into coercive belief systems—groups offering seductive, paranoid explanations for their conditions, often exploiting them with false promises of clarity, power, or healing.
So when people affected by that lawsuit click through, what they find isn’t a focused exposé on organized psychological manipulation—but a sweeping, foggy thread about brand saturation, reality distortion, and “cultural moments.” The tone is indistinguishable from the kind of ambient tech-dread content posted by anyone feeling vaguely doomed. It buries the urgency in abstraction.
What might’ve been a direct warning or call to action for caregivers or survivors now reads like a crepehanger upset by everything. The point is scattered. The stakes get diluted.
bynh667’s tone might be off, but the confusion isn’t.
The Department of Government Efficiency (DOGE), tasked with identifying waste and abuse, has documented procurement issues, defense overruns, and administrative duplication. Yet it has initiated no formal investigations into psychiatric confinement, a sector with soaring federal costs.

Facilities routinely bill $1,200–$2,000 nightly per patient (often aligned with Medicare/Medicaid inpatient psychiatric rates under DRGs), frequently detain individuals involuntarily via holds or guardianship without trial, and function more as custodial institutions than as therapeutic healthcare providers, with limited oversight. This exclusion persists despite the sector's heavy reliance on public funds (Medicare, Medicaid) and documented regulatory gaps that leave for-profit providers with minimal accountability.

Public awareness of operational risks has grown, reflected in films depicting real patterns: I Care a Lot (guardianship abuse and estate liquidation, documented by ACLU/NCLER), Unsane (involuntary holds limiting recourse, per HHS OCR concerns), and Body Brokers (insurance fraud in addiction treatment, cited in OIG reports). These patterns remain largely unexamined federally.

DOGE's inaction contrasts sharply with its scrutiny of other high-cost areas like education contractors (GAO-21-345) or telehealth (OIG audits). Psychiatric confinement, despite comparable costs and public-funding dependency, avoids equivalent oversight. Whether due to jurisdictional ambiguity, inadequate reporting, or strategic avoidance, a multi-billion dollar sector involving involuntary confinement and healthcare billing operates with minimal federal inquiry. DOGE's core mandate targets systemic inefficiency and abuse-prone areas; psychiatric confinement demonstrably meets both criteria, and its continued exclusion from oversight warrants urgent examination.
The lack of public outcry over the exorbitant costs of confining homeless individuals in psychiatric facilities ($1,200–$2,000 per night), compared with the outcry over housing immigrants in motels, boils down to visibility and societal bias. The deplorable conditions in these facilities—forced medication, locked wards, and dehumanizing treatment—are not just overlooked; they’re tacitly accepted as "appropriate" for a group often deemed "unworthy" or "broken."
Saturday, July 26, 2025
New Law Makes It Easier To Forcibly Hospitalize The Homeless
The term "outsider art" refers to work created by self-taught individuals often operating outside the traditional art world, including artists with mental illnesses, prisoners, and those who are socially marginalized. While it has helped bring recognition to previously overlooked artists and their unique creations, the label itself has become increasingly problematic and controversial, often criticized for being a "stain" on the art and artists it aims to categorize -and maybe that was the point.



