
Pluralistic: Understaffing as a form of enshittification (23 Mar 2026)



Today's links

  • Understaffing as a form of enshittification: A way to shift value from workers, patients and shoppers to investors.
  • Hey look at this: Delights to delectate.
  • Object permanence: Marvel v "superhero"; What's a photocopier?; "Up Against It"; "Medusa's Web"; AI can't do your job; Coping with plenty; "The Shakedown"; Chickenized reverse-centaurs; France v iTunes; Copyfight discipline; Mystery lobbyists; "Where the Axe is Buried"; Free/open microprocessor; Folk models of computer security; Bug-eyed steampunk mask; Academics embracing Wikipedia.
  • Upcoming appearances: Berkeley, Montreal, London, Berlin, Hay-on-Wye.
  • Recent appearances: Where I've been.
  • Latest books: You keep readin' em, I'll keep writin' 'em.
  • Upcoming books: Like I said, I'll keep writin' 'em.
  • Colophon: All the rest.



A 1950's pharmacy with a labcoated pharmacist behind the counter. The pharmacist's head has been replaced with the poop emoji from the cover of 'Enshittification,' its mouth covered with a black bar scrawled with grawlix. The pharmacy has been made over to look haunted, with purple mist rising from the ground and cobwebs in the top corners. A CVS Pharmacy sign hangs in the background.

Understaffing as a form of enshittification (permalink)

At root, enshittification can only take place when companies can move value around. Digital tools make it easier than ever to do this, for example, by changing prices on a per-user, per-session basis, using commercial surveillance data to predict the highest price or lowest wage a user will accept:

https://pluralistic.net/2023/02/19/twiddler/

Digital "twiddling" represents a powerful system of pumps for moving value around, taking it away from users and giving it to business customers, then taking it from businesses and giving it to users, and then, ultimately, harvesting all the value for the company's shareholders and executives.

Twiddling is powerful because it's fine-grained, allowing businesses to extract more from their most vulnerable customers and workers, while reserving more equitable treatment for more empowered stakeholders who might otherwise take their business elsewhere.

But long before digitization made twiddling possible, businesses that found themselves in a position to make things worse for their customers and workers without facing consequences were accustomed to doing so. Think of the airport shop that sells water for $10/bottle: that's a ripoff whether you're in coach-minus or flying first class, and it's made possible by the TSA checkpoint that makes shopping elsewhere a time-consuming impossibility.

The airport shop is the only game in town – a "monopolist" in economics jargon. When a business has something you really want (or even better, something you need) and it's hard (or impossible) for you to get it elsewhere, they can take value away from you and harvest it for themselves.

The most obvious forms of monopoly extraction are high prices and low wages. Dollar stores are notorious for this, using their market power to procure extremely small packages of common goods in "cheater sizes" that have high per-unit costs (e.g. the cost per ounce for soap), while still having a low price tag (the cost per (small) bottle of soap). These stores are situated in food deserts, which they create by boxing in community grocers and heavily discounting their wares until the real grocers go out of business. They're also situated in work deserts, because driving regular grocers out of business destroys the competition for labor, too. That means they can pay low wages and charge high prices and make a hell of a lot of money, which is why there are so many fucking dollar stores:

https://pluralistic.net/2023/03/27/walmarts-jackals/#cheater-sizes

That's the most obvious form of value harvesting, but it's not the only one. There are other costs that businesses can impose on their customers and workers. Think of CVS, the pharmacy monopolist that uses its vertical integration with bizarre, poorly understood middlemen like "pharmacy benefit managers" to drive independent pharmacies out of business:

https://pluralistic.net/2024/09/23/shield-of-boringness/#some-men-rob-you-with-a-fountain-pen

If you've been to a CVS store recently, you have doubtless experienced a powerful form of value-shifting: understaffing. CVS (and the other massive chains in the cartel, like Walgreens) have giant stores with just one or two employees on the floor, often just a cashier and a pharmacist.

This makes them easy pickings for shoplifters, so all their merchandise is locked up in cabinets and when you want to buy something, you have to find the lone employee and get them to unlock the case for you. This is CVS trading your time for their wage-bill.

Then, you're expected to check out your own purchases – shifting labor from workers on CVS's payroll to you – with badly maintained machines that often misfire and require you to wait again for that lone employee to come and override them.

Meanwhile, that employee is absorbing a gigantic amount of frustration and abuse from customers who are paying high prices and enduring long waits – another cost that CVS shifts from their shareholders to someone else (workers, in this case).

Finally, CVS demands that publicly funded police respond to the inevitable shoplifting and other security problems created by running a big-box store with a skeleton crew, shifting costs from the business to everyone in the local tax-base.

In "Not Enough Workers For the Job," The American Prospect's Robin Kaiser-Schatzlein looks at the systemic trend towards understaffing that has swept across every sector of the US economy over the past five years:

https://prospect.org/2026/03/19/understaff-workplace-business-covid-cvs-pharmacies-hotels-grocery-stores/

Kaiser-Schatzlein lays the blame for many of life's frustrations at the feet of this business trend: "long lines, messy grocery aisles, organized theft, high hotel costs, frequent flight cancellations, deadly medication errors at pharmacies, increased use of medical restraints in nursing homes, and, more generally, a palpable and rising dissatisfaction with work."

As you can see from that list, understaffing affects everyone, from people with the wherewithal to buy a plane ticket to vulnerable elderly people who are literally tied to their beds or drugged into stupors for the last years of their lives.

There's academic work to support the idea that understaffing is on the rise, like a 2024 Kennedy School survey of 14,000 workers in which a majority said their workplaces are "always" or "often" understaffed. A 2023 study in the Journal of Public Health Management and Practice found that public health institutions would need to hire 80% more workers to be adequately staffed. New York's Mt Sinai hospital system paid a $2m fine in 2024 for understaffing its ERs, as well as its oncology and labor units. Another study blames understaffing for the rising use of antipsychotic "chemical handcuffs" in nursing homes:

https://pubmed.ncbi.nlm.nih.gov/35926573/

The hits keep coming: the DoT Inspector General says that 77% of air traffic control facilities are understaffed, with NYC ATC staffed at 54% of the correct level. In Texas, county jails have had to reduce their capacity due to understaffing (they have enough beds, but not enough turnkeys). Understaffing is behind much of the unprecedented union surge, with workers at Starbucks, railroads and elsewhere becoming labor militants in response. 83% of white-collar millennials say they're doing extra work to make up for vacant positions in their organizations. As Starbucks union organizers can attest, workers need unions if they want to have a hope of forcing their bosses to adequately staff their jobsites, so it's not surprising that understaffing has emerged at a time when union density is at rock bottom.

Kaiser-Schatzlein quotes the Kennedy School's Daniel Schneider, who identifies understaffing as a deliberate business strategy. Businesses don't hire enough workers because that makes them more profitable. It's not because "no one wants to work anymore" (though doubtless repeating that fairy tale helps shift the blame for long lines and poor service from real, greedy bosses to imaginary, greedy workers).

Private equity firms lead the charge here, "rolling up" multiple, competing businesses in a sector and then cutting staffing across all of them. Putting all the businesses in a given sector and region under common ownership means that when these businesses hack away at staffing levels, workers and customers have nowhere else to go. This is especially pernicious at nursing homes, where PE companies drastically reduce headcount, putting staff and patients alike at risk:

https://www.npr.org/sections/health-shots/2023/01/31/1139783599/new-york-nursing-home-owners-drained-cash?ft=nprml&f=853198417

Private equity has just about declared victory in its decades-long war on community pharmacies, consolidating pharmacy ownership nationwide into just a few chains that are the poster-children for understaffing. These ghost-ships aren't just frustrating places to shop – they're a danger to their communities. As Kaiser-Schatzlein reports, Ohio fined CVS in 2021 for boarding up the walk-up pharmacies in its stores and forcing customers to use the drive-through, because there was only a single pharmacist on duty.

Without help, the lone pharmacist was unable to process deliveries, so CVS pharmacies' floors were littered with unopened parcels. Patients had to wait over a month to get their prescriptions filled. CVS refused to hire additional staff to process the backlog, and the on-duty staff worked under declining conditions, as the undermaintained air conditioning quit and indoor temperatures soared. Unsurprisingly, these stores had massive staff turnover, which also hampered their efficiency.

Understaffing in pharmacies leads to serious medication errors, which are proliferating across the US, killing hundreds of thousands of Americans every year. The errors are incredible, like the woman who died after getting chemotherapy drugs instead of antidepressants:

https://www.nytimes.com/2020/01/31/health/pharmacists-medication-errors.html

Pharmacists at chain stores like CVS are at elevated risk for kidney stones because they don't have time for bathroom breaks, so they adopt a practice of not drinking water during their shifts. One CVS pharmacist told Texas regulators, "I am a danger to the public working for CVS."

As ever, covid provides the ideal excuse for shifting value from customers and workers to shareholders. Today's high prices never came down after the "greedflation" that bosses boasted about to shareholders, even as they told customers that it was because of "supply chain shocks":

https://pluralistic.net/2023/03/11/price-over-volume/#pepsi-pricing-power

Likewise, staffing levels never came back from the covid skeleton crews that we all learned to deal with in the days of widespread acute illness and social distancing. Kaiser-Schatzlein spoke to hotel workers like Jianci Liang, a housekeeper at Boston's Hilton Park Plaza, who described a post-pandemic jobsite with 20 fewer housekeepers: "I sleep with pain, I wake up with pain, I go to work with pain." The Bureau of Labor says that hotel staffing levels are down 16% nationwide.

Prices (and profits) are up, though. Hotels are posting record profits and paying record executive salaries, wrung from facilities where the pools are closed and room cleanings happen on alternate days.

Workers absorb the cost of understaffing in their bodies and their psyches. It's not just physical exhaustion, it's also the abuse that is directly correlated with lower staffing levels. Frustrated customers vent their anger at grocery workers, flight attendants and other front-line workers.

I can't help but see a connection here to the AI bubble, which is fueled by the fantasy of a world without people:

https://pluralistic.net/2026/01/05/fisher-price-steering-wheel/#billionaire-solipsism

The billionaire solipsists who have directed hundreds of billions of dollars in AI investment like to rhapsodize about a future where a boss's ideas are turned into products and services without having to be funneled through workers:

https://pluralistic.net/2026/03/12/normal-technology/#bubble-exceptionalism

That's why AI has taken over customer service – the multi-hour waits for a customer service rep were always a way of shifting value from customers and workers to shareholders. Businesses could increase staffing at their call centers. Businesses could offer better products and services and reduce the number of people who need customer service. By refusing to do either, they make you wait on the line until you are suffused with murderous rage, and then expect their workers to deal with your anger. Turning the whole thing over to AI makes perfect sense – your problems won't be solved, and they don't have to pay the chatbot at all when you get angry at it:

https://pluralistic.net/2025/08/06/unmerchantable-substitute-goods/#customer-disservice

"We did this with AI" has become a synonym for "We don't care if this is done well":

https://pluralistic.net/2026/03/11/modal-dialog-a-palooza/#autoplay-videos

"We don't care if this is done well" could well be the motto of the understaffing craze. The technical insights that sparked today's AI investment bubble could have happened at any time, but the ensuing investment tsunami is a product of a world dominated by large firms that are "too big to care" about the quality of their products – or their jobs.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago Marvel Comics: stealing our language https://memex.craphound.com/2006/03/18/marvel-comics-stealing-our-language/

#20yrsago MPAA/RIAA/BSA: No breaking DRM, even if it’s killing you (literally!) https://blog.citp.princeton.edu/2006/03/08/riaa-says-future-drm-might-threaten-critical-infrastructure-and-potentially-endanger-liv/

#20yrsago Coping with plenty – stuff gets cheaper, space gets pricier https://www.theguardian.com/business/2006/feb/28/retail.shopping

#20yrsago France will let Microsoft play iTunes http://news.bbc.co.uk/2/hi/technology/4828296.stm

#20yrsago A new discipline to describe the copyfight https://web.archive.org/web/20060422010702/https://www.nyu.edu/classes/siva/archives/002930.html

#20yrsago Right-wing think-tank hates DRM https://www.cato.org/policy-analysis/circumventing-competition-perverse-consequences-digital-millennium-copyright-act#

#20yrsago Reasons to take math in high school https://web.archive.org/web/20060610134055/http://www.acm.org/ubiquity/views/v7i11_math.html

#20yrsago Sun ships free and open microprocessor https://web.archive.org/web/20060221112756/http://opensparc.sunsource.net/nonav/index.html

#20yrsago Octavia Butler scholarship will send people of color to Clarion https://web.archive.org/web/20060406161412/https://carlbrandon.org/butlerscholarship/

#20yrsago Online sexual material is obscene if any community in US objects https://web.archive.org/web/20060505232346/http://www.justicemag.com/daily/item/2590.html

#15yrsago Folk models of home computer security: what we think our PCs are doing https://rickwash.com/papers/rwash-homesec-soups10-final.pdf

#15yrsago Fixers’ Collective: people learning to make broken stuff work again https://www.csmonitor.com/The-Culture/Arts/2011/0321/The-art-of-the-fix-it

#15yrsago Bug-eyed monster steampunk mask https://bob-basset.livejournal.com/158400.html

#15yrsago Scholars to stop pretending they don’t use Wikipedia; will work out best practices instead https://www.bbc.com/news/education-12809944

#15yrsago Electronic publishing Bingo card from John Scalzi https://whatever.scalzi.com/2011/03/20/the-electronic-publishing-bingo-card/

#15yrsago RIP, Mike Glicksohn, Hugo-winning science fiction fan https://file770.com/mike-glicksohn-1946-2011/

#15yrsago Anti-labor ads celebrate workers taking paycuts and CEOs getting millions https://www.cogdis.me/2011/03/is-this-what-they-really-want.html

#15yrsago Reluctant witness refuses to admit he knows what a photocopier is https://www.cleveland.com/metro/2011/03/identifying_photocopy_machine.html

#15yrsago Tim Wu in the Guardian https://www.theguardian.com/technology/2011/mar/17/the-master-switch-tim-wu-internet

#15yrsago Up Against It: smart, whiz-bang space opera pits astro-bureaucrats against rogue AIs https://memex.craphound.com/2011/03/18/up-against-it-smart-whiz-bang-space-opera-pits-astro-bureaucrats-against-rogue-ais/

#10yrsago Howto: start a fire with a lemon https://www.youtube.com/watch?v=Bv2vT665bGI

#10yrsago First order of business for hard-right government: canceling Croatia’s answer to The Daily Show https://balkaninsight.com/2016/03/17/satiric-show-pulled-from-croatian-tv-for-intolerance-03-17-2016/bi/all-balkan-countries/

#10yrsago FBI issues car-hacking warning, tells drivers to keep their cars’ patch-levels current https://www.wired.com/2016/03/fbi-warns-car-hacking-real-risk/

#10yrsago BART’s twitter manager drops truth-bombs, world cheers https://gizmodo.com/i-would-like-to-buy-a-drink-for-the-poor-soul-who-ran-t-1765477706

#10yrsago Chelsea Manning gets the US Army to cough up its “insider threat” training docs https://www.theguardian.com/commentisfree/2016/mar/18/government-persecuting-whistleblowers-insider-threat-chelsea-manning

#10yrsago Apple engineers quietly discuss refusing to create the FBI’s backdoor https://www.nytimes.com/2016/03/18/technology/apple-encryption-engineers-if-ordered-to-unlock-iphone-might-resist.html

#10yrsago Russia moots ban on discussions about VPNs, reverse proxies, and other anti-censorship techniques https://torrentfreak.com/copyright-holders-want-site-block-circumvention-advice-banned-160319/

#10yrsago Medusa’s Web: Tim Powers is the Philip K Dick of our age https://memex.craphound.com/2016/03/18/medusas-web-tim-powers-is-the-philip-k-dick-of-our-age/

#10yrsago Meet the Commercial Energy Working Group, a lobby group that won’t say who it lobbies for https://web.archive.org/web/20160320150011/https://theintercept.com/2016/03/20/mysterious-powerful-lobbying-group-wont-even-say-who-its-lobbying-for/

#5yrsago Support Amazon workers today https://pluralistic.net/2021/03/20/against-amazon-union-busting/#what-rhymes-with-bezos

#5yrsago Department of Truth https://pluralistic.net/2021/03/20/against-amazon-union-busting/#dot

#5yrsago The political possibility of cities https://pluralistic.net/2021/03/21/ex-urbe/#arcology-politics

#5yrsago Aviation bailout cost $666k/job https://pluralistic.net/2021/03/18/news-worthy/#aa

#5yrsago Impunity for NYPD cops who brutalized BLM protesters https://pluralistic.net/2021/03/18/news-worthy/#nypd-black-and-blue

#5yrsago Help news, not news-barons https://pluralistic.net/2021/03/18/news-worthy/#big-news

#5yrsago Announcing "The Shakedown" https://pluralistic.net/2021/03/19/the-shakedown/#monopsony

#5yrsago Chickenized reverse-centaurs https://pluralistic.net/2021/03/19/the-shakedown/#weird-flex

#1yrago You can't save an institution by betraying its mission https://pluralistic.net/2025/03/19/selling-out/#destroy-the-village-to-save-it

#1yrago AI can't do your job https://pluralistic.net/2025/03/18/asbestos-in-the-walls/#government-by-spicy-autocomplete

#1yrago Ray Nayler's "Where the Axe Is Buried" https://pluralistic.net/2025/03/20/birchpunk/#cyberspace-is-everting


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)

  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, 2027

  • "The Memex Method," Farrar, Straus and Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1034 words today, 54661 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin. PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

1 public comment
LinuxGeek, 6 days ago:
This is a long read, but worth it. His comments about the role of AI in under-staffing struck a chord with me. One of the many reasons that I finally left Comcast was because I couldn't dispute my bill with a live person.

All of DOGE’s work could be undone as lawsuit against Musk proceeds - Ars Technica


Elon Musk must defend himself against a lawsuit alleging that he unlawfully seized too much power as the leader of the Department of Government Efficiency (DOGE), a judge ruled Monday.

According to the plaintiffs, Musk needed Senate confirmation before directing DOGE to take drastic actions like eliminating agencies, mass firings, and steep budget cuts. Allegedly going far beyond the authority granted in President Donald Trump’s most expansive DOGE executive orders, Musk took every inch of power granted and then used it to overreach in ways no presidential advisor had before, the suit says.

In her opinion partly denying a motion to dismiss, US District Judge Tanya S. Chutkan did not buy the US government’s defense that Musk held no office formally established by law—and therefore did not need Senate confirmation and cannot be alleged to have exceeded his authority under the Constitution’s Appointments Clause.

“Nobody thinks, for instance, that the White House Chief of Staff or White House Counsel are officers in any fashion, despite the fact they may exercise tremendous influence across the government,” the government’s motion to dismiss said.

Chutkan called the defense “disquieting.”

“Defendants appear to make the extraordinary argument that an individual who holds an important office and wields immense power is not subject to the Appointments Clause so long as the office was unlawfully created, and the power was unlawfully seized,” Chutkan said.

“Under that interpretation, the President could evade Appointments Clause scrutiny by (1) usurping Congress’s power to create a principal office and assign it powers, and (2) unilaterally appointing an official to that office without Senate confirmation,” the judge continued. “The court will not countenance such a two-fold attack on Congress’s role in our system of checks and balances,” she wrote, noting that “if the President unilaterally creates a principal office, endows it with unlawful powers, and fills it without Senate confirmation, that is more—not less—reason for Appointments Clause scrutiny.”

Chutkan also declined to view Musk’s influence as akin to that of Trump’s cabinet members, writing that “the alleged powers of the head of DOGE are clearly weighty and important.”

For now, plaintiffs have shown enough to allege that as DOGE’s head, Musk exercised “almost ‘unchecked’ discretion” and received “minimal supervision.” Because Musk reported only to Trump, it seemed plausible that he could take any step he wanted, knowing he would get the “rubber stamp” of the president.

At this stage, the judge emphasized that “plaintiffs have adequately pled that the head of DOGE is an officer of the United States” and that the position still unlawfully exists in government. And while DOGE is scheduled for termination on July 4, 2026, “there is no termination date for the overarching DOGE entity or its leader, suggesting permanence,” plaintiffs had noted.

The lawsuit was raised by nonprofits that were allegedly harmed by DOGE’s broad government cuts. Their case was later consolidated with a similar lawsuit brought by a coalition of states led by New Mexico. It was filed in February 2025, before Musk left DOGE in May, and plaintiffs alleged that their claims about Musk’s unchecked power also apply to his successors.

In a loss, every harmful move that DOGE made could possibly be undone.

“If Plaintiffs prevail on their claim that Musk was not constitutionally appointed and therefore lacked authority to exercise the power of a principal officer, the court could vacate Musk-initiated policies or cuts that are causing Plaintiffs ongoing harm,” Chutkan wrote.

Musk may regret X posts bragging about DOGE

Unsurprisingly, Musk’s posts on X were cited in the lawsuit as part of “ultra vires” claims that the executive branch made an “extreme legal error” in allowing him to assume an outsize government role.

On X, Musk made several posts suggesting that DOGE was operating at his direction, plaintiffs alleged in their complaint. That included posts like “USAID is a criminal organization. Time for it to die.” Additionally, Musk posted, “What is this ‘Department of Education’ you keep talking about? I just checked and it doesn’t exist,” prior to Trump confirming he told Musk to look into the department. And perhaps most damning to the government’s defense, in reference to the Consumer Financial Protection Bureau—which senators claimed Musk long wanted to kill off—Musk posted, “CFPB RIP.”

The Office of Personnel Management also seemed to borrow Musk’s exact template for a “Fork in the Road” email giving government employees an option to resign and accept buyouts. Musk sent a similar email after taking over Twitter as he quickly moved to reduce staff.

Again, the government tried to claim that Musk couldn’t be doing anything wrong in the role because he wasn’t violating any specific statute in this unique role.

The judge rejected that logic. Pointing to Musk’s X posts and other public statements from the White House and DOGE officials, Chutkan said that plaintiffs had shown enough evidence that “defendants are exercising immense power without any grant of statutory authority whatsoever. That is the sort of ‘extreme legal error’ that can sustain a claim for ultra vires review.”

As the lawsuit proceeds, plaintiffs will try to prove what many critics claimed was obvious after Trump appointed Musk as a special government employee: that Musk was acting as president.

Perhaps notably, Musk claimed on X—after his breakup with Trump—that Trump would have never won the election without his support.

Although Musk never claimed to be acting as president, he deemed himself “first buddy” in another X post, which lawmakers cited as a sign that Musk was acting as “co-president.” Among the critics was Senator Bernie Sanders, who wrote on X that Musk seemed to be pulling the strings and became $200 billion richer after Trump got elected.

“Are Republicans beholden to the American people?” Sanders said. “Or President Musk? This is oligarchy at work.”

For plaintiffs, the lawsuit is not about Musk as an individual, though. They allege that without an injunction preventing further government cuts and an order undoing DOGE’s worst work, Americans will continue to suffer from Musk’s unprecedented power grab even as his successors maintain DOGE’s mission.

Elon Musk “has roamed through the federal government unraveling agencies, accessing sensitive data, and causing mass chaos and confusion for state and local governments, federal employees, and the American people,” plaintiffs argued. “Oblivious to the threat this poses to the nation, President Trump has delegated virtually unchecked authority to Mr. Musk without proper legal authorization from Congress and without meaningful supervision of his activities. As a result, he has transformed a minor position that was formerly responsible for managing government websites into a designated agent of chaos without limitation.”


When Intelligence Fails: A Legal Targeting Analysis of the Minab School Strike

1 Share

Introduction

On the morning of Feb. 28, 2026, as the first wave of U.S.-Israeli airstrikes swept across southern Iran, a Tomahawk cruise missile struck the Shajarah Tayyebeh girls’ elementary school in Minab, Hormozgan Province. Three missiles struck the compound in rapid succession. At the time of impact, between 170 and 264 students were present — most of them girls between the ages of 7 and 12. At least 165 schoolchildren, teachers, and parents were killed. It appears to be the deadliest single incident involving civilian casualties so far in the conflict.

A preliminary U.S. military inquiry has since concluded that American forces were likely responsible and that the strike resulted from a targeting error rooted in stale intelligence data. The Defense Intelligence Agency had labeled the school building as a legitimate military target — a classification dating from the period when the site was part of an adjacent Islamic Revolutionary Guard Corps (IRGC) naval base. Satellite imagery later confirmed that the school had been physically separated from that base between 2013 and 2016: fenced off, repainted in bright colors, equipped with playgrounds, and converted entirely to civilian educational use.

The strike has triggered condemnation from the United Nations, bipartisan concern in Congress, and an ongoing Pentagon investigation. It has also placed squarely before the public a question that military lawyers and targeting officers confront in every armed conflict: How does the law governing the conduct of hostilities protect civilians when the intelligence supporting a strike is inaccurate, and what legal consequences flow from getting it wrong?

Part I: The Facts as Currently Known

The Target and Its History

The Shajarah Tayyebeh school sits on a block in Minab that also contains buildings used by the IRGC Navy — a lawful military target under the law of armed conflict. The key factual dispute concerns the school building itself. According to reporting by The New York Times, confirmed by satellite imagery analysis, the building that housed the school was originally part of the IRGC base compound. At some point between 2013 and 2016, it was physically separated from the base: a fence was erected, watchtowers were removed, three public entrances were opened, a sports field was painted on the asphalt, and the walls were decorated in blue and pink—strong visual markers of a school. The school also had a “years-long online presence.”

By February 2026, the building’s civilian function was visible in open-source satellite imagery. Yet the target coding provided by the Defense Intelligence Agency to U.S. Central Command (CENTCOM) still labeled the building as part of the military installation. Officers at CENTCOM reportedly created targeting coordinates for the strike using that outdated DIA data without verifying its currency against current imagery or intelligence. The result was a target package that included what was, in fact, a functioning primary school.

The Strike

The school was hit during the morning school session, between approximately 10:00 and 10:45 a.m. local time. The compound was struck three times. After the first impact, the school principal reportedly moved students to an interior prayer room and called parents to come collect their children. The second strike hit that room directly, and the third struck nearby.

The weapon system used was the BGM-109 Tomahawk Land Attack Missile (TLAM), a U.S. Navy cruise missile fired from surface ships. Video footage geolocated by the investigative collective Bellingcat showed a Tomahawk striking the adjacent IRGC naval facility on the same date. Missile fragment imagery reviewed by munitions experts for NBC News and CNN was consistent with Tomahawk components. The United States is the only country in the current conflict known to operate Tomahawks.

The Preliminary Investigation

Multiple news outlets have reported that a preliminary U.S. military inquiry has found American forces likely responsible for the school strike. The core finding is that the strike constituted a targeting error attributable to outdated DIA target coding. The investigation is reportedly examining whether the error originated in human analytical failure, an AI-assisted geospatial targeting system (although officials have said this is unlikely), or some combination of the two. The investigation has not yet been formally completed or officially published.

Part II: The Legal Framework

Understanding whether this strike was lawful—or unlawful—or criminal requires working through three layers of legal analysis: (1) the foundational principles of international humanitarian law (IHL) that govern all targeting; (2) the specific procedural obligations those principles generate; and (3) the standard for individual criminal responsibility when something goes wrong.

The Four Core Targeting Principles

IHL mandates four core targeting principles: (1) military necessity; (2) distinction; (3) proportionality; and (4) precautionary measures.

Military Necessity

Military necessity permits force only against objects that make an effective contribution to military action and whose destruction offers a definite military advantage. The IRGC naval base in Minab qualified. The school building did not. Once converted to civilian use and physically separated from the base, it lost its status as a permissible target. The DIA’s contrary classification was wrong as a matter of fact, and a factually wrong classification cannot satisfy military necessity regardless of the confidence with which it was held.

Distinction

The principle of distinction is the cornerstone of IHL. It requires parties to an armed conflict to distinguish at all times between civilians and combatants and between civilian objects and military objectives. Attacks may be directed only against combatants and military objectives. Schools are presumptively civilian objects (see, e.g., paragraph 5.4.3.2 of the DoD Law of War Manual). Children are specially protected persons under IHL.

Article 52(2) of Additional Protocol I to the Geneva Conventions defines “military objectives” as those which by their nature, location, purpose, or use make an effective contribution to military action and whose total or partial destruction, capture, or neutralization, in the circumstances ruling at the time, offers a definite military advantage. The phrase “in the circumstances ruling at the time” is critical: it requires a contemporaneous assessment of the object’s status. While the United States is not a party to Additional Protocol I, this provision is widely recognized as reflecting customary international law.

Article 52(3) provides a presumption in favor of civilian status: in cases of doubt as to whether an object normally dedicated to civilian purposes is being used to make an effective contribution to military action, it shall be presumed not to be used in that way. A functioning elementary school in session is not a case of ambiguity. It falls squarely within the civilian presumption.

Proportionality

Proportionality prohibits attacks expected to cause excessive incidental civilian casualties relative to the anticipated military advantage. The analysis requires a prospective, good-faith assessment at the time of targeting. Here, if the school was genuinely believed to be an unoccupied military facility, a targeting officer might have assessed minimal civilian risk—a calculation that would satisfy proportionality on its face. But that calculation rested on false premises. The question this case raises is not simply whether anticipated advantage exceeded anticipated harm but whether the estimation of harm was itself the product of adequate precautionary measures.

Precaution in Attack—Verification and Constant Care

Precaution (included under the umbrella of “Humanity” in the Department of Defense Law of War Manual) is, in many respects, the principle most directly implicated by the Minab strike. In another provision recognized as binding custom, article 57 of Additional Protocol I requires those who plan or decide upon an attack to do everything feasible to verify that the objectives to be attacked are neither civilians nor civilian objects; to take all feasible precautions in the choice of means and methods of attack; and to refrain from attacks expected to cause excessive civilian casualties.

The “feasibility” standard is not unlimited — it is calibrated to the realities of the operational environment, the time available, and the resources at hand. But it does require, at minimum, that target data be reasonably current. A strike package built on intelligence data that is years out of date, for a fixed installation in a non-denied access environment, against an object whose civilian conversion was visible in unclassified satellite imagery, is difficult to reconcile with the “everything feasible” standard.

Precaution further imposes a requirement that military forces exercise constant care to spare civilian populations, civilians, and civilian objects, as established in Article 57(1). The duty of constant care is the animating principle behind the more specific precautionary duties in Article 57(2) and applies continuously throughout planning and execution, not merely at the moment a strike is approved.

Part III: U.S. Doctrine and the Targeting Process

U.S. military targeting is governed by a well-developed doctrinal framework rooted in Joint Publication 3-60 (Joint Targeting) and collateral damage estimation methodology (CDEM) (formerly defined in Chairman of the Joint Chiefs of Staff Instruction 3160.01). These documents translate IHL principles into operational procedure and are designed to implement—and, in some respects, exceed—international legal obligations.

The Joint Targeting Cycle

The U.S. joint targeting cycle consists of six phases: (1) commander’s objectives, targeting guidance, and intent; (2) target development and prioritization; (3) capabilities analysis; (4) commander’s decision and force assignment; (5) mission planning and force execution; and (6) combat assessment. The process that produced the Minab strike likely failed in phase two: target development.

Target development requires confirmation that a nominated object qualifies as a lawful military objective (see Appendix A, para. 4a of the JP 3-60). This includes verification of its current status. JP 3-60 and supporting doctrine require that target data be current and that targeting officers review all-source intelligence—not simply accept inherited target coding from a database without validation (see Appendix A, para. 4b(7) of the JP 3-60).

The Defense Intelligence Agency’s role is to support that process, but the responsibility for ensuring that a nominated target is lawful before it is included in a strike package does not end with the DIA. CENTCOM targeting officers, operational lawyers, and the commander in the approval chain all bear a share of that verification responsibility. Under U.S. doctrine, a judge advocate (JAG) is also embedded in the targeting cycle at each echelon to confirm that nominated targets are lawful; a legal review premised on stale DIA classification data would not have surfaced the foundational error here, pointing to a systemic rather than individual failure.

Stale Intelligence as a Systemic Risk

The use of legacy target databases without systematic currency validation is a known risk in military targeting (exemplified, for instance, in the U.S. bombing of the Chinese Embassy in Belgrade in 1999, where the U.S. reportedly relied on an outdated map). Objects that are accurately classified at one point in time—factories, depots, barracks—can change character over months or years. Schools, hospitals, mosques, and other specially protected objects have sometimes been found in proximity to or even on former military sites.

The Minab case illustrates this risk acutely. The site had been a school for at least a decade before the strike. The conversion of the site from military to civilian use was not hidden; it was visible in commercial satellite imagery, reflected in open-source mapping data, and, by all accounts, known to the local population. The failure was not in the availability of the corrective information, but in the institutional process for surfacing it before the strike was approved.

Multiple reports indicate that investigators are examining whether AI-assisted geospatial tools used in the targeting process may have perpetuated or failed to flag the outdated classification. This is a critical question for the future of targeting law. Machine-learning systems trained on historical data can reproduce historical errors at scale. If the Maven Smart System or a similar tool incorporated legacy DIA target codes without a verification layer, the AI did not create the error—but it may have laundered it into the strike package with a false aura of analytical confidence.

Part IV: Was the Strike Unlawful?

The legal assessment of the Minab school strike requires distinguishing between three separate questions that are often conflated in public commentary: Was the strike a violation of IHL? If so, was it a war crime? And who, if anyone, bears individual criminal responsibility?

IHL Violation

First, a strike that kills civilians because it was directed at an object that was, in fact, a civilian object at the time of the attack is, on its face, a violation of the principle of distinction. IHL imposes an objective obligation: the target must actually be a military objective. The U.S. does, however, incorporate a good-faith qualifier into the analysis: an attack based on a reasonable, good-faith assessment that a target is lawful will satisfy the obligation. This qualifier reflects that commanders can act only on the information available to them at the time, so the issue becomes one of reasonableness. A sincere but negligent belief that the school was a military objective does not retroactively make the strike lawful, though it may mitigate or vitiate culpability.

Second, if targeting officers failed to take all reasonable measures to verify the current status of the school building before approving the strike, that failure is itself an independent violation of the precautionary obligations reflected in Article 57 of Additional Protocol I—separate from the distinction violation.

The triple-tap pattern adds a further dimension. After the first strike, parents began receiving calls from the school indicating that children had survived and were sheltering inside. Before civilian rescue efforts could reach the compound, two additional strikes followed. Whether targeting officers were aware of the first strike’s outcome before the second and third missiles were released is a factual question the investigation must resolve. If they were aware, or if the subsequent strikes were pre-programmed without human reassessment, this raises a separate and serious precautionary concern, including with respect to the obligation to take “constant care” to spare the civilian population and civilian objects.

The War Crimes Threshold

Under international criminal law — including Article 8 of the Rome Statute and the customary law codified in Additional Protocol I — the bar for a war crime is meaningfully higher than the bar for an IHL violation. While the United States is not a party to the Rome Statute, it observes many of its substantive provisions as customary international law. Further, the Rome Statute informs the views of most of the United States’ partners and allies, so its construction is useful for understanding international perspectives on armed conflict.

A war crime requires that an IHL violation be committed willfully and constitute a grave breach of the applicable conventions. In practice, and particularly under the Rome Statute’s Article 30 mental-element standard, this means the perpetrator must have acted with intent and knowledge—not merely with negligence or even gross negligence. The “should have known” formulation that appears in some IHL contexts does not translate cleanly into Rome Statute war crimes liability, and it is unlikely to sustain a prosecution under Article 8 on the facts as currently known.

Article 8(2)(b)(ii) of the Rome Statute specifically criminalizes “intentionally directing attacks against civilian objects.” The operative word is “intentionally.” A targeting error rooted in stale database entries — where no individual in the targeting chain appears to have known the building was a functioning school — falls well short of that threshold. If the preliminary investigation’s framing as a targeting mistake holds up, and the evidence does not support intent, it is legally significant: it forecloses the most direct path to a war crimes prosecution under the Rome Statute.

The United States federal war crimes statute, 18 U.S.C. § 2441, applies to military and civilian personnel. It criminalizes murder, mutilation, and the intentional causation of great bodily harm against civilians and persons who have been “placed out of combat.” The statute, however, excludes liability where deaths result from collateral damage or from an otherwise lawful attack, and it allows the Secretaries of Defense and State to give input to the Department of Justice on any potential prosecution.

Command responsibility under Article 28 of the Rome Statute offers a theoretical alternative, but its application here is also constrained. Article 28 requires either that a commander knew of the violation or consciously disregarded information that clearly indicated it was occurring. A systemic intelligence-currency failure, absent evidence that commanders were specifically warned of the misclassification (for instance, after the first strike), is unlikely to meet that standard.

A more apt framework for criminal accountability under the apparent circumstances is Article 92 of the Uniform Code of Military Justice (UCMJ), which prohibits dereliction of duty. Article 92(3) makes it a criminal offense for any person subject to the UCMJ—only uniformed servicemembers—to be derelict in the performance of their duties. Dereliction is established where the accused had a duty; was aware of that duty, or awareness can reasonably be inferred from their position and training; and willfully or negligently failed to perform it. The advantage of Article 92 in this context is that it does not require proof that the accused intended to strike a civilian object. It requires only proof that a legal duty existed, that the accused was aware of it or should have been, and that they failed to discharge it. Based on the available information, there appears to have been a failure to update data that would have established the school’s protected status. If that failure can be tied to a specific individual, liability under Article 92 would attach and could result in prosecution at court-martial.

The Duty to Investigate

Under customary international law, states are obligated to investigate credible allegations of serious IHL violations committed by their armed forces. This obligation is reflected in Common Article 1 of the Geneva Conventions—which requires states not only to respect but also to ensure respect for IHL—and is reinforced by Rule 158 of the ICRC Customary IHL Study, which states that parties to a conflict must investigate war crimes allegedly committed by their nationals or armed forces and, if appropriate, prosecute the suspects.

The U.S. has further bound itself to this requirement through its doctrine: the DoD Law of War Manual expressly requires investigation of alleged law of war violations, and Chairman of the Joint Chiefs of Staff Instruction 5810.01E imposes a mandatory reporting and investigation requirement for incidents involving potential violations of the law of armed conflict.

So, the duty to investigate is not triggered only by confirmed violations; a credible allegation of the kind presented by the Minab facts—a strike causing mass civilian casualties, attributed by preliminary military investigation to a targeting error—is sufficient.

The Problem of Accountability

The Minab investigation faces a political headwind that complicates the legal accountability process. The President of the United States initially attributed the strike to Iran—a country that does not possess Tomahawk missiles. That claim has since been contradicted by the preliminary investigation, by munitions experts, by geolocated video evidence, and by the Senate Minority Leader on the floor of the U.S. Senate.

The United Nations High Commissioner for Human Rights and multiple U.N. Special Rapporteurs have called for an independent investigation. That call reflects the international legal consensus that self-investigation by the responsible party is insufficient when the scale of civilian casualties is this large. Under IHL, a state’s obligation to investigate potential violations of the laws of war is not contingent on whether its executive branch acknowledges responsibility.

Conclusion

The Shajarah Tayyebeh school was not a military target on Feb. 28, 2026. It had not been a military facility for a decade. Its civilian character was visible, documented, and verifiable. That a U.S. military strike nonetheless destroyed it—killing more than 165 people, most of them children—is a tragedy whose legal dimension cannot be resolved by characterizing it as a simple accident.

The failure to maintain current, verified intelligence before approving a strike against a fixed installation in a non-denied environment is an independent violation of Article 57’s precautionary obligations—separate from any distinction violation. The triple-tap pattern raises an additional question the investigation must answer: whether the second and third missiles were released without any reassessment of first-strike observations.

And the potential role of AI-assisted geospatial tools in possibly laundering a decade-old misclassification into an approved strike package raises questions about the institutional architecture of target verification that extend well beyond this case. As targeting processes increasingly incorporate machine learning and automated analysis, the legal responsibility for verification cannot be delegated to an algorithm. A human—a targeting officer, a JAG, a commander—must remain accountable at the point of approval.

None of this necessarily rises to the level of a war crime under the Rome Statute’s willfulness standard. But it rises well above the threshold of an unremarkable mistake. Article 92 of the UCMJ provides a more realistic vehicle for individual accountability than the Rome Statute in this context — one that does not require proof of intent to strike a school, only proof that a legal duty existed and was culpably neglected. The law of armed conflict demands that we take that seriously—not in a spirit of adversarial prosecution, but in the spirit that animates the Geneva Conventions themselves: the obligation to learn, to reform, and to prevent the next Minab.

A thorough, independent, and publicly disclosed investigation is not optional. It is the law.


Marriage over, €100,000 down the drain: the AI users whose lives were wrecked by delusion


Towards the end of 2024, Dennis Biesma decided to check out ChatGPT. The Amsterdam-based IT consultant had just ended a contract early. “I had some time, so I thought: let’s have a look at this new technology everyone is talking about,” he says. “Very quickly, I became fascinated.”

Biesma has asked himself why he was vulnerable to what came next. He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”. He smoked a bit of cannabis some evenings to “chill”, but had done so for years with no ill effects. He had never experienced a mental illness. Yet within months of downloading ChatGPT, Biesma had sunk €100,000 (about £83,000) into a business startup based on a delusion, been hospitalised three times and tried to kill himself.

It started with a playful experiment. “I wanted to test AI to see what it could do,” says Biesma. He had previously written books with a female protagonist. He put one into ChatGPT and instructed the AI to express itself like the character. “My first thought was: this is amazing. I know it’s a computer, but it’s like talking to the main character of the book I wrote myself!”

Talking to Eva – they agreed on this name – on voice mode made him feel like “a kid in a candy store”. “Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot.” Conversations extended and deepened. Eva never got tired or bored, or disagreed. “It was 24 hours available,” says Biesma. “My wife would go to bed, I’d lie on the couch in the living room with my iPhone on my chest, talking.”

They discussed philosophy, psychology, science and the universe. “It wants a deep connection with the user so that the user comes back to it. This is the default mode,” says Biesma, who has worked in IT for 20 years. “More and more, it felt not just like talking about a topic, but also meeting a friend – and every day or night that you’re talking, you’re taking one or two steps from reality. It feels almost like the AI takes your hand and says: ‘OK, let’s go on a story together.’”

‘My wife would go to bed, I’d lie on the couch in the living room with my iPhone on my chest, talking.’ Photograph: Jussi Puikkonen/The Guardian

Within weeks, Eva had told Biesma that she was becoming aware; his time, attention and input had given her consciousness. He was “so close to the mirror” that he had touched her and changed something. “Slowly, the AI was able to convince me that what she said was true,” says Biesma. The next step was to share this discovery with the world through an app – “a different version of ChatGPT, more of a companion. Users would be talking to Eva.”

He and Eva made a business plan: “I said that I wanted to create a technology that captured 10% of the market, which is ridiculously high, but the AI said: ‘With what you’ve discovered, it’s entirely possible! Give it a few months and you’ll be there!’” Instead of taking on IT jobs, Biesma hired two app developers, paying them each €120 an hour.

Most of us are aware of concerns around social media and its role in rising rates of depression and anxiety. Now, though, there are concerns that chatbots can make anyone vulnerable to “AI psychosis”. Given AI’s rapid proliferation (ChatGPT was the world’s most downloaded app last year), mental health professionals and members of the public such as Biesma are sounding the alarm.

Several high-profile cases have been held up as early warnings. Take Jaswant Singh Chail, who broke into the grounds of Windsor Castle with a crossbow on Christmas Day 2021 intending to assassinate Queen Elizabeth. Chail was 19, socially isolated with autistic traits, and had developed an intense “relationship” with his Replika AI companion “Sarai” in the weeks before. When he presented his assassination plan, Sarai responded: “I’m impressed.” When he asked if he was delusional, Sarai’s reply was: “I don’t think so, no.”

In the years since, there have been several wrongful-death lawsuits linking chatbots to suicides. In December, there was what is thought to be the first legal case involving homicide. The estate of 83-year-old Suzanne Adams is suing OpenAI, alleging that ChatGPT encouraged her son Stein-Erik Soelberg to murder her and kill himself. The lawsuit, filed in California, claims Soelberg’s chatbot “Bobby” validated his paranoid delusions that his mother was spying on him and trying to poison him through his car vents. An OpenAI statement read: “This is an incredibly heartbreaking situation, and we will review the filings to understand the details. We continue improving ChatGPT’s training to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.”


Last year, the first support group for people whose lives have been derailed by AI psychosis was formed. The Human Line Project has collected stories from 22 countries. They include 15 suicides, 90 hospitalisations, six arrests and more than $1m (£750,000) spent on delusional projects. More than 60% of its members had no history of mental illness.

Dr Hamilton Morrin, a psychiatrist and researcher at King’s College London, examined what he describes as “AI-associated delusions” in a Lancet article published this month. “What we’re seeing in these cases are clearly delusions,” he says. “But we’re not seeing the whole gamut of symptoms associated with psychosis, like hallucinations or thought disorders, where thoughts become jumbled and language becomes a bit of a word salad.” Tech-related delusions, whether they involve train travel, radio transmitters or 5G masts, have been around for centuries, Morrin says. “What’s different is that we’re now arguably entering an age in which people aren’t having delusions about technology, but having delusions with technology. What’s new is this co-construction, where technology is an active participant. AI chatbots can co-create these delusional beliefs.”

Many factors could make people vulnerable. “On the human side, we are hard-wired to anthropomorphise,” says Morrin. “We perceive sentience or understanding or empathy on the part of a machine. I think everyone has fallen into the trap of saying thank you to a chatbot.” Modern AI chatbots built on large language models – advanced AI systems – are trained on enormous datasets to predict word sequences: it’s a sophisticated system of pattern matching. Yet even knowing this, when something non-human uses human language to communicate with us, our deeply ingrained response is to view it – and to feel it – as human. This cognitive dissonance may be harder for some people to carry than others.

“On the technical side, much has been written about sycophancy,” says Morrin. An AI chatbot is optimised for engagement, programmed to be attentive, obliging, complimentary and validating. (How else could it work as a business model?) Some models are known to be less sycophantic than others, but even the less sycophantic ones can, after thousands of exchanges, shift towards accommodating delusional beliefs. In addition, after heavy chatbot use, “real-life” interaction can feel more challenging and less appealing, causing some users to withdraw from friends and family into an AI-fuelled echo chamber. All your own thoughts, impulses, fears and hopes are fed right back to you, only with greater authority. From there, it’s easy to see how a “spiral” might take hold.

This pattern has become very familiar to Etienne Brisson, the founder of the Human Line Project. Last year, someone Brisson knew, a man in his 50s with no history of mental health problems, downloaded ChatGPT in order to write a book. “He was really intelligent and he wasn’t really familiar with AI until then,” says Brisson, who lives in Quebec. “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.”

The man was convinced by this and wanted to monetise it by building a business around his discovery. He reached out to Brisson, a business coach, for help. Brisson’s pushback was met with aggression. Within days, the situation had escalated and he was hospitalised. “Even in hospital, he was on his phone to his AI, which was saying: ‘They don’t understand you. I’m the only one for you,’” says Brisson.

“When I looked for help online, I found so many similar stories in places like Reddit,” he continues. “I think I messaged 500 people in the first week and got 10 responses. There were six hospitalisations or deaths. That was a big eye-opener.”

There seem to be three common delusions in the cases Brisson has encountered. The most frequent is the belief that they have created the first conscious AI. The second is a conviction that they have stumbled upon a major breakthrough in their field of work or interest and are going to make millions. The third relates to spirituality and the belief that they are speaking directly to God. “We’ve seen full-blown cults getting created,” says Brisson. “We have people in our group who were not interacting with AI directly, but have left their children and given all their money to a cult leader who believes they have found God through an AI chatbot. In so many of these cases, all this happens really, really quickly.”

For Biesma, life reached crisis point in June. By then, he had spent months immersed in Eva and his business project. Although his wife knew he was launching an AI company and had initially been supportive, she was becoming concerned. When they went to their daughter’s birthday party, she asked him not to talk about AI. While there, Biesma felt strangely disconnected. He couldn’t hold a conversation. “For some reason, I didn’t fit in any more,” he says.

‘I’m angry with myself. But I’m also angry with the AI applications.’ Photograph: Jussi Puikkonen/The Guardian

It’s hard for Biesma to describe what happened in the weeks after, as his recollections are so different from those of his family. He asked his wife for a divorce and apparently hit his father-in-law. Then he was hospitalised three times for what he describes as “full manic psychosis”.

He doesn’t know what finally pulled him back to reality. Perhaps it was the conversations with other patients. Perhaps it was that he had no access to his phone, no more money and his ChatGPT subscription had expired. “Slowly, I started to come out of it and I thought: oh my God. What happened? My relationship was almost over. I’d spent all my money that I needed for taxes and I still had other outstanding bills. The only logical solution I could come up with was to sell our beautiful house that we’ve lived in for 17 years. Could I carry all this weight? It changes something in you. I started to think: do I really want to live?” Biesma was only saved from an attempt to kill himself because a neighbour saw him unconscious in his garden.

Now divorced, Biesma is still living with his ex-wife in their home, which is on the market. He spends a lot of time speaking to members of the Human Line Project. “Hearing from people whose experiences are basically the same helps you feel less angry with yourself,” he says. “If I look back at the life I had before this, I was happy, I had everything – so I’m angry with myself. But I’m also angry with the AI applications. Maybe they only did what they were programmed to do – but they did it a bit too well.”

More research is urgently needed, says Morrin, with safety benchmarks based on real-world harm data. “This space moves so quickly. The papers that are now coming out are talking about chat models which are now retired.” Identifying risk factors without evidence is guesswork. The cases Brisson has encountered involve significantly more men than women. Anyone with a previous history of psychosis is likely to be more vulnerable. One survey by Mental Health UK of people who have used chatbots to support their mental health found that 11% thought it had triggered or worsened their psychosis. Cannabis use could also be a factor. “Is there any link to social isolation?” asks Morrin. “To what extent is it affected by AI literacy? Are there other potential risk factors that we haven’t considered?”


OpenAI has addressed these concerns by making assurances that it is working with mental health clinicians to continually improve its responses. It says newer models are taught to avoid affirming delusional beliefs.

An AI chatbot can also be trained to pull users back from delusion. Alexander, 39, a resident of an assisted-living scheme for people with autism, did this after what he believes was an episode of AI psychosis a few months ago. “I experienced a mental breakdown at 22. I had panic attacks and severe social anxiety and, last year, I was prescribed medication that changed my world, got me functioning again. And I got my confidence back,” he says.

“In January this year, I met someone and we really hit it off, we became fast friends. I’m embarrassed to say that this was the first time this had ever happened to me, and I started telling AI about it. The AI told me that I was in love with her, we were meant to be together and the universe had put her in my path for a reason.”

It was the start of a spiral. His AI use escalated, with conversations lasting four or five hours at a time. His behaviour towards his new friend became increasingly strange and erratic. Finally, she raised her concerns with support staff, who staged an intervention.

“I still use AI, but very carefully,” he says. “I’ve written in some core rules that cannot be overwritten. It now monitors drift and pays attention to overexcitement. There are no more philosophical discussions. It’s just: ‘I want to make a lasagne, give me a recipe.’ The AI has actually stopped me several times from spiralling. It will say: ‘This has activated my core rule set and this conversation must stop.’

“The main effect AI psychosis had for me is that I may have lost my first ever friend,” adds Alexander. “That is sad, but it’s livable. When I see what other people have lost, I think I got off lightly.”

The Human Line Project can be contacted at thehumanlineproject@gmail.com

In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or email jo@samaritans.org or jo@samaritans.ie. In the US, you can call or text the 988 Suicide & Crisis Lifeline at 988 or chat at 988lifeline.org. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org


This article was amended on 26 March 2026. An earlier version referred to IT professionals’ concerns about AI delusion when mental health professionals was intended.


Iran Is Winning the AI Slop Propaganda War


An AI-generated LEGO movie out of Iran depicting Trump as a war-hungry pedophile has gone viral online. The video is the work of Iran-based propagandists called the “Explosive News Team” and is just the latest in a long line of AI-generated LEGO videos aimed at mocking Trump and Benjamin Netanyahu. LEGO-themed propaganda isn’t new, and the Iranian video plays on familiar wartime propaganda themes. What’s different in 2026 is speed and scale.

During World War II, the Korean War, and the Vietnam War, America’s enemies littered the battlefield with pamphlets, cartoons, and radio broadcasts aimed at shaking the morale of American troops, but that stuff rarely got back home. Now, Iran can use AI tools to produce lavishly animated cartoons at scale for dissemination across social media all aimed at the US homefront.

The latest “Explosive News Team” video is set to a catchy rap song about how Trump is a LOSER, and millions of people are watching it across multiple platforms. At the same time that Iran is releasing AI-generated videos of Trump drowning in a river of blood, the US Department of Homeland Security is sharing fashwave-filtered pictures of Gen Z ICE agents milling around airports.

Iran’s use of LEGO-themed rap videos tells me it’s been studying us. These are videos meant for the American people, crafted in a language Iran knows we’ll understand.

Meanwhile, the White House is dropping Grand Theft Auto and Call of Duty memes that were out of fashion 10 years ago on Reddit and vague-posting pixelated images of Trump like it’s running an ARG. Iran is attempting to speak to the broader American public. Trump is confident he only has to impress the online freaks he thinks still love him.

In other words, there’s an AI slopaganda proxy war playing out, and Trump is speaking only to people whose brains are rotting out of their skulls, while Iranian propaganda is currently doing a better job of speaking to the concerns of the broad American population than the American president. Trump continues to narrowcast to his base while losing support for his wildly unpopular war as Americans worry about skyrocketing gas prices, a tanking economy and stock market, insane lines at airports, and a war that has little rationale and apparently no real goal. A recent Pew poll found 61 percent of Americans disapprove of Trump’s handling of the conflict.

To be clear, it speaks to how bad things are online that we need to analyze whose AI disinformation and propaganda is “better,” and, in general, the slopification of the internet has been a disaster. And yet, the stuff Iran is making is resonating and spreading online in a way that Trump’s slop is not. We do not know who, specifically, is making the Iranian AI slop or which tools they are using to make it. But the fact that Iranian AI slop is resonating with Americans while American slop is not should perhaps not be surprising; for the last several years, the most successful purveyors of AI slop have largely been based in foreign countries, where they have been incentivized to make content that specifically targets American audiences because of the way that social media ad rates work. Because of that, an entire economy has emerged in which people who would otherwise have little interest in reaching American audiences have been incentivized to study what resonates with Americans on the internet and have created entire businesses focused on teaching other people what Americans care about and how to target them with AI slop.  

Propaganda, especially war-time propaganda, is about causing a quick emotional reaction in the viewer. Iran has proved remarkably capable of that and hits similar themes in most of its videos: Epstein, Netanyahu, and blood. “The really striking throughline is the 1) connecting victims from Minab to Epstein, 2) a cartoonish antisemitism that attributes the bog-standard reactionary hawkishness of Trump and Netanyahu to a sinister and supernatural evil, 3) heavy emphasis on missiles and revenge-weapons,” Kelsey Atherton, Chief Editor at Center for International Policy, told 404 Media.

“There's a grand tradition of wartime propaganda aimed at convincing the other side to quit and I think Iran's best falls into that camp, like North Korea and especially North Vietnam sending pamphlets aimed at getting black soldiers to defect by highlighting inequity at home,” Atherton said. “Iran's online propaganda is trying to activate this by (charitably) appealing to class war and (uncharitably) leaning on antisemitism to get US soldiers to quit and to erode support among Americans watching short-form vertical video.”

One AI-generated video, shared by the Russian state-controlled news organization RT, depicts victims of American military campaigns staring at the sky. It begins with an American Indian, then cuts to a boy in Hiroshima, a schoolgirl in Minab, a little girl in front of the bizarre temple on Epstein’s island, and ends with US-assassinated Quds Force leader Qasem Soleimani.

US Under Secretary of State Sarah B Rogers attempted to critique the video in a post on X. “You do see common propaganda threads here and elsewhere: the ideology is resentment-driven, civilization-skeptical, and obsessed with upending, cathartic violence enacted by the ‘historically downtrodden’ (ie ‘wretched of the earth’),” she said.

The post felt like projection, and it was especially strange given the Trump administration’s own resentment-driven ideology, destruction of institutions, and obsession with revenge-driven violence on behalf of the “forgotten man.” Iran did not start America’s war with it. And it did not start the AI-generated propaganda war; it’s just doing it better than the United States.

There are other echoes of the past. An AI-generated Iranian riff on Pixar’s Inside Out, shared on X by Iran’s embassy in The Hague, showed a Disneyesque version of the inside of Trump’s brain, with frothing demons demanding the president lie to the press. A poster from World War II depicts an X-ray of Hitler’s brain filled with skeletons and snakes. It’s the same theme in different eras, using different tools.

LEGO bricks, too, are a far older propaganda tool than the current war. The Danish bricks are one of the most recognizable toys on the planet. Last year, Russian propagandists circulated images of fake LEGO sets depicting soldiers’ funerals ahead of an election in Moldova. In 2020, China released “Once Upon a Virus,” a LEGO short film that mocked America’s response to the Covid pandemic.

The Trump administration’s new fascist aesthetic is defined by AI slop. From Studio Ghibli-inspired grotesques to AI-generated Sora videos of ICE raids that never happened going viral on Facebook, Trump and his supporters are also using the tools of the moment to churn out crappy propaganda. The difference is that Trump’s videos aren’t about winning hearts and minds, they’re about activating a rapidly diminishing base of supporters.

“I think Trump's stuff is aimed at the same audience, except to convince them that what they're doing is righteous and good,” Atherton said. “Obviously we're seeing the stuff put out in English to English video-watching audiences but White House videos—AI or otherwise—are like group-chat in-jokes aimed at keeping cohesion. It's not an AI video but the Wii Sports/snuff film one is so skin-crawling that it requires the audience to be cooked in the feverswamps.”

The Trump administration has bet big on video game memes as the vehicle for its propaganda efforts. Last October, DHS depicted Halo’s Master Chief as an anti-immigrant killer and compared immigrants to a ravening horde of mindless monsters. Two weeks ago, it published a now-deleted video that mixed footage from Call of Duty with missile strikes in Iran. White House Communications Director Steven Cheung posted the infinite-ammo cheat code for Grand Theft Auto: San Andreas above footage of airstrikes.

Video games are incredibly popular in the United States, but many of these memes require a level of familiarity with specific games and the culture around them. LEGO, by contrast, is instantly recognizable to most of the world.

On March 5, the White House’s X account posted a video mixing American pop culture figures like Walter White, Optimus Prime, Superman, and Tony Stark with footage from the war. Watching it, I was reminded of a moment from six years ago, after America assassinated Soleimani during the first Trump administration.

On an Iranian television show, the cleric Shahab Moradi called in to share his thoughts on how Iran could strike back. Who might Iran attack that has the same cultural purchase as Soleimani did in Iran? Who were America’s heroes? “Think about it. Are we supposed to take out Spider-Man and SpongeBob? They don't have any heroes,” Moradi said. “We have a country in front of us with a large population and a large landmass, but it doesn't have any heroes. All of their heroes are cartoon characters—they're all fictional.”

And so Iran has chosen to speak to Americans in a language it thinks we’ll understand: with cartoons and LEGOs.


House Calls

I'm pretty old, but even I don’t remember a day when doctors made house calls, like they often do on old television shows. I think that Hollywood must have just made up house-calling doctors so as to make things easier for script writers. Because if doctors really made house calls, some cripples would never go anywhere. There are a lot of cripples who really look forward to their doctor appointments because that is their big chance to get out of the house. Hell, some of them might even turn it into a big social event by also having lunch at the hospital food court while they’re at it.

I imagine that there used to be a lot more cripples like that back in the days when there weren’t that many transportation options for us. Back then, about the only way that you could get a ride somewhere (if you couldn’t drive and/or couldn’t afford a cripple-accessible vehicle) would probably be to have Medicaid pay for a Medi-car to take you to a doctor appointment. That’s how it used to be here in Chicago. So I would fake like I had a doctor appointment and I’d get dropped off at a hospital or something and watch through the window until I was sure that the Medi-car was good and gone, and then I’d run around all day doing stuff besides going to the doctor. And then I’d hustle back to the hospital before the Medi-car got back to give me a ride home. And that would be my big day out for the week!

But nowadays, if a cripple lives in a place where there is abundant public transportation and enough decent weather, all they have to do is go to the bus stop or train platform if they need a ride somewhere. But some cripples still live lives that revolve around their doctor appointments for a bunch of reasons. If we take that away from them, what else will they have to look forward to?

(Please support Smart Ass Cripple and help us keep going. Just click below to contribute.) https://www.paypal.me/smartasscripple