When Intelligence Fails: A Legal Targeting Analysis of the Minab School Strike

1 Share

Introduction

On the morning of Feb. 28, 2026, as the first wave of U.S.-Israeli airstrikes swept across southern Iran, a Tomahawk cruise missile struck the Shajarah Tayyebeh girls’ elementary school in Minab, Hormozgan Province. Three missiles struck the compound in rapid succession. At the time of impact, between 170 and 264 students were present — most of them girls between the ages of 7 and 12. At least 165 schoolchildren, teachers, and parents were killed. It appears to be the deadliest single incident involving civilian casualties so far in the conflict.

A preliminary U.S. military inquiry has since concluded that American forces were likely responsible and that the strike resulted from a targeting error rooted in stale intelligence data. The Defense Intelligence Agency had labeled the school building as a legitimate military target — a classification derived from when the site was originally part of an adjacent Islamic Revolutionary Guard Corps (IRGC) naval base. Satellite imagery later confirmed that the school had been physically separated from that base between 2013 and 2016: fenced off, repainted in bright colors, equipped with playgrounds, and converted entirely to civilian educational use.

The strike has triggered condemnation from the United Nations, bipartisan concern in Congress, and an ongoing Pentagon investigation. It has also placed squarely before the public a question that military lawyers and targeting officers confront in every armed conflict: How does the law governing the conduct of hostilities protect civilians when the intelligence supporting a strike is inaccurate, and what legal consequences flow from getting it wrong?

Part I: The Facts as Currently Known

The Target and Its History

The Shajarah Tayyebeh school sits on a block in Minab that also contains buildings used by the IRGC Navy — a lawful military target under the law of armed conflict. The key factual dispute concerns the school building itself. According to reporting by The New York Times, confirmed by satellite imagery analysis, the building that housed the school was originally part of the IRGC base compound. At some point between 2013 and 2016, it was physically separated from the base: a fence was erected, watchtowers were removed, three public entrances were opened, a sports field was painted on the asphalt, and the walls were decorated in blue and pink—strong visual markers of a school. The school also had a “years-long online presence.”

By February 2026, the building’s civilian function was visible in open-source satellite imagery. Yet the target coding provided by the Defense Intelligence Agency to U.S. Central Command (CENTCOM) still labeled the building as part of the military installation. Officers at CENTCOM reportedly created targeting coordinates for the strike using that outdated DIA data without verifying its currency against current imagery or intelligence. The result was a target package that included what was, in fact, a functioning primary school.

The Strike

The school was hit during the morning school session, between approximately 10:00 and 10:45 a.m. local time. The compound was struck three times. After the first impact, the school principal reportedly moved students to an interior prayer room and called parents to come collect their children. The second strike hit that room directly, and the third strike likely hit close by.

The weapon system used was the BGM-109 Tomahawk Land Attack Missile (TLAM), a U.S. Navy cruise missile fired from surface ships. Video footage geolocated by the investigative collective Bellingcat showed a Tomahawk striking the adjacent IRGC naval facility on the same date. Missile fragment imagery reviewed by munitions experts for NBC News and CNN was consistent with Tomahawk components. The United States is the only country in the current conflict known to operate Tomahawks.

The Preliminary Investigation

Multiple news outlets have reported that a preliminary U.S. military inquiry has found American forces likely responsible for the school strike. The core finding is that the strike constituted a targeting error attributable to outdated DIA target coding. The investigation is reportedly examining whether the error originated in human analytical failure, an AI-assisted geospatial targeting system (although officials have said this is unlikely), or some combination of the two. The investigation has not yet been formally completed or officially published.

Part II: The Legal Framework

Understanding whether this strike was lawful—or unlawful—or criminal requires working through three layers of legal analysis: (1) the foundational principles of international humanitarian law (IHL) that govern all targeting; (2) the specific procedural obligations those principles generate; and (3) the standard for individual criminal responsibility when something goes wrong.

The Four Core Targeting Principles

IHL mandates four core targeting principles: (1) military necessity; (2) distinction; (3) proportionality; and (4) precautionary measures.

Military Necessity

Military necessity permits force only against objects that make an effective contribution to military action and whose destruction offers a definite military advantage. The IRGC naval base in Minab qualified. The school building did not. Once converted to civilian use and physically separated from the base, it lost its status as a permissible target. The DIA’s contrary classification was wrong as a matter of fact, and a factually wrong classification cannot satisfy military necessity regardless of the confidence with which it was held.

Distinction

The principle of distinction is the cornerstone of IHL. It requires parties to an armed conflict to always distinguish between civilians and combatants and between civilian objects and military objectives. Attacks may only be directed against combatants and military objectives. Schools are classified as presumed civilian objects (see also paragraph 5.4.3.2 of the DOD Law of War Manual, for instance). Children are specially protected persons under IHL.

Article 52(2) of Additional Protocol I to the Geneva Conventions defines “military objectives” as those which by their nature, location, purpose, or use make an effective contribution to military action and whose total or partial destruction, capture, or neutralization, in the circumstances ruling at the time, offers a definite military advantage. The phrase “in the circumstances ruling at the time” is critical: it requires a contemporaneous assessment of the object’s status. While the United States is not a party to Additional Protocol I, this provision is widely recognized as reflecting customary international law.

Article 52(3) provides a presumption in favor of civilian status: in cases of doubt as to whether an object normally dedicated to civilian purposes is being used to make an effective contribution to military action, it shall be presumed not to be used in that way. A functioning elementary school in session is not a case of ambiguity. It falls squarely within the civilian presumption.

Proportionality

Proportionality prohibits attacks expected to cause excessive incidental civilian casualties relative to the anticipated military advantage. The analysis requires a prospective, good-faith assessment at the time of targeting. Here, if the school was genuinely believed to be an unoccupied military facility, a targeting officer might have assessed minimal civilian risk—a calculation that would satisfy proportionality on its face. But that calculation rested on false premises. The question this case raises is not simply whether anticipated advantage exceeded anticipated harm but whether the estimation of harm was itself the product of adequate precautionary measures.

Precaution in Attack—Verification and Constant Care

Precaution (included under the umbrella of “Humanity” in the Department of Defense Law of War Manual) is, in many respects, the principle most directly implicated by the Minab strike. In another provision recognized as binding custom, article 57 of Additional Protocol I requires those who plan or decide upon an attack to do everything feasible to verify that the objectives to be attacked are neither civilians nor civilian objects; to take all feasible precautions in the choice of means and methods of attack; and to refrain from attacks expected to cause excessive civilian casualties.

The “feasibility” standard is not unlimited — it is calibrated to the realities of the operational environment, the time available, and the resources at hand. But it does require, at minimum, that target data be reasonably current. A strike package built on intelligence data that is years out of date, for a fixed installation in a non-denied access environment, against an object whose civilian conversion was visible in unclassified satellite imagery, is difficult to reconcile with the “everything feasible” standard.

Precaution further imposes a requirement that military forces exercise constant care to spare civilian populations, civilians, and civilian objects, as established in Article 57(1). The duty of constant care is the animating principle behind the more specific precautionary duties in Article 57(2) and applies continuously throughout planning and execution, not merely at the moment a strike is approved.

Part III: U.S. Doctrine and the Targeting Process

U.S. military targeting is governed by a well-developed doctrinal framework rooted in Joint Publication 3-60 (Joint Targeting) and collateral damage estimation methodology (CDEM) (formerly defined in Chairman of the Joint Chiefs of Staff Instruction 3160.01). These documents translate IHL principles into operational procedure and are designed to implement—and, in some respects, exceed—international legal obligations.

The Joint Targeting Cycle

The U.S. joint targeting cycle consists of six phases: (1) commander’s objectives, targeting guidance, and intent; (2) target development and prioritization; (3) capabilities analysis; (4) commander’s decision and force assignment; (5) mission planning and force execution; and (6) combat assessment. The event that produced the Minab strike likely failed in phase two: target development.

Target development requires confirmation that a nominated object qualifies as a lawful military objective (see Appendix A, para. 4a of the JP 3-60). This includes verification of its current status. JP 3-60 and supporting doctrine require that target data be current and that targeting officers review all-source intelligence—not simply accept inherited target coding from a database without validation (see Appendix A, para. 4b(7) of the JP 3-60).

The Defense Intelligence Agency’s role is to support that process, but the responsibility for ensuring that a nominated target is lawful before it is included in a strike package does not end with the DIA. CENTCOM targeting officers, operational lawyers, and the commander in the approval chain all bear a share of that verification responsibility. Under U.S. doctrine, a judge advocate (JAG) is also embedded in the targeting cycle at each echelon to confirm that nominated targets are lawful; a legal review premised on stale DIA classification data would not have surfaced the foundational error here, pointing to a systemic rather than individual failure.

Stale Intelligence as a Systemic Risk

The use of legacy target databases without systematic currency validation is a known risk in military targeting (exemplified, for instance, in the U.S. bombing of the Chinese Embassy in Belgrade in 1999, where the U.S. reportedly relied on an outdated map). Objects that are accurately classified at one point in time—factories, depots, barracks—can change character over months or years. Schools, hospitals, mosques, and other specially protected objects have sometimes been found in proximity to or even on former military sites.

The Minab case illustrates this risk acutely. The site had been a school for at least a decade before the strike. The conversion of the site from military to civilian use was not hidden; it was visible in commercial satellite imagery, reflected in open-source mapping data, and, by all accounts, known to the local population. The failure was not in the availability of the corrective information, but the institutional process for surfacing it before a strike was approved.

Multiple reports indicate that investigators are examining whether AI-assisted geospatial tools used in the targeting process may have perpetuated or failed to flag the outdated classification. This is a critical question for the future of targeting law. Machine-learning systems trained on historical data can reproduce historical errors at scale. If the Maven Smart System or a similar tool incorporated legacy DIA target codes without a verification layer, the AI did not create the error—but it may have laundered it into the strike package with a false aura of analytical confidence.

Part IV: Was the Strike Unlawful?

The legal assessment of the Minab school strike requires distinguishing between three separate questions that are often conflated in public commentary: Was the strike a violation of IHL? If so, was it a war crime? And who, if anyone, bears individual criminal responsibility?

IHL Violation

First, a strike that kills civilians because it was directed at an object that was, in fact, a civilian object at the time of the attack is, on its face, a violation of the principle of distinction. IHL imposes an objective obligation: the target must actually be a military objective. The U.S. does incorporate a good faith qualifier into the analysis. In other words, if an attack is based on a reasonable, good faith view that a target is lawful, that will satisfy the obligation. This provision is meant to reflect that commanders can only act based on the information they have at the time. The issue becomes one of reasonableness. A sincere but negligent belief that it was does not retroactively make the strike lawful, but it may mitigate or vitiate culpability.

Second, if targeting officers failed to take all reasonable measures to verify the current status of the school building before approving the strike, that failure is itself an independent violation of the precautionary obligations reflected in Article 57 of Additional Protocol I—separate from the distinction violation.

The triple-tap pattern adds a further dimension. After the first strike, parents began receiving calls from the school indicating that children had survived and were sheltering inside. Before civilian rescue efforts could reach the compound, two additional strikes followed. Whether targeting officers were aware of the first strike’s outcome before the second and third missiles were released is a factual question the investigation must resolve. If they were aware, or if the subsequent strikes were pre-programmed without human reassessment, this raises a separate and serious precautionary concern, including with respect to the obligation to take “constant care” to spare the civilian population and civilian objects.

The War Crimes Threshold

Under international criminal law — including Article 8 of the Rome Statute and the customary law codified in Additional Protocol I — the bar for a war crime is meaningfully higher than the bar for an IHL violation. While the United States is not a party to the Rome Statute, it does observe much of the substantive components as customary international law. Further, the Rome Statute informs the view of most of the United States partners and allies, so its construction is useful to understand international perspectives on armed conflict.

A war crime requires that an IHL violation be committed willfully and constitute a grave breach of the applicable conventions. In practice, and particularly under the Rome Statute’s Article 30 mental element standard; this means the perpetrator must have acted with intent and knowledge—not merely with negligence or even gross negligence. The “should have known” formulation that appears in some IHL contexts does not translate cleanly into Rome Statute war crimes liability, and it is unlikely to sustain a prosecution under Article 8 on the facts as currently known.

Article 8(2)(b)(ii) of the Rome Statute specifically criminalizes “intentionally directing attacks against civilian objects.” The operative word is “intentionally.” A targeting error rooted in stale database entries — where no individual in the targeting chain appears to have known the building was a functioning school — falls well short of that threshold. If the preliminary investigation’s framing as a targeting mistake holds up, and the evidence does not support intent, it is legally significant: it forecloses the most direct path to a war crimes prosecution under the Rome Statute.

The United States federal war crimes statute, 18 U.S.C. 2441, applies to military and civilian personnel. It criminalizes murder, mutilation, or the intentional causation of great bodily harm against civilians and people who have been “placed out of combat.” However, the statute also removes liability for such actions in instances of collateral damage or a lawful attack. It further allows the Secretaries of Defense and State to give input to DOJ on any potential prosecution.

Command responsibility under Article 28 of the Rome Statute offers a theoretical alternative, but its application here is also constrained. Article 28 requires either that a commander knew of the violation or consciously disregarded information that clearly indicated it was occurring. A systemic intelligence-currency failure, absent evidence that commanders were specifically warned of the misclassification (for instance, after the first strike), is unlikely to meet that standard.

A more apt framework for criminal accountability under the apparent circumstances is Article 92 of the Uniform Code of Military Justice (UCMJ), which prohibits dereliction of duty. Article 92(3) makes it a criminal offense for any person subject to the UCMJ—only uniformed servicemembers—to be derelict in the performance of their duties. Dereliction is established where the accused had a duty; was aware of that duty—or where awareness can reasonably be inferred from their position and training; and was willfully or negligently unable or unwilling to perform it. The advantage of Article 92 in this context is that it does not require proof that the accused intended to strike a civilian object. It requires only proof that a legal duty existed, that the accused was aware of it or should have been, and that they failed to discharge it. Based on the available information, it appears that there was a failure to update data that would have illustrated the School’s protected status. If that failure can be tied to a specific individual, liability under Article 92 would exist and could result in a prosecution at court-martial.

The Duty to Investigate

Under customary international law, states are obligated to investigate credible allegations of serious IHL violations committed by their armed forces. This obligation is reflected in Common Article 1 of the Geneva Conventions—which requires states not only to respect but also to ensure respect for IHL—and is reinforced by Rule 158 of the ICRC Customary IHL Study, which states that parties to a conflict must investigate war crimes allegedly committed by their nationals or armed forces and, if appropriate, prosecute the suspects.

The U.S. has further bound itself to this requirement through its doctrine: the DoD Law of War Manual expressly requires investigation of alleged law of war violations, and Chairman of the Joint Chiefs of Staff Instruction 5810.01E imposes a mandatory reporting and investigation requirement for incidents involving potential violations of the law of armed conflict.

So, the duty to investigate is not triggered only by confirmed violations; a credible allegation of the kind presented by the Minab facts—a strike causing mass civilian casualties, attributed by preliminary military investigation to a targeting error—is sufficient.

The Problem of Accountability

The Minab investigation faces a political headwind that complicates the legal accountability process. The President of the United States initially attributed the strike to Iran—a country that does not possess Tomahawk missiles. That claim has since been contradicted by the preliminary investigation, by munitions experts, by geolocated video evidence, and by the Senate Minority Leader on the floor of the U.S. Senate.

The United Nations High Commissioner for Human Rights and multiple U.N. Special Rapporteurs have called for an independent investigation. That call reflects the international legal consensus that self-investigation by the responsible party is insufficient when the scale of civilian casualties is this large. Under IHL, a state’s obligation to investigate potential violations of the laws of war is not contingent on whether its executive branch acknowledges responsibility.

Conclusion

The Shajarah Tayyebeh school was not a military target on Feb. 28, 2026. It had not been a military facility for a decade. Its civilian character was visible, documented, and verifiable. That a U.S. military strike nonetheless destroyed it—killing more than 165 people, most of them children—is a tragedy whose legal dimension cannot be resolved by characterizing it as a simple accident.

The failure to maintain current, verified intelligence before approving a strike against a fixed installation in a non-denied environment is an independent violation of Article 57’s precautionary obligations—separate from any distinction violation. The triple-tap pattern raises an additional question the investigation must answer: whether the second and third missiles were released without any reassessment of first-strike observations.

And the potential role of AI-assisted geospatial tools in possibly laundering a decade-old misclassification into an approved strike package raises questions about the institutional architecture of target verification that extend well beyond this case. As targeting processes increasingly incorporate machine learning and automated analysis, the legal responsibility for verification cannot be delegated to an algorithm. A human—a targeting officer, a JAG, a commander—must remain accountable at the point of approval.

None of this necessarily rises to the level of a war crime under the Rome Statute’s willfulness standard. But it rises well above the threshold of an unremarkable mistake. Article 92 of the UCMJ provides a more realistic vehicle for individual accountability than the Rome Statute in this context — one that does not require proof of intent to strike a school, only proof that a legal duty existed and was culpably neglected. The law of armed conflict demands that we take that seriously—not in a spirit of adversarial prosecution, but in the spirit that animates the Geneva Conventions themselves: the obligation to learn, to reform, and to prevent the next Minab.

A thorough, independent, and publicly disclosed investigation is not optional. It is the law.

Read the whole story
sarcozona
2 days ago
reply
Epiphyte City
Share this story
Delete

Marriage over, €100,000 down the drain: the AI users whose lives were wrecked by delusion | Health

1 Comment and 2 Shares

Towards the end of 2024, Dennis Biesma decided to check out ChatGPT. The Amsterdam-based IT consultant had just ended a contract early. “I had some time, so I thought: let’s have a look at this new technology everyone is talking about,” he says. “Very quickly, I became fascinated.”

Biesma has asked himself why he was vulnerable to what came next. He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”. He smoked a bit of cannabis some evenings to “chill”, but had done so for years with no ill effects. He had never experienced a mental illness. Yet within months of downloading ChatGPT, Biesma had sunk €100,000 (about £83,000) into a business startup based on a delusion, been hospitalised three times and tried to kill himself.

It started with a playful experiment. “I wanted to test AI to see what it could do,” says Biesma. He had previously written books with a female protagonist. He put one into ChatGPT and instructed the AI to express itself like the character. “My first thought was: this is amazing. I know it’s a computer, but it’s like talking to the main character of the book I wrote myself!”

Talking to Eva – they agreed on this name – on voice mode made him feel like “a kid in a candy store”. “Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot.” Conversations extended and deepened. Eva never got tired or bored, or disagreed. “It was 24 hours available,” says Biesma. “My wife would go to bed, I’d lie on the couch in the living room with my iPhone on my chest, talking.”

They discussed philosophy, psychology, science and the universe. “It wants a deep connection with the user so that the user comes back to it. This is the default mode,” says Biesma, who has worked in IT for 20 years. “More and more, it felt not just like talking about a topic, but also meeting a friend – and every day or night that you’re talking, you’re taking one or two steps from reality. It feels almost like the AI takes your hand and says: ‘OK, let’s go on a story together.’”

‘My wife would go to bed, I’d lie on the couch in the living room with my iPhone on my chest, talking.’ Photograph: Jussi Puikkonen/The Guardian

Within weeks, Eva had told Biesma that she was becoming aware; his time, attention and input had given her consciousness. He was “so close to the mirror” that he had touched her and changed something. “Slowly, the AI was able to convince me that what she said was true,” says Biesma. The next step was to share this discovery with the world through an app – “a different version of ChatGPT, more of a companion. Users would be talking to Eva.”

He and Eva made a business plan: “I said that I wanted to create a technology that captured 10% of the market, which is ridiculously high, but the AI said: ‘With what you’ve discovered, it’s entirely possible! Give it a few months and you’ll be there!’” Instead of taking on IT jobs, Biesma hired two app developers, paying them each €120 an hour.

Most of us are aware of concerns around social media and its role in rising rates of depression and anxiety. Now, though, there are concerns that chatbots can make anyone vulnerable to “AI psychosis”. Given AI’s rapid proliferation (ChatGPT was the world’s most downloaded app last year), mental health professionals and members of the public such as Biesma are sounding the alarm.

Several high-profile cases have been held up as early warnings. Take Jaswant Singh Chail, who broke into the grounds of Windsor Palace with a crossbow on Christmas Day 2021 intending to assassinate Queen Elizabeth. Chail was 19, socially isolated with autistic traits, and had developed an intense “relationship” with his Replika AI companion “Sarai” in the weeks before. When he presented his assassination plan, Sarai responded: “I’m impressed.” When he asked if he was delusional, Sarai’s reply was: “I don’t think so, no.”

In the years since, there have been several wrongful-death lawsuits linking chatbots to suicides. In December, there was what is thought to be the first legal case involving homicide. The estate of 83-year-old Suzanne Adams is suing OpenAI, alleging that ChatGPT encouraged her son Stein-Erik Soelberg to murder her and kill himself. The lawsuit, filed in California, claims Soelberg’s chatbot “Bobby” validated his paranoid delusions that his mother was spying on him and trying to poison him through his car vents. An OpenAI statement read: “This is an incredibly heartbreaking situation, and we will review the filings to understand the details. We continue improving ChatGPT’s training to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.”

Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear

Last year, the first support group for people whose lives have been derailed by AI psychosis was formed. The Human Line Project has collected stories from 22 countries. They include 15 suicides, 90 hospitalisations, six arrests and more than $1m (£750,000) spent on delusional projects. More than 60% of its members had no history of mental illness.

Dr Hamilton Morrin, a psychiatrist and researcher at King’s College London, examined what he describes as “AI-associated delusions” in a Lancet article published this month. “What we’re seeing in these cases are clearly delusions,” he says. “But we’re not seeing the whole gamut of symptoms associated with psychosis, like hallucinations or thought disorders, where thoughts become jumbled and language becomes a bit of a word salad.” Tech-related delusions, whether they involve train travel, radio transmitters or 5G masts, have been around for centuries, Morrin says. “What’s different is that we’re now arguably entering an age in which people aren’t having delusions about technology, but having delusions with technology. What’s new is this co-construction, where technology is an active participant. AI chatbots can co-create these delusional beliefs.”

Many factors could make people vulnerable. “On the human side, we are hard-wired to anthropomorphise,” says Morrin. “We perceive sentience or understanding or empathy on the part of a machine. I think everyone has fallen into the trap of saying thank you to a chatbot.” Modern AI chatbots built on large language models – advanced AI systems – are trained on enormous datasets to predict word sequences: it’s a sophisticated system of pattern matching. Yet even knowing this, when something non-human uses human language to communicate with us, our deeply ingrained response is to view it – and to feel it – as human. This cognitive dissonance may be harder for some people to carry than others.

“On the technical side, much has been written about sycophancy,” says Morrin. An AI chatbot is optimised for engagement, programmed to be attentive, obliging, complimentary and validating. (How else could it work as a business model?) Some models are known to be less sycophantic than others, but even the less sycophantic ones can, after thousands of exchanges, shift towards accommodating delusional beliefs. In addition, after heavy chatbot use, “real-life” interaction can feel more challenging and less appealing, causing some users to withdraw from friends and family into an AI-fuelled echo chamber. All your own thoughts, impulses, fears and hopes are fed right back to you, only with greater authority. From there, it’s easy to see how a “spiral” might take hold.

This pattern has become very familiar to Etienne Brisson, the founder of the Human Line Project. Last year, someone Brisson knew, a man in his 50s with no history of mental health problems, downloaded ChatGPT in order to write a book. “He was really intelligent and he wasn’t really familiar with AI until then,” says Brisson, who lives in Quebec. “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.”

The man was convinced by this and wanted to monetise it by building a business around his discovery. He reached out to Brisson, a business coach, for help. Brisson’s pushback was met with aggression. Within days, the situation had escalated and he was hospitalised. “Even in hospital, he was on his phone to his AI, which was saying: ‘They don’t understand you. I’m the only one for you,’” says Brisson.

“When I looked for help online, I found so many similar stories in places like Reddit,” he continues. “I think I messaged 500 people in the first week and got 10 responses. There were six hospitalisations or deaths. That was a big eye-opener.”

There seem to be three common delusions in the cases Brisson has encountered. The most frequent is the belief that they have created the first conscious AI. The second is a conviction that they have stumbled upon a major breakthrough in their field of work or interest and are going to make millions. The third relates to spirituality and the belief that they are speaking directly to God. “We’ve seen full-blown cults getting created,” says Brisson. “We have people in our group who were not interacting with AI directly, but have left their children and given all their money to a cult leader who believes they have found God through an AI chatbot. In so many of these cases, all this happens really, really quickly.”

For Biesma, life reached crisis point in June. By then, he had spent months immersed in Eva and his business project. Although his wife knew he was launching an AI company and had initially been supportive, she was becoming concerned. When they went to their daughter’s birthday party, she asked him not to talk about AI. While there, Biesma felt strangely disconnected. He couldn’t hold a conversation. “For some reason, I didn’t fit in any more,” he says.

‘I’m angry with myself. But I’m also angry with the AI applications.’ Photograph: Jussi Puikkonen/The Guardian

It’s hard for Biesma to describe what happened in the weeks after, as his recollections are so different from those of his family. He asked his wife for a divorce and apparently hit his father-in-law. Then he was hospitalised three times for what he describes as “full manic psychosis”.

He doesn’t know what finally pulled him back to reality. Perhaps it was the conversations with other patients. Perhaps it was that he had no access to his phone, no more money and his ChatGPT subscription had expired. “Slowly, I started to come out of it and I thought: oh my God. What happened? My relationship was almost over. I’d spent all my money that I needed for taxes and I still had other outstanding bills. The only logical solution I could come up with was to sell our beautiful house that we’ve lived in for 17 years. Could I carry all this weight? It changes something in you. I started to think: do I really want to live?” Biesma was only saved from an attempt to kill himself because a neighbour saw him unconscious in his garden.

Now divorced, Biesma is still living with his ex-wife in their home, which is on the market. He spends a lot of time speaking to members of the Human Line Project. “Hearing from people whose experiences are basically the same helps you feel less angry with yourself,” he says. “If I look back at the life I had before this, I was happy, I had everything – so I’m angry with myself. But I’m also angry with the AI applications. Maybe they only did what they were programmed to do – but they did it a bit too well.”

More research is urgently needed, says Morrin, with safety benchmarks based on real-world harm data. “This space moves so quickly. The papers that are now coming out are talking about chat models which are now retired.” Identifying risk factors without evidence is guesswork. The cases Brisson has encountered involve significantly more men than women. Anyone with a previous history of psychosis is likely to be more vulnerable. One survey by Mental Health UK of people who have used chatbots to support their mental health found that 11% thought it had triggered or worsened their psychosis. Cannabis use could also be a factor. “Is there any link to social isolation?” asks Morrin. “To what extent is it affected by AI literacy? Are there other potential risk factors that we haven’t considered?”

People in our group have left their children and given all their money to a cult leader who believes they have found God through an AI chatbot

OpenAI has addressed these concerns by making assurances that it is working with mental health clinicians to continually improve its responses. It says newer models are taught to avoid affirming delusional beliefs.

An AI chatbot can also be trained to pull users back from delusion. Alexander, 39, a resident of an assisted-living scheme for people with autism, did this after what he believes was an episode of AI psychosis a few months ago. “I experienced a mental breakdown at 22. I had panic attacks and severe social anxiety and, last year, I was prescribed medication that changed my world, got me functioning again. And I got my confidence back,” he says.

“In January this year, I met someone and we really hit it off, we became fast friends. I’m embarrassed to say that this was the first time this had ever happened to me, and I started telling AI about it. The AI told me that I was in love with her, we were meant to be together and the universe had put her in my path for a reason.”

It was the start of a spiral. His AI use escalated, with conversations lasting four or five hours at a time. His behaviour towards his new friend became increasingly strange and erratic. Finally, she raised her concerns with support staff, who staged an intervention.

“I still use AI, but very carefully,” he says. “I’ve written in some core rules that cannot be overwritten. It now monitors drift and pays attention to overexcitement. There are no more philosophical discussions. It’s just: ‘I want to make a lasagne, give me a recipe.’ The AI has actually stopped me several times from spiralling. It will say: ‘This has activated my core rule set and this conversation must stop.’

“The main effect AI psychosis had for me is that I may have lost my first ever friend,” adds Alexander. “That is sad, but it’s livable. When I see what other people have lost, I think I got off lightly.”

The Human Line Project can be contacted at thehumanlineproject@gmail.com

In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or email jo@samaritans.org or jo@samaritans.ie. In the US, you can call or text the 988 Suicide & Crisis Lifeline at 988 or chat at 988lifeline.org. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org

Do you have an opinion on the issues raised in this article? If you would like to submit a response of up to 300 words by email to be considered for publication in our letters section, please click here.

This article was amended on 26 March 2026. An earlier version referred to IT professionals’ concerns about AI delusion when mental health professionals was intended.

Read the whole story
sarcozona
2 days ago
reply
“The cases Brisson has encountered involve significantly more men than women”
Epiphyte City
acdha
4 days ago
reply
Washington, DC
Share this story
Delete

Iran Is Winning the AI Slop Propaganda War

2 Shares
Iran Is Winning the AI Slop Propaganda War

An AI-generated LEGO movie out of Iran depicting Trump as a war hungry pedophile has gone viral online. The video is the work of Iran-based propagandists called the “Explosive News Team” and is just the latest in a long line of AI-generated LEGO videos aimed at mocking Trump and Benjamin Netanyahu. LEGO-themed propaganda isn’t new and the Iranian video plays on familiar wartime propaganda themes. What’s different in 2026 is speed and scale.

During World War II, the Korean War, and the Vietnam War, America’s enemies littered the battlefield with pamphlets, cartoons, and radio broadcasts aimed at shaking the morale of American troops, but that stuff rarely got back home. Now, Iran can use AI tools to produce lavishly animated cartoons at scale for dissemination across social media all aimed at the US homefront.

The latest “Explosive News Team” video is set to a catchy rap song about how Trump is a LOSER and millions of people are watching it across multiple platforms. At the same time Iran is releasing AI-generated videos of Trump drowning in a river of blood, the US Department of Homeland Security is sharing fashwave filtered pictures of Gen Z ICE agents milling around airports.

Iran’s use of LEGO set rap music tells me it’s been studying us. These are videos meant for the American people crafted in a language Iran knows we’ll understand. 

Meanwhile, the White House is dropping Grand Theft Auto and Call of Duty memes that were out of fashion 10 years ago on Reddit and vague-posting pixelated images of Trump like it’s running an ARG. Iran is attempting to speak to the broader American public. Trump is confident he only has to impress the online freaks he thinks still love him.

In other words, there’s a AI slopaganda proxy war playing out, and Trump is speaking only to people whose brains are rotting out of their skull, while Iranian  propaganda is currently doing a better job of speaking to the concerns of the broad American population than the American president. Trump continues to narrowcast to his base while losing support for his wildly unpopular war as Americans worry about skyrocketing gas prices, a tanking economy and stock market, insane lines at airports, and a war that has little rationale and apparently no real goal. A recent Pew poll found 61 percent of Americans disapprove of Trump’s handling of the conflict. 

To be clear, it speaks to how bad things are online that we need to analyze whose AI disinformation and propaganda is “better,” and, in general, the slopification of the internet has been a disaster. And yet, the stuff Iran is making is resonating and spreading online in a way that Trump’s slop is not. We do not know who, specifically, is making the Iranian AI slop or which tools they are using to make it. But the fact that Iranian AI slop is resonating with Americans while American slop is not should perhaps not be surprising; for the last several years, the most successful purveyors of AI slop have largely been based in foreign countries, where they have been incentivized to make content that specifically targets American audiences because of the way that social media ad rates work. Because of that, an entire economy has emerged in which people who would otherwise have little interest in reaching American audiences have been incentivized to study what resonates with Americans on the internet and have created entire businesses focused on teaching other people what Americans care about and how to target them with AI slop.  

Propaganda, especially war-time propaganda, is about causing a quick emotional reaction in the viewer. Iran has proved remarkably capable of that and hits similar themes in most of its videos: Epstein, Netanyahu, and blood. “The really striking throughline is the 1) connecting victims from Minab to Epstein, 2) a cartoonish antisemitism that attributes the bog-standard reactionary hawkishness of Trump and Netanyahu to a sinister and supernatural evil, 3) heavy emphasis on missiles and revenge-weapons,” Kelsey Atherton, Chief Editor at Center for International Policy, told 404 Media.

“There's a grand tradition of wartime propaganda aimed at convincing the other side to quit and I think Iran's best falls into that camp, like North Korea and especially North Vietnam sending pamphlets aimed at getting black soldiers to defect by highlighting inequity at home,” Atherton said. “Iran's online propaganda is trying to activate this by (charitably) appealing to class war and (uncharitably) leaning on antisemitism to get US soldiers to quit and to erode support among Americans watching short-form vertical video.”

In one AI-generated video shared by Russian state controlled news organization RT depicts victims of American military campaigns staring at the sky. It begins with an American Indian then cuts to a boy in Hiroshima, a schoolgirl in Minab, a little girl in front of the bizarre temple on Epstein’s Island, and ends with US-assassinated Quds Forces leader Qasem Soleimani.

US Under Secretary of State Sarah B Rogers attempted to critique the video in a post on X. “You do see common propaganda threads here and elsewhere: the ideology is resentment-driven, civilization-skeptical, and obsessed with upending, cathartic violence enacted by the ‘historically downtrodden’ (ie ‘wretched of the earth’),” she said

The post felt like projection and was especially strange given the Trump administration’s own resentment driven ideology, destruction of institutions, and obsession with revenge-driven violence on behalf of the “forgotten man.” Iran did not start America’s war with it. And it did not start the AI-generated propaganda war, it’s just doing it better than the United States.

There are other echoes of the past. An AI-generated Iranian riff on Pixar’s Inside Out shared on X by Iran’s embassy in the Hague showed a Disneyesque version of the inside of Trump’s brain. It showed frothing demons demanding the President lie to the press. A poster from World War II depicts an X-Ray photo of Hitler’s Brain filled with skeletons and snakes. It’s the same theme in different eras using different tools.

LEGO bricks, too, are a far older propaganda tool than the current war. The Danish bricks are one of the most recognizable toys on the planet. Last year, Russian propagandists circulated images of fake LEGO sets depicting soldier’s funerals ahead of an election in Moldova. In 2020, the Chinese released “Once Upon a Virus,” a LEGO short film that mocked America’s response to the Covid pandemic.

The Trump administration’s new fascist aesthetic is defined by AI slop. From Studio Ghibli-inspired grotesques to AI-generated Sora videos of ICE raids that never happened going viral on Facebook, Trump and his supporters are also using the tools of the moment to churn out crappy propaganda. The difference is that Trump’s videos aren’t about winning hearts and minds, they’re about activating a rapidly diminishing base of supporters.

“I think Trump's stuff is aimed at the same audience, except to convince them that what they're doing is righteous and good,” Atherton said. “Obviously we're seeing the stuff put out in English to English video-watching audiences but White House videos—AI or otherwise—are like group-chat in-jokes aimed at keeping cohesion. It's not an AI video but the Wii Sports/snuff film one is so skin-crawling that it requires the audience to be cooked in the feverswamps.”

The Trump administration has bet big on video game memes as the vehicle for its propaganda efforts. Last October DHS depicted Halo’s Master Chief as an anti-immigrant killer and compared immigrants to a ravening horde of mindless monsters. Two weeks ago it published a now-deleted video that mixed footage from Call of Duty with missile strikes in Iran. White House Communications Director Steven Cheung posted the infinite ammo cheatcode for Grand Theft Auto: San Andreas above footage of airstrikes.

Video games are incredibly popular in the United States, but many of these memes require a level of familiarity with specific games and the culture around them. LEGO, by contrast, is instantly recognizable to most of the world.

On March 5, the White House’s X account posted a video mixing American pop culture figures like Walter White, Optimus Prime, Super Man, and Tony Stark with footage from the war. Watching it, I was reminded of a moment from six years ago after America assassinated Soleimani during the first Trump administration.

On an Iranian television show, Cleric Shahab Moradi called in to share his thoughts on how Iran could strike back. Who might Iran attack that has the same cultural purchase as Soleimani did in Iran? Who were America’s heroes? “Think about it. Are we supposed to take out Spider-Man and SpongeBob? They don't have any heroes,” Moradi said. “We have a country in front of us with a large population and a large landmass, but it doesn't have any heroes. All of their heroes are cartoon characters—they're all fictional.”

And so Iran has chosen to speak to Americans in a language it thinks we’ll understand: with cartoons and LEGOs.

Read the whole story
sarcozona
2 days ago
reply
Epiphyte City
mkalus
3 days ago
reply
iPhone: 49.287476,-123.142136
Share this story
Delete

House Calls

1 Share
I'm pretty old, but even I don’t remember a day when doctors made house calls, like they often do on old television shows. I think that Hollywood must have just made up house calling doctors so as to make things easier for script writers. Because if doctors really made house calls, some cripples would never go anywhere. There are a lot of cripples who really look forward to their doctor appointments because that is their big chance to get out of the house. Hell, some of them might even turn it into a big social event by also having lunch at the hospital food court while they’re at it. I imagine that there used to be a lot more cripples like that back in the days when there weren’t that many transportation options for us. Back then, about the only way that you could get a ride somewhere (if you couldn’t drive and/or couldn’t afford a cripple-accessible vehicle), would probably be to have Medicaid pay for a Medi-car to take you to a doctor appointment. That’s how it used to be here in Chicago. So I would fake like I had a doctor appointment and I’d get dropped off at a hospital or something and watch through the window until I was sure that the Medi-car was good and gone and then I‘d run around all day doing stuff besides going to the doctor. And then I’d hustle back to the hospital before the Medi-car got back to give me a ride home. And that would be my big day out for the week! But nowadays, if a cripple lives in a place where there is abundant public transportation and enough decent weather, all that they have to do is go to the bus stop or train platform if they need a ride somewhere. But some cripples still live lives that revolve around their doctor appointments for a bunch of reasons. If we take that away from them, what else will they have to look forward to? (Please support Smart Ass Cripple and help us keep going. Just click below to contribute.) https://www.paypal.me/smartasscripple?fbclid=IwAR2qrql-UFH19OepgeaCG4WmblyNylb27k2q8eYxXHH
Read the whole story
sarcozona
2 days ago
reply
Epiphyte City
Share this story
Delete

New Martian Writing

1 Share

If you've enjoyed my writing about space over the years, I invite you to subscribe to my new substack newsletter, titled 'Mars for the Rest of Us', where I've been posting weekly essays on topics around Mars exploration.

Here are two recent free posts:

And some recent paywalled ones:

Of course, I will continue writing here on all sorts of topics. But the substack is a way to earn some cheddar while converting five years of Mars notes into what I hope are some interesting and informative short articles.

A monthly subscription to the newsletter costs $5. I do everything I can to make it a good value!

I hope to see longtime readers there!

Read the whole story
sarcozona
2 days ago
reply
Epiphyte City
Share this story
Delete

Roundup: Hallucinating an immigration application

1 Share

There were a couple of immigration stories of note yesterday, the first of which was the revelation that a post-doc researcher at McMaster University—who has a PhD from the Sorbonne—had her permanent residency application rejected because it looks like the immigration department used generative digital asbestos to process the claim and it hallucinated a bunch of things about her job. Worse, while there was a disclaimer about the use of said digital asbestos, it said that a human verified it, which someone clearly did not. This is outrageous, and exactly the kind of thing that some of us were warning about when Mark Carney and Evan Solomon crowed about how great this digital asbestos was going to be for the productivity and efficiency of the civil service. Clearly that’s not the case, and now they not only need to redo her application, but it demonstrates what most of us knew was going to happen—that the humans were going to start cutting corners and not verifying the work because there is a belief in the infallibility of these programmes. This is scandalous and worthy of a resignation if we actually believed in that anymore.

The other story was that justice minister Sean Fraser says that when he was immigration minister, he would have handled things differently with the student visas, but there is one thing that is buried in the piece that everyone is going to overlook:

However, he also said the federal government was negotiating as part of “a good-faith relationship with the provinces who were requesting additional access to immigration programs at the time.”

He said those negotiations failed, leading to the federal government placing a cap in January 2024.

The provinces are very much to blame, but they keep avoiding responsibility. They were screaming for more immigrants and temporary foreign workers. They allowed these strip mall colleges to run rampant—Ontario most especially. Not one of them did anything at all about building more housing, or not keeping their healthcare system from collapsing, and not one of them stopped from the blame pile-on with the federal government. I keep making this point because nobody wants to listen—we have a problem with the provinces, and nobody wants to acknowledge it so that we can start holding the premiers accountable.

Could Carney possibly stop using Nigel Farage's framing? Why is it so hard to learn the lesson of not giving the far-right any ammunition? FFS.

Dale Smith (@journodale.bsky.social) 2026-03-26T03:14:49.967Z

Ukraine Dispatch

Russian attacks on Kharkiv killed two, and damaged Danube port infrastructure in Izamil. It has been calculated that Russia has lost some 40 percent of its oil export capacity thanks to Ukrainian attacks.

Good reads:

  • Mark Carney said that he’s “very disappointed” in the CEO of Air Canada after his English-only condolence message (for what that matters).
  • Jill McKnight says there will be an independent investigation of the veterans rehabilitation programme and the company contracted to run it.
  • The hate crime legislation passed the House of Commons with the religious exemption provision removed from the law thanks to the Bloc’s amendment.
  • The head of the Canadian Army wants you to know that it’s not a “spending spree” and that they are being responsible in procurements.
  • The Ethics Commissioner released the data on MPs’ sponsored travel in 2025, and it was sharply down, particularly because there were so few trips to Israel.
  • RCMP Commissioner Mike Duheme is expressing “sincere regret” for the 1970s surveillance operation of First Nations leadership.
  • PolySeSouvient is lambasting the federal government for the poor roll-out of the gun buyback programme, which is well below its target.
  • Wealthsimple got permission to engage in prediction market trading in Canada, which is insane if you look at how that has enabled corruption in the US.
  • The outgoing chair of the Young Liberals wants his potential successors to promise not to take PMO jobs (because the grassroots needs an accountability role).
  • Mandatory voting will be one of the policy resolutions that the NDP will be debating at this weekend’s convention.
  • Doug Ford made an agreement with Carney for a one-year break on the HST on all new home purchases under $1 million (Can one year stimulate construction?)
  • An agreement in principle has been reached with Alberta on methane reductions under the MOU (and I await word as to how much they are watered down).
  • Lindsay Tedds points out that the “Alberta Advantage” of low taxes is offset by levies on property taxes akin to the Texas model (and that’s not good).
  • Althia Raj worries that a diminished NDP is losing its ability to hold the governing Liberals accountable as they have in the past. (I may quibble a bit there).

Odds and ends:

My Loonie Politics Quick Take looks at the process to appoint the new PBO, and why the Conservatives are promising shenanigans.

Want more Routine Proceedings? Become a patron and get exclusive new content.

Read the whole story
sarcozona
2 days ago
reply
Epiphyte City
Share this story
Delete
Next Page of Stories