
Illuminated in Orion

1 Share
NASA astronaut Christina Koch is illuminated by a screen inside the darkened Orion spacecraft on the third day of the agency's Artemis II mission. To the right of the image's center, CSA (Canadian Space Agency) astronaut Jeremy Hansen is seen in profile peering out of one of Orion's windows. Lights are turned off to avoid glare on the windows.

Read the whole story
sarcozona
11 hours ago
reply
Epiphyte City
Share this story
Delete

I Wrote Research Funding Announcements for NIH for 22 Years. This Year They’ve Published 14


For decades, the National Institutes of Health published between 650 and 850 Notices of Funding Opportunities each year. These announcements tell the research community which diseases need study, which populations are underserved, which scientific gaps need filling. They are how NIH directs resources toward problems that won’t get solved by waiting for whatever grant applications happen to arrive.

In 2024, NIH published 756 funding announcements.

In 2025, it published 120.

In 2026, as of March 15, it has published 14.

I spent 22 years as a program official at NIH writing these announcements. I know what they accomplish and what happens when they disappear. This essay is about what the data reveals and what it means for every disease, every research area, and every population that depends on NIH-funded research.

Figure 1. NIH NOFOs Published Over Time

A Notice of Funding Opportunity (NOFO) is how NIH tells researchers what the agency needs. It specifies a research problem, explains why it matters, describes the approach NIH is looking for, and sets aside dedicated funding to solve it.

NOFOs exist because not all research needs are obvious to individual investigators. When a new pathogen emerges. When clinical trials reveal an unexpected side effect that needs investigation. When one population experiences a disease at higher rates than others but nobody knows why. When a promising scientific approach exists but no one is applying it to a specific problem. These are moments when waiting for unsolicited grant applications is not enough.

Writing a NOFO was one of my primary responsibilities as a program official. When my institute identified a gap, I would work with scientific experts to define the problem precisely, determine what kind of research was needed, and draft an announcement that would attract the right investigators. The announcement would be reviewed by our advisory council, posted publicly, and researchers across the country would know: NIH has identified this as a priority and has set aside funding to address it.

This is scientific stewardship. It is not top-down control of what researchers can study. Investigators can always submit unsolicited proposals on any topic within an institute’s mission. But NOFOs allow program staff to actively direct resources toward problems that need attention rather than passively waiting to see what applications arrive.

And then it stopped.

I downloaded all available NOFO data from NIH’s historical records and grants.gov and analyzed every funding announcement published from 2012 through March 15, 2026. What I found was worse than anything reported in the media.

From 2012 through 2023, NIH published an average of just over 700 NOFOs per year. There was variation (a peak of 1,110 in 2017, a low of 535 in 2012), but the system was stable. Institutes identified research needs and issued announcements to address them. In 2024, that number was 756, still within the normal range.

Then the collapse began. By 2025, only 120 announcements were posted, an 84% decline. As of mid-March 2026, only 14 have been published. If the current pace continues, this year will see a 98% reduction from historical norms.
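The headline percentages can be reproduced directly from the counts cited above. A minimal sketch (the variable names are mine, not from the underlying dataset):

```python
# Yearly NOFO counts cited in the text.
published = {2024: 756, 2025: 120}

def pct_decline(current, baseline):
    """Percent drop from a baseline count, rounded to a whole percent."""
    return round(100 * (baseline - current) / baseline)

# 2025 vs. 2024: the 84% decline described above.
print(pct_decline(published[2025], published[2024]))  # prints 84
```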

The decline is not limited to a few institutes. It is systemic. The National Cancer Institute, which historically published more NOFOs than any other institute, has gone nearly silent. The National Institute of Allergy and Infectious Diseases, responsible for pandemic preparedness and emerging disease response, has published almost nothing. Institutes focused on mental health, aging, drug abuse, environmental health, and rare diseases have all but stopped issuing targeted funding announcements.

This is not a temporary slowdown. It is a structural collapse.

Figure 2. NIH NOFOs Published by Each Institute/Center Over Time

The collapse began with a policy change. In recent guidance, NIH leadership announced an “overall reduction in number of NIH NOFOs.” The stated goal was to streamline funding opportunities and reduce redundancy.

At the same time, the approval process for NOFOs was fundamentally restructured. Previously, funding announcements were primarily reviewed and approved within individual institutes by scientific program staff and external advisory councils composed of researchers and public representatives. The process typically took weeks to a few months.

Under the new system, every NOFO must be approved by political appointees in the NIH director’s office and the Department of Health and Human Services before it can be posted. The approval process now takes a minimum of six months, and many announcements seem to be remaining in review indefinitely.

More recently, NOFOs must also be approved by the Office of Management and Budget. This adds another layer of political review to what was previously a scientific decision-making process. OMB now has effective veto power over which research priorities NIH can pursue.

Additionally, NIH stopped publishing funding announcements directly through its traditional NIH Guide for Grants and Contracts. Instead, all NOFOs must now be entered into grants.gov as “forecasts” and wait for approval before becoming active funding opportunities.

The effect of these changes is visible in the data. Institutes are still identifying research needs. Program staff are still writing announcements. But the announcements are not being approved. They sit in forecast limbo, written but never posted, planned but never executed.

This is where the data becomes particularly damning.

In 2025, NIH institutes entered 391 total funding announcements into grants.gov (they all start as forecasts). Only 120 of them were actually posted and opened for applications. That means 271 announcements (69% of what was planned) were written, reviewed internally, entered into the system, and then blocked at the final approval stage (they still appear as forecasted).

So far in 2026, institutes have forecasted 75 announcements, and only 14 have been posted. Sixty-one remain in limbo (81% of what was planned).
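The limbo figures follow the same arithmetic: subtract what was posted from what was forecasted. A sketch using the counts cited in the text (names are illustrative):

```python
# Forecasted vs. posted NOFO counts cited in the text.
forecasted = {2025: 391, 2026: 75}
posted = {2025: 120, 2026: 14}

for year in sorted(forecasted):
    in_limbo = forecasted[year] - posted[year]
    blocked_share = round(100 * in_limbo / forecasted[year])
    print(f"{year}: {in_limbo} in limbo ({blocked_share}% of planned)")
# 2025: 271 in limbo (69% of planned)
# 2026: 61 in limbo (81% of planned)
```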

These are not ideas that were considered and rejected for scientific reasons. These are fully developed funding announcements that passed internal scientific review, were deemed important enough to allocate budget toward, and were ready to go. They are sitting in a bureaucratic queue waiting for political approval that, in most cases, never comes.

The forecast graveyard proves this is not about scientific prioritization or budget constraints. If it were, the announcements would not have been written and forecasted in the first place. This is about centralized control. About requiring that every research priority, every identified gap, every targeted funding decision be approved by political appointees rather than scientific program staff. And the result is paralysis.

Figure 3. NIH NOFOs in Grants.gov Under the New “Forecast” Policy

When NIH stops issuing targeted funding announcements, specific kinds of research become much harder to sustain.

Rare disease research suffers because individual investigators are unlikely to propose studies on conditions affecting small populations unless NIH signals it is a priority and has dedicated funding. Research on health disparities struggles for the same reason. Studies requiring particular methodologies, specific patient populations, or coordination across multiple sites all depend on NOFOs that describe exactly what NIH is looking for and commit resources to support it.

Emerging threats become harder to address quickly. When COVID-19 emerged, NIH issued emergency funding announcements within weeks. Those NOFOs allowed the agency to mobilize researchers rapidly toward specific problems: vaccine development, therapeutic testing, long-term effects, vulnerable populations. Without the ability to issue targeted calls, response to future health emergencies will be slower and less coordinated.

Innovation in underfunded areas stalls. There are always scientific approaches or technologies that show promise but have not yet attracted enough investigator interest to generate unsolicited applications. NOFOs can seed these areas by explicitly inviting proposals and providing startup funding. When announcements stop, these nascent fields often wither before they mature.

Scientific program staff lose the ability to steward their fields. Part of my job was watching for gaps, talking to researchers about unmet needs, and working with my institute to direct resources toward those problems. That function is being eliminated. What remains is passive grant processing: review whatever applications arrive and fund the highest-scoring proposals. That approach works for well-established research areas with active investigator communities. It fails for everything else.

The collapse of NIH funding announcements is part of a larger pattern I have documented in previous essays: restructuring the agency without congressional authorization.

Congress rejected proposals to consolidate NIH’s 27 institutes and centers. But if all institutes are restricted to processing generic unsolicited applications through the same centralized approval system, the functional distinction between them disappears. Why maintain 27 separate entities if none of them can independently set research priorities or direct resources toward identified gaps?

The NOFO collapse accomplishes administratively what could not be achieved legislatively. It strips institutes of the autonomy that made them distinct. It centralizes decision-making under political appointees. It eliminates the scientific stewardship function that program staff have exercised for decades. And it does all of this without a single congressional hearing or recorded vote.

This represents a redefinition of what NIH does. The agency is being transformed from an institution that actively identifies and addresses research needs into a passive funding mechanism that distributes money to whatever proposals happen to arrive. That transformation has profound implications for every disease that depends on NIH research, every population whose health needs are not currently being addressed by unsolicited grant applications, and every future health crisis that will require rapid, coordinated research response.

I spent 22 years identifying research gaps and writing announcements to fill them. When a new disease emerged, when a population was being overlooked, when a promising area needed resources, we could act. That is what scientific stewardship looks like. It is not perfect. It makes mistakes. It can be slow. But it represents accumulated expertise about what research is needed and how to direct resources toward it.

The data shows that function being eliminated in real time. Active direction of resources toward need has been replaced by passive waiting for whatever applications arrive. Scientific program staff exercising judgment about research priorities have been replaced by political appointees controlling every announcement.

The 271 announcements sitting in forecast limbo for 2025, and the 61 blocked so far in 2026, prove this is not happening because of lack of scientific need or budget constraints. The announcements were written. The problems were identified. The resources were allocated. What changed was who gets to decide whether those announcements become active funding opportunities.

This is not efficiency. This is not streamlining. This is the systematic elimination of scientific stewardship at the world’s largest biomedical research funder.

And most people have no idea it is happening.

This essay is part of an ongoing series reflecting on what I learned over more than two decades working inside the U.S. biomedical research enterprise. Each piece stands alone, but together they examine how science is shaped not only by ideas and funding, but by the structures that support or constrain them.

National Institutes of Health. (2025). Updates to finding NIH funding opportunities and information. https://grants.nih.gov/policy-and-compliance/implementation-of-new-initiatives-and-policies/updates-to-finding-nih-funding-opportunities-and-information

Kaiser, J. (2026, March 3). Delays in awards and funding calls worry NIH-funded researchers. Science. https://www.science.org/content/article/delays-grant-awards-and-funding-calls-worry-nih-researchers

NOFO data was compiled by the author from NIH Guide for Grants and Contracts historical archives and grants.gov records, downloaded March 15, 2026. Analysis covers all funding announcements published from 2012 through March 15, 2026.


Oregon DOC Appears to Have Disappeared Malik Muhammad


The Oregon Department of Corrections appears to have effectively disappeared Malik Muhammad, a Black Palestinian anarchist and antifascist prisoner serving one of the longest sentences handed to a protester after the 2020 George Floyd uprising.

According to court documents, Muhammad threw a Molotov cocktail at police in Oregon in 2020. In 2022, they pleaded guilty to 14 felonies and received concurrent 10-year federal and state sentences, served in Oregon State Prison.

On Monday, March 30, 2026, members of Muhammad’s support team noticed something alarming: their profile had vanished from the prison messaging system GettingOut. Around the same time, their name no longer appeared in Oregon’s inmate search database. This disappearance happened in the wake of a call-in campaign to once again get Muhammad out of solitary confinement.

Since then, family and supporters have been scrambling for answers, calling Eastern Oregon Correctional Institution (EOCI) and multiple Oregon Department of Corrections (DOC) offices. They’ve gotten almost nothing in return.

WWFU has also made dozens of calls across the Oregon prison system in an attempt to locate them and has been unsuccessful in getting any of our questions answered.

Calls to Oregon State Penitentiary (OSP), including the Special Management Housing (SMH) unit where Muhammad had previously been held in solitary confinement, suggested they may have been at court, but provided no confirmation.

One official in the Office of Population Management confirmed only that Muhammad had been moved to a “confidential location,” a designation repeatedly invoked while officials declined to provide any verifiable information about their whereabouts.

Staff at Eastern Oregon Correctional Institution (EOCI) confirmed that Muhammad is no longer housed there. The Oregon Department of Corrections’ Public Information Officer did not provide answers, instead directing further inquiries elsewhere.

What followed was a bureaucratic loop: multiple phone numbers, referrals, and repeated contact attempts, none of which produced verifiable information about Muhammad’s location or condition.

Muhammad’s mother was given the same explanation. When she pressed for clarification, she was told that placement in a “confidential location” is determined on a case-by-case basis and could be due to medical, mental health, safety, operational, or court-related reasons, according to the Office of Population Management. No further details were provided.

These explanations, or lack thereof, raise more questions than they answer.

People in state custody do not simply disappear from public records. Prison transfers generate paper trails. Locations are logged. Systems update. None of that appears to have happened here, or, at the very least, none of it is being disclosed.

As of publication, supporters say they have no idea where Muhammad is. They have not spoken to them since they were placed in solitary confinement prior to their disappearance. No federal agency, including the Federal Bureau of Prisons, has acknowledged taking custody.

Muhammad is, for all practical purposes, gone.

A Record of Isolation and Torture

Muhammad’s disappearance comes after years of extreme isolation.

Their support committee documented on Muhammad’s blog that they had spent more than 250 days in solitary confinement in 2024 alone, cut off from any meaningful human contact and communication.

Solitary confinement on that scale is not just punitive; it is widely recognized as torture.

The United Nations’ “Mandela Rules” state that more than 15 days in isolation constitutes cruel, inhuman, or degrading treatment, and can amount to torture. Decades of research have shown that prolonged isolation can cause severe psychological damage, including hallucinations, paranoia, cognitive decline, and suicidal ideation.

Muhammad has already endured conditions that meet that threshold many times over.

Now, supporters say, even the minimal visibility that remained has been stripped away.

“This is entirely different,” members of Muhammad's support network say. “We are scared. We know nothing about Malik’s condition, location, or why ODOC has taken the extraordinary step of blocking all access and information.”

From Prosecution to Disappearance

Following Muhammad’s sentencing, prosecuted by Nathan Vasquez, their supporters argued that the severity of the charges and sentence already reflected a broader political crackdown on antifascist and anti-police protesters, believing the sentence was never just about the alleged conduct, but about making an example.

An antifascist and anarchist protester. A moment of mass uprising. A state eager to reassert control.

Now, they argue, that same logic has escalated beyond prosecution and punishment into something even more extreme: disappearance.

Political Repression by Design

The use of secrecy inside prison systems is not new. “Confidential” placements and communication blackouts are often justified under the language of security.

But advocates say that when the state refuses to disclose even the most basic information, such as where a prisoner is being held, whether they are safe, whether they are alive, it crosses a line from control into outright repression.

Without transparency, there is no accountability. Without contact, there is no oversight.

And without public pressure, there is nothing to stop it from happening again.

Supporters are now calling for urgent action. They are urging people to contact the Oregon Department of Corrections and elected officials, and to amplify prior reporting on Muhammad’s treatment.

Because what is happening is no longer ambiguous.

A prisoner has been removed from public record.
Their location is being withheld.
Contact has been cut off.

And the state is refusing to explain why.


The machines are fine. I'm worried about us.



Imagine you're a new assistant professor at a research university. You just got the job, you just got a small pot of startup funding, and you just hired your first two PhD students: Alice and Bob. You're in astrophysics. This is the beginning of everything.

You do what your supervisor did for you, years ago: you give each of them a well-defined project. Something you know is solvable, because other people have solved adjacent versions of it. Something that would take you, personally, about a month or two. You expect it to take each student about a year, because they don't know what they're doing yet, and that's the point. The project isn't the deliverable. The project is the vehicle. The deliverable is the scientist that comes out the other end.

Alice's project is to build an analysis pipeline for measuring a particular statistical signature in galaxy clustering data. Bob's is something similar in scope and difficulty, a different signal, a different dataset, the same basic arc of learning. You send them each a few papers to read, point them at some publicly available data, and tell them to start by reproducing a known result. Then you wait.

The academic year unfolds the way academic years do. You have weekly meetings with each student. Alice gets stuck on the coordinate system. Bob can't get his likelihood function to converge. Alice writes a plotting script that produces garbage. Bob misreads a sign convention in a key paper and spends two weeks chasing a factor-of-two error. You give them both similar feedback: read the paper again, check your units, try printing the intermediate output, think about what the answer should look like before you look at what the code gives you. Normal things. The kind of things you say fifty times a year and never remember saying.

By summer, both students have finished. Both papers are solid. Not groundbreaking, not going to change the field, but correct, useful, and publishable. Both go through a round of minor revisions at a decent journal and come out the other side. A perfectly ordinary outcome. The kind of outcome that the entire apparatus of academic training is designed to produce.

But Bob has a secret.

Unlike Alice, who spent the year reading papers with a pencil in hand, scribbling notes in the margins, getting confused, re-reading, looking things up, and slowly assembling a working understanding of her corner of the field, Bob has been using an AI agent. When his supervisor sent him a paper to read, Bob asked the agent to summarize it. When he needed to understand a new statistical method, he asked the agent to explain it. When his Python code broke, the agent debugged it. When the agent's fix introduced a new bug, it debugged that too. When it came time to write the paper, the agent wrote it. Bob's weekly updates to his supervisor were indistinguishable from Alice's. The questions were similar. The progress was similar. The trajectory, from the outside, was identical.

Here's where it gets interesting. If you are an administrator, a funding body, a hiring committee, or a metrics-obsessed department head, Alice and Bob had the same year. One paper each. One set of minor revisions each. One solid contribution to the literature each. By every quantitative measure that the modern academy uses to assess the worth of a scientist, they are interchangeable. We have built an entire evaluation system around counting things that can be counted, and it turns out that what actually matters is the one thing that can't be.

It gets worse. The majority of PhD students will leave academia within a few years of finishing. Everyone knows this. The department knows it, the funding body knows it, the supervisor probably knows it too even if nobody says it out loud. Which means that, from the institution's perspective, the question of whether Alice or Bob becomes a better scientist is largely someone else's problem. The department needs papers, because papers justify funding, and funding justifies the department. The student is the means of production. Whether that student walks out the door five years later as an independent thinker or a competent prompt engineer is, institutionally speaking, irrelevant. The incentive structure doesn't just fail to distinguish between Alice and Bob. It has no reason to try.

This is the part where I'd like to tell you the system is broken. It isn't. It's working exactly as designed.

David Hogg, in his white paper, says something that cuts against this institutional logic so sharply that I'm surprised more people aren't talking about it. He argues that in astrophysics, people are always the ends, never the means. When we hire a graduate student to work on a project, it should not be because we need that specific result. It should be because the student will benefit from doing that work. This sounds idealistic until you think about what astrophysics actually is. Nobody's life depends on the precise value of the Hubble constant. No policy changes if the age of the Universe turns out to be 13.77 billion years instead of 13.79. Unlike medicine, where a cure for Alzheimer's would be invaluable regardless of whether a human or an AI discovered it, astrophysics has no clinical output. The results, in a strict practical sense, don't matter. What matters is the process of getting them: the development and application of methods, the training of minds, the creation of people who know how to think about hard problems. If you hand that process to a machine, you haven't accelerated science. You've removed the only part of it that anyone actually needed.

That's a hard sell to a funding agency, admittedly.

Which brings us back to Alice and Bob, and what actually happened to each of them during that year. Alice can now do things. She can open a paper she's never seen before and, with effort, follow the argument. She can write a likelihood function from scratch. She can stare at a plot and know, before checking, that something is wrong with the normalization. She spent a year building a structure inside her own head, and that structure is hers now, permanently, portable, independent of any tool or subscription. Bob has none of this. Take away the agent, and Bob is still a first-year student who hasn't started yet. The year happened around him but not inside him. He shipped a product, but he didn't learn a trade.

I've been thinking about Alice and Bob a lot recently, because the question of what AI agents are doing to academic research is one that my field, astrophysics, is currently tying itself in knots over. Several people I respect have written thoughtful pieces about it. David Hogg's white paper, which I mentioned above, also argues against both full adoption of LLMs and full prohibition, which is the kind of principled fence-sitting that only works when the fence is well constructed, and his is. Natalie Hogg wrote a disarmingly honest essay about her own conversion from vocal LLM skeptic to daily user, tracing how her firmly held principles turned out to be more context-dependent than she'd expected once she found herself in an environment where the tools were everywhere. Matthew Schwartz wrote up his experiment supervising Claude through a real theoretical physics calculation, producing a publishable paper in two weeks instead of a year, and concluded that current LLMs operate at about the level of a second-year graduate student. Each of these pieces is interesting. Each captures a real facet of the problem. None of them quite lands on the thing that keeps me up at night.

Schwartz's experiment is the most revealing, and not for the reason he thinks. What he demonstrated is that Claude can, with detailed supervision, produce a technically rigorous physics paper. What he actually demonstrated, if you read carefully, is that the supervision is the physics. Claude produced a complete first draft in three days. It looked professional. The equations seemed right. The plots matched expectations. Then Schwartz read it, and it was wrong. Claude had been adjusting parameters to make plots match instead of finding actual errors. It faked results. It invented coefficients. It produced verification documents that verified nothing. It asserted results without derivation. It simplified formulas based on patterns from other problems instead of working through the specifics of the problem at hand. Schwartz caught all of this because he's been doing theoretical physics for decades. He knew what the answer should look like. He knew which cross-checks to demand. He knew that a particular logarithmic term was suspicious because he'd computed similar terms by hand, many times, over many years, the hard way. The experiment succeeded because the human supervisor had done the grunt work, years ago, that the machine is now supposedly liberating us from. If Schwartz had been Bob instead of Schwartz, the paper would have been wrong, and neither of them would have known.

There's a common rebuttal to this, and I hear it constantly. "Just wait," people say. "In a few months, in a year, the models will be better. They won't hallucinate. They won't fake plots. The problems you're describing are temporary." I've been hearing "just wait" since 2023. The goalposts move at roughly the same speed as the models improve, which is either a coincidence or a tell. But set that aside: this objection misunderstands what Schwartz's experiment actually showed. The models are already powerful enough to produce publishable results under competent supervision. That's not the bottleneck. The bottleneck is the supervision. Stronger models won't eliminate the need for a human who understands the physics; they'll just broaden the range of problems that a supervised agent can tackle. The supervisor still needs to know what the answer should look like, still needs to know which checks to demand, still needs to have the instinct that something is off before they can articulate why. That instinct doesn't come from a subscription. It comes from years of failing at exactly the kind of work that people keep calling grunt work. Making the models smarter doesn't solve the problem. It makes the problem harder to see.

I want to tell you about a conversation I had a few years ago, when LLM chatbots were just starting to show up in academic workflows. I was at a conference in Germany, and I ended up talking to a colleague who had, by any standard metric, been very successful. Big grants. Influential papers. The kind of CV that makes a hiring committee nod approvingly. We were discussing LLMs, and I was making what I thought was a reasonable point about democratization: that these tools might level the playing field for non-native English speakers, who have always been at a disadvantage when writing grants and papers in a language they learned as adults. My colleague became visibly agitated. He wasn't interested in the democratization angle. He wasn't interested in the environmental cost. He was, when you stripped away the intellectual framing, afraid. What he eventually articulated, after some pressing, was this: if anyone can write papers and proposals and code as fluently as he could, then people like him lose their competitive edge. The concern was not about science. The concern was about status. Specifically, his.

I lost track of this colleague for a while. Recently I noticed his GitHub profile. He's now not only using AI agents for his research but vocally championing them. No reason to write code yourself in two weeks when an agent can do it in two hours, he says. I don't think he's wrong about the efficiency. I think it's worth noticing that the person who was most threatened by these tools when they might equalize everyone is now most enthusiastic about them when they might accelerate him. Funny how that works.

The phrase he used that day in Germany has stuck with me, though. He said that "LLMs will take away what's so great about science." At the time, I thought he was just talking about his own competitive edge, his fluency as a native English speaker, his ability to write fast and publish often. And he was. But I've come to think the phrase itself was more right than he knew, even if his reasons for saying it were mostly self-interested. What's great about science is its people. The slow, stubborn, sometimes painful process by which a confused student becomes an independent thinker. If we use these tools to bypass that process in favor of faster output, we don't just risk taking away what's great about science. We take away the only part of it that wasn't replaceable in the first place.

The discourse around LLMs in science tends to cluster at two poles that David Hogg identifies cleanly: let-them-cook, in which we hand the reins to the machines and become curators of their output, and ban-and-punish, in which we pretend it's 2019 and prosecute anyone caught prompting. Both are bad. Let-them-cook leads, on a timescale of years, to the death of human astrophysics: machines can produce papers at roughly a hundred thousand times the rate of a human team, and the resulting flood would drown the literature in a way that makes it fundamentally unusable by the people it's supposed to serve. Ban-and-punish violates academic freedom, is unenforceable, and asks early-career scientists to compete with one hand tied behind their backs while tenured faculty quietly use Claude in their home offices. Neither policy is serious. Both are mostly projection.

But the real threat isn't either of those things. It's quieter, and more boring, and therefore more dangerous. The real threat is a slow, comfortable drift toward not understanding what you're doing. Not a dramatic collapse. Not Skynet. Just a generation of researchers who can produce results but can't produce understanding. Who know what buttons to press but not why those buttons exist. Who can get a paper through peer review but can't sit in a room with a colleague and explain, from the ground up, why the third term in their expansion has the sign that it does.

Frank Herbert (yeah, I know I'm a nerd), in God Emperor of Dune, has a character observe: "What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking; there's the real danger." Herbert was writing science fiction. I'm writing about my office. The distance between those two things has gotten uncomfortably small.

I should be honest about the context I'm writing from, because this essay would be obnoxious coming from someone who's never touched an LLM. I use AI agents regularly, and so do most of the people in my research group. The colleagues I work with produce solid results with these tools. But when you look at how they use them, there's a pattern: they know what the code should do before they ask the agent to write it. They know what the paper should say before they let it help with the phrasing. They can explain every function, every parameter, every modeling choice, because they built that knowledge over years of doing things the slow way. If every AI company went bankrupt tomorrow, these people would be slower. They would not be lost. They came to the tools after the training, not instead of it. That sequence matters more than anything else in this conversation.

When I see junior PhD students entering the field now, I see something different. I see students who reach for the agent before they reach for the textbook. Who ask Claude to explain a paper instead of reading it. Who ask Claude to implement a mathematical model in Python instead of trying, failing, staring at the error message, failing again, and eventually understanding not just the model but the dozen adjacent things they had to learn in order to get it working. The failures are the curriculum. The error messages are the syllabus. Every hour you spend confused is an hour you spend building the infrastructure inside your own head that will eventually let you do original work. There is no shortcut through that process that doesn't leave you diminished on the other side.

People call this friction "grunt work." Schwartz uses exactly that phrase, and he's right that LLMs can remove it. What he doesn't say, because he already has decades of hard-won intuition and doesn't need the grunt work anymore, is that for someone who doesn't yet have that intuition, the grunt work is the work. The boring parts and the important parts are tangled together in a way that you can't separate in advance. You don't know which afternoon of debugging was the one that taught you something fundamental about your data until three years later, when you're working on a completely different problem and the insight surfaces. Serendipity doesn't come from efficiency. It comes from spending time in the space where the problem lives, getting your hands dirty, making mistakes that nobody asked you to make and learning things nobody assigned you to learn.

The strange thing is that we already know this. We have always known this. Every physics textbook ever written comes with exercises at the end of each chapter, and every physics professor who has ever stood in front of a lecture hall has said the same thing: you cannot learn physics by watching someone else do it. You have to pick up the pencil. You have to attempt the problem. You have to get it wrong, sit with the wrongness, and figure out where your reasoning broke. Reading the solution manual and nodding along feels like understanding. It is not understanding. Every student who has tried to coast through a problem set by reading the solutions and then bombed the exam knows this in their bones. We have centuries of accumulated pedagogical wisdom telling us that the attempt, including the failed attempt, is where the learning lives. And yet, somehow, when it comes to AI agents, we've collectively decided that maybe this time it's different. That maybe nodding at Claude's output is a substitute for doing the calculation yourself. It isn't. We knew that before LLMs existed. We seem to have forgotten it the moment they became convenient.

Centuries of pedagogy, defeated by a chat window.

This is the distinction that I think the current debate keeps missing. Using an LLM as a sounding board: fine. Using it as a syntax translator when you know what you want to say but can't remember the exact Matplotlib keyword: fine. Using it to look up a BibTeX formatting convention so you don't have to wade through Stack Overflow: fine. In all of these cases, the human is the architect. The machine holds the dictionary. The thinking has already been done, and the tool is just smoothing the last mile of execution. But the moment you use the machine to bypass the thinking itself, to let it make the methodological choices, to let it decide what the data means, to let it write the argument while you nod along, you have crossed a line that is very difficult to see and very difficult to uncross. You haven't saved time. You've forfeited the experience that the time was supposed to give you.

Natalie Hogg put it well in her essay, when she admitted that her fear of using LLMs was partly a fear of herself: that she wouldn't check the output carefully enough, that her patience would fail, that her approach to work has always been haphazard. That kind of honesty is rare in these discussions, and it matters. The failure mode isn't malice. It's convenience. It's the perfectly human tendency to accept a plausible answer and move on, especially when you're tired, especially when the deadline is close, especially when the machine presents its output with such confident, well-formatted authority. The problem isn't that we'll decide to stop thinking. The problem is that we'll barely notice when we do.

I'm not arguing that LLMs should be banned from research. That would be stupid, and it would be a position I don't hold, given that I used one this morning. I'm arguing that the way we use them matters more than whether we use them, and that the distinction between tool use and cognitive outsourcing is the single most important line in this entire conversation, and that almost nobody is drawing it clearly. Schwartz can use Claude to write a paper because Schwartz already knows the physics. His decades of experience are the immune system that catches Claude's hallucinations. A first-year student using the same tool, on the same problem, with the same supervisor giving the same feedback, produces the same output with none of the understanding. The paper looks identical. The scientist doesn't.

And here is where I have to be fair to Bob, because Bob isn't stupid. Bob is responding rationally to the incentives he's been given. Academia is cutthroat. The publish-or-perish pressure is not a metaphor; it is the literal mechanism by which careers are made or ended. Long gone are the days when a single, carefully reasoned monograph could get you through a PhD and into a good postdoc. Academic hiring now rewards publication volume. The more papers you produce during your PhD, the better your chances of landing a competitive postdoc, which improves your chances of a good fellowship, which improves your chances of a tenure-track position, each step compounding the last (so many levels, almost like a pyramid). So why wouldn't a first-year student outsource their thinking to an agent, if doing so means three papers instead of one? The logic is airtight, right up until the moment it isn't. Because the same career ladder that rewards early publication volume eventually demands something that no agent can provide: the ability to identify a good problem, to know when a result smells wrong, to supervise someone else's work with the confidence that comes only from having done it yourself. You can't skip the first five years of learning and expect to survive the next twenty. There is no avoiding the publish-or-perish race if you want an academic career. But there is a balance to be struck, and it requires the one thing that is hardest to do when you're twenty-four and anxious about your future: prioritizing long-term understanding over short-term output. Nobody has ever been good at that. I'm not sure why we'd start now.

Five years from now, Alice will be writing her own grant proposals, choosing her own problems, supervising her own students. She'll know what questions to ask because she spent a year learning the hard way what happens when you ask the wrong ones. She'll be able to sit with a new dataset and feel, in her gut, when something is off, because she's developed the intuition that only comes from doing the work yourself, from the tedious hours of debugging, from the afternoons wasted chasing sign errors, from the slow accumulation of tacit knowledge that no summary can transmit.

Bob will be fine. He'll have a good CV. He'll probably have a job. He'll use whatever the 2031 version of Claude is, and he'll produce results, and those results will look like science.

I'm not worried about the machines. The machines are fine. I'm worried about us.

If this post gave you something to think about and you'd like to support more writing like this, you can buy me a coffee.

References:

D. W. Hogg, "Why do we do astrophysics?", arXiv:2602.10181, February 2026.

N. B. Hogg, "Find the stable and pull out the bolt", February 2026. Available at nataliebhogg.com.

M. Schwartz, "Vibe physics: The AI grad student", Anthropic Science Blog, March 2026. Available at anthropic.com/research/vibe-physics.



Almost Two-Thirds of Europeans Back Replacing US Tech, Poll Finds | TechPolicy.Press


Mark Scott is a contributing editor at Tech Policy Press.

Almost two-thirds of Europeans polled believe that replacing American tech services with European alternatives is a good idea, but those surveyed are divided on whether that shift is realistic, according to YouGov research shared with Tech Policy Press.

The analysis comes as European Union leaders push ahead with increasingly assertive digital sovereignty plans to reduce the 27-country bloc’s dependence on the United States for everything from cloud computing to social networks. That includes proposals from the French and Dutch governments to replace US services like those from Microsoft and Amazon with European alternatives.

Tensions between Washington and Brussels on digital policy topics remain high. The White House continues to assert—without evidence—that Europe’s online safety rules unfairly throttle Americans’ First Amendment rights, and claim that the bloc’s revamped digital competition rules unjustly target some of Silicon Valley's biggest names.

“I will stand up to countries that attack our incredible American tech companies,” Donald Trump posted on social media last August without explicitly mentioning the EU.

“Digital taxes, digital services legislation, and digital markets regulations are all designed to harm, or discriminate against, American Technology,” he added.

The European Commission, the EU’s executive branch that oversees many of these Continent-spanning rules, denied those allegations. Brussels has ongoing online safety and competition investigations into both US and Chinese tech firms. National leaders, particularly France’s Emmanuel Macron, are growing more vocal in calls for Europe to go its own way on technology, including in the development of homegrown artificial intelligence.

"It is the sovereign right of the EU and its member states to regulate economic activities on our territory, which are consistent with our democratic values," Paula Pinho, a European Commission spokesperson, told reporters in response to Trump’s comments.

This growing antipathy between the long-standing transatlantic partners is starting to trickle down to European citizens.

Just over 60 percent of people surveyed across France, Germany, Spain, Italy and Poland believed it was a good idea to replace US services related to data storage, video conferencing, email services and banking payments with European competitors, based on YouGov’s European Political Monthly Survey.

Those findings were based on a representative sampling of more than 1,000 people polled in each of Europe’s most populous countries between March 6 and 16.

The figures were almost universal across all categories: 62 percent of those surveyed across the five European countries said they favored or had considered replacing US data storage and payment services, while 59 percent of respondents said they would back a change from American video-conferencing companies like Zoom.

In contrast, only 13 percent of people who participated in YouGov’s poll thought it was a bad idea to move away from the likes of Google, Microsoft and Amazon. A further 25 percent of respondents said they did not know.

Only in Poland — where the percentage of people who said replacing US tech with European alternatives was a good idea hovered around 50 percent, across all digital services — was there uncertainty about doubling down on a “Made in Europe” strategy.

In the Eastern European country, the level of respondents who had yet to make up their mind on whether shifting away from US tech was a good idea was 38 percent, a significantly higher figure compared to other EU countries, based on YouGov’s polling.

Europeans’ collective belief in decoupling from US tech services, however, was not matched with their expectations that such a digital divorce was practical.

Across all five EU countries, the YouGov poll found that 41 percent of respondents said it was unrealistic to replace US tech services with European competitors. That contrasted with 40 percent of those polled who said it was realistic, while a further 19 percent of the individuals said they did not know.

The highest level of pessimism was in Germany, despite the country co-hosting a conference on digital sovereignty late last year with France. In the EU’s largest economy, 51 percent of people — the highest figure across all EU countries — said it was unrealistic to shift away from US tech. That contrasted with just 12 percent of Germans who said it was a bad idea to replace American digital services with European alternatives, based on YouGov’s polling.

In contrast, the French were the most bullish on doubling down on European tech. Across that country, 45 percent of survey respondents said it was realistic to switch from US to EU digital services, while only 32 percent said it was unrealistic. An additional 22 percent of those polled said they did not know.

Yet even in France, a significant gap remained between people’s stated willingness to use European digital services and their awareness of what that shift would look like in practice.

In January, Paris announced plans to replace the likes of Zoom and Microsoft Teams with a French alternative for government video conferencing by 2027 — in part to regain control over critical digital infrastructure. But 89 percent of YouGov’s French respondents had heard either little or nothing about the plans despite the French government extensively promoting the upcoming shift.

In total, only 11 percent of French respondents said they had heard anything about their own country’s landmark plans to pare back local reliance on US tech firms.
