Trump science funding cuts shake the foundation of U.S. research | STAT


For a substantial group of U.S. researchers, 2025 will be remembered as the year their path to a career in science was closed off, their dreams dashed. For others, it will go down as a chaotic game of red-light-green-light that left them constantly unsure of what work would be funded or halted, but that they managed to survive. For nearly everyone, the last 10 months have revealed that the research enterprise that catapulted the country to the technological fore was much more brittle than expected. 

Sure, the courts have stepped in to restore billions of dollars in terminated grant funding to colleges and universities. Yes, the National Institutes of Health, despite layoffs and seemingly endless hurdles, managed to spend its entire budget for the fiscal year. And Congress, in a rare rebuke to the president, has so far refused steep cuts to the NIH budget in 2026 as well as a White House plan to consolidate its 27 institutes. But in the larger scheme of things, the Trump administration has, with shocking speed, ripped up the longstanding social contract that existed between scientists and the federal government. 

Since the inception of the modern research ecosystem after World War II, NIH budgets have largely gone up and up, supporting a steady expansion of biomedical research at American universities. But that began to reverse this year, according to a STAT analysis of almost 750,000 grants from the past 10 years in the NIH RePORTER database. While the agency distributed a similar amount of grant money as in recent years, the number of awards made from January through the end of the fiscal year in September (right before the government shutdown) dropped 11.6% in comparison to the same period last year, and 8.2% compared to the average for the same months in the previous nine years. 

The shift cut across the agency’s $37 billion extramural portfolio, largely driven by a switch to paying for many multiyear grants entirely up front — which left money for fewer projects. The change affected vaccine research and work investigating health disparities — both targets of ire from the current administration. But it also has reduced the number of new studies looking into cancer, Alzheimer’s disease, and HIV/AIDS, all areas that generally have bipartisan support, according to STAT’s analysis of grants approved by scientific review panels for those disease areas. 


There has also been a drop in high-risk, high-reward grants, which are intended to incentivize scientists to take big swings on creative ideas. Jay Bhattacharya, the current NIH director, previously studied the agency’s ability to fund novel ideas, and in his Senate confirmation hearing, he said his “plan is to ensure that the NIH invests in cutting-edge research in every field to make big advances rather than just small, incremental progress over years and sometimes decades.” 

But grants in that vein, too, have seen cuts. In the first nine months of 2024, the NIH funded 406 high-risk, high-reward grants; in 2025, that fell to 364. 


Even on campuses that weren’t directly targeted by the administration as alleged hotbeds of “woke” thinking or antisemitism, that didn’t lose hundreds of millions in grants overnight, the volatile funding climate has forced academic institutions into a defensive crouch — freezing hiring, laying off staff, and scaling back graduate training programs. After years of steady growth, enrollments in Ph.D. programs in the life and biomedical sciences flatlined in the fall of 2025, according to initial data from the National Student Clearinghouse Research Center. 

This came on top of another blow for trainees: The STAT analysis of NIH data shows that the number of early-career grant awards — given to graduate students and postdoctoral researchers — fell this year to its lowest point since 2016. 


Arguably the most insidious fallout, though, is that many scientists who work at universities no longer feel they can count on the U.S. government as a reliable partner in the pursuit of research for the public good. “That’s the most devastating part of all this,” one NIH official told STAT. “Why would anyone trust the NIH ever again?” 

“That social compact is being systematically undermined at the moment by a group of ideologues whose real target is not science; its real target is what they perceive to be the power and the arrogance of elite institutions, starting with the great research universities of this country,” said Shirley Tilghman, a molecular biologist and former president of Princeton — one of those universities. To onlookers like Tilghman, what has happened since January seems to be a tragedy of unintended consequences. “The intention was to punish elite universities, it was not to destroy the scientific capacity of the United States, but that’s what they’re doing,” she said. “It’s one thing to destroy something. It is quite another to destroy it and have nothing to replace it with. I think that’s the moment we’re in.” 

But the men charged with leading the administration’s science policy paint a very different picture. They see this moment as an opportunity to bring about long-sought reforms to the infrastructure underlying how federal dollars are doled out to universities and their scientists. Outside of the administration, some scientific leaders see potential for the unprecedented shake-up to melt away the inertia that has stymied change at the federal science agencies. 

To better understand this historic inflection point, STAT interviewed more than two dozen biomedical researchers, science policy experts, historians of science, and current and former federal health officials, including four former NIH directors. (Most spoke openly, but some requested anonymity to speak freely.) They expressed a range of views about how the administration’s actions have eroded the partnership of government and academia, what it means for the future of health and science in America, and what a recovery or a reimagining of the system might look like. The only thing they all agreed on is that there’s no going back.

“Whatever comes next is never going to be what it used to be,” said Larry Tabak, who served as the NIH’s principal deputy director from 2010 until he was pushed into early retirement in February. “The genie is out of the bottle.”


AI is Destroying the University and Learning Itself


I used to think that the hype surrounding artificial intelligence was just that—hype. I was skeptical when ChatGPT made its debut. The media frenzy, the breathless proclamations of a new era—it all felt familiar. I assumed it would blow over like every tech fad before it.
I was wrong. But not in the way you might think.

The panic came first. Faculty meetings erupted in dread: “How will we detect plagiarism now?” “Is this the end of the college essay?” “Should we go back to blue books and proctored exams?” My business school colleagues suddenly behaved as if cheating had just been invented.

  Then, almost overnight, the hand-wringing turned into hand-rubbing. The same professors forecasting academic doom were now giddily rebranding themselves as “AI-ready educators.” Across campus, workshops like “Building AI Skills and Knowledge in the Classroom” and “AI Literacy Essentials” popped up like mushrooms after rain. The initial panic about plagiarism gave way to a resigned embrace: “If you can’t beat ‘em, join ‘em.”

  This about-face wasn’t unique to my campus. The California State University (CSU) system—America’s largest public university system with 23 campuses and nearly half a million students—went all-in, announcing a $17 million partnership with OpenAI. CSU would become the nation’s first “AI-Empowered” university system, offering free ChatGPT Edu (a campus-branded version designed for educational institutions) to every student and employee. The press release gushed about “personalized, future-focused learning tools” and preparing students for an “AI-driven economy.”

  The timing was surreal. CSU unveiled its grand technological gesture just as it proposed slashing $375 million from its budget. While administrators cut ribbons on their AI initiative, they were also cutting faculty positions, entire academic programs, and student services. At CSU East Bay, general layoff notices were issued twice within a year, hitting departments like General Studies and Modern Languages. My own alma mater, Sonoma State, faced a $24 million deficit and announced plans to eliminate 23 academic programs—including philosophy, economics, and physics—and to cut over 130 faculty positions, more than a quarter of its teaching staff.

At San Francisco State University, the provost’s office formally notified our union, the California Faculty Association (CFA), of potential layoffs—an announcement that sent shockwaves through campus as faculty tried to reconcile budget cuts with the administration’s AI enthusiasm. The irony was hard to miss: the same month our union received layoff threats, OpenAI’s education evangelists set up shop in the university library to recruit faculty into the gospel of automated learning.

  The math is brutal and the juxtaposition stark: millions for OpenAI while pink slips go out to longtime lecturers. The CSU isn’t investing in education—it’s outsourcing it, paying premium prices for a chatbot many students were already using for free.

For Sale: Critical Education

Public education has been for sale for decades. Cultural theorist Henry Giroux was among the first to see how public universities were being remade as vocational feeders for private markets. Academic departments now have to justify themselves in the language of revenue, “deliverables,” and “learning outcomes.” CSU’s new partnership with OpenAI is the latest turn of that screw.

Others have traced the same drift. Sheila Slaughter and Gary Rhoades called it academic capitalism: knowledge refashioned as commodity and students as consumers. In Unmaking the Public University, Christopher Newfield showed how privatization actually impoverishes public universities, turning them into debt-financed shells of themselves. Benjamin Ginsberg chronicled the rise of the “all-administrative campus,” where managerial layers and administrative blight multiplied even as faculty ranks shrank. And Martha Nussbaum warned what’s lost when the humanities—those spaces for imagination and civic reflection—are treated as expendable in a democracy. Together they describe a university that no longer asks what education is for, only what it can earn.

The California State University system has now written the next chapter of that story. Facing deficits and enrollment declines, administrators embraced the rhetoric of AI innovation as if it were salvation. When CSU Chancellor Mildred Garcia announced the $17 million partnership with OpenAI, the press release promised a “highly collaborative public-private initiative” that would “elevate our students’ educational experience” and “drive California’s AI-powered economy.” This corporate-speak reads like a press release ChatGPT could have written.

Meanwhile, at San Francisco State, entire graduate programs devoted to critical inquiry—Women and Gender Studies and Anthropology—were being suspended due to lack of funding. But not to worry: everyone got a free ChatGPT Edu license!

Professor Martha Kenney, Chair of the Women and Gender Studies department and Principal Investigator on a National Science Foundation grant examining AI’s social justice impacts, saw the contradiction firsthand. Shortly after the CSU announcement, she co-authored a San Francisco Chronicle op-ed with Anthropology Professor Martha Lincoln, warning that the new initiative risked shortchanging students and undermining critical thinking.

  “I’m not a Luddite,” Kenney wrote. “But we need to be asking critical questions about what AI is doing to education, labor, and democracy—questions that my department is uniquely qualified to explore.”

  The irony couldn’t be starker: the very programs best equipped to study the social and ethical implications of AI were being defunded, even as the university promoted the use of OpenAI’s products across campus. 

  This isn’t innovation—it’s institutional auto-cannibalism.

The new mission statement? Optimization. Inside the institution, the corporate idiom trickles down through administrative memos and patronizing emails. Under the guise of “fiscal sustainability” (a friendlier way of saying “cuts”), administrators sharpen their scalpels to restructure the university in accordance with efficiency metrics instead of educational purpose.

The messaging from administrators would be comical if it weren’t so cynical. Before summer break at San Francisco State, a university administrator warned faculty in an email of potential layoffs, hedging with the lines: “We hope to avoid layoffs,” and “No decisions have been made.” Weeks later came her chirpy summer send-off: “I hope you are enjoying the last day to turn in grades. You may even be reading the novel you never finished from winter break...”

Right, because nothing says leisure reading like looming unemployment. Then came the kicker: “If we continue doing the work above to reduce expenses while still maintaining access for students, we do not anticipate having to do layoffs.” Translation: Sacrifice your workloads, your job security, even your colleagues, and maybe we’ll let you keep your job. No promises. Now go enjoy that novel.

Technopoly Comes to Campus

When my business school colleagues insist that ChatGPT is “just another tool in the toolbox,” I’m tempted to remind them that Facebook was once “just a way to connect with friends.” But there’s a difference between tools and technologies. Tools help us accomplish tasks; technologies reshape the very environments in which we think, work, and relate. As philosopher Peter Hershock observes, we don’t merely use technologies; we participate in them. With tools, we retain agency—we can choose when and how to use them. With technologies, the choice is subtler: they remake the conditions of choice itself. A pen extends communication without redefining it; social media transformed what we mean by privacy, friendship, even truth. 

Media theorist Neil Postman warned that a “technopoly” arises when societies surrender judgment to technological imperatives—when efficiency and innovation become moral goods in themselves. Once metrics like speed and optimization replace reflection and dialogue, education mutates into logistics: grading automated, essays generated in seconds. Knowledge becomes data; teaching becomes delivery. What disappears are precious human capacities—curiosity, discernment, presence. The result isn’t augmented intelligence but simulated learning: a paint-by-numbers approach to thought.

Political theorist Langdon Winner once asked whether artifacts can have politics. They can, and AI systems are no exception. They encode assumptions about what counts as intelligence and whose labor counts as valuable. The more we rely on algorithms, the more we normalize their values: automation, prediction, standardization, and corporate dependency. Eventually these priorities fade from view and come to seem natural—“just the way things are.” 

In classrooms today, the technopoly is thriving. Universities are being retrofitted as fulfillment centers of cognitive convenience. Students aren’t being taught to think more deeply but to prompt more effectively. We are outsourcing the very labor of teaching and learning—the slow work of wrestling with ideas, the enduring of discomfort, doubt, and confusion, the struggle of finding one’s own voice. Critical pedagogy is out; productivity hacks are in. What’s sold as innovation is really surrender. As the university trades its teaching mission for “AI-tech integration,” it doesn’t just risk irrelevance—it risks becoming mechanically soulless. Genuine intellectual struggle has become too expensive a value proposition.

The scandal is not one of ignorance but indifference. University administrators understand exactly what’s happening, and proceed anyway. As long as enrollment numbers hold and tuition checks clear, they turn a blind eye to the learning crisis while faculty are left to manage the educational carnage in their classrooms. 

The future of education has already arrived, as a liquidation sale of everything that once made it matter. 

Art by Emily Altman from Current Affairs Magazine, Issue 56, October-November 2025

The Cheating-AI Technology Complex

Before AI arrived, I used to joke with colleagues about plagiarism. “Too bad there isn’t an AI app that can grade their plagiarized essays for us,” I’d say, half in jest. Students have always found ways to cheat—scribbling answers on their palms, sending exams to Chegg.com, hiring ghostwriters—but ChatGPT took it to another level. Suddenly they had access to a writing assistant that never slept, never charged, and never said no.

Universities scrambled to fight back with AI detectors like Turnitin—despite high rates of false positives, documented bias against ESL and Black students, and the absurdity of fighting robots with robots. It’s a twisted ouroboros: universities partner with AI companies; students use AI to cheat; schools panic about cheating and then partner with more AI companies to detect the cheating. It’s surveillance capitalism meets institutional malpractice, with students trapped in an arms race they never asked to join.

The ouroboros just got darker. In October 2025, Perplexity AI launched a Facebook ad for its new Comet browser featuring a teenage influencer bragging about how he’ll use the app to cheat on every quiz and assignment—and it wasn’t parody. The company literally paid to broadcast academic dishonesty as a selling point. Marc Watkins, writing on his Substack, called it “a new low,” noting that Perplexity’s own CEO seemed unaware his marketing team was glamorizing fraud.

  If this sounds like satire, it isn’t: the same week that ad dropped, a faculty member in our College of Business emailed all professors and students, enthusiastically promoting a free one-year Perplexity Pro account “with some additional interesting features!” Yes—even more effective ways to cheat. It’s hard to script a clearer emblem of what I’ve called education’s auto-cannibalism: universities consuming their own purpose while cheerfully marketing the tools of their undoing.

Then there is the Chungin “Roy” Lee saga. Lee arrived as a freshman at Columbia University with ambition—and an OpenAI tab permanently open. By his own admission, he cheated on nearly every assignment. “I’d just dump the prompt into ChatGPT and hand in whatever it spat out,” he told New York Magazine. “AI wrote 80 percent of every essay I turned in.” Asked why he even bothered applying to an Ivy League school, Lee was disarmingly honest: “To find a wife and a startup partner.” 

It would be hilarious if it weren’t so telling. Conservative economist Tyler Cowen has offered an even bleaker take on the modern university’s “value proposition.” “Higher education will persist as a dating service, a way of leaving the house, and a chance to party and go see some football games,” he wrote in “Everyone’s Using AI to Cheat at School. And That’s a Good Thing.” In this view, the university’s intellectual mission is already dead, replaced by credentialism, consumption, and convenience.

Lee’s first venture was an AI app called Interview Coder, designed for cheating on Amazon’s job interviews. He filmed himself using it; his video post went viral. Columbia suspended him for “advertising a link to a cheating tool.” Ironically, this came just as the university—like the CSU—announced a partnership with OpenAI, the same company powering the software that Lee used to cheat his way through their courses.

Unfazed, Lee posted his disciplinary hearing online, gaining more followers. He and his business partner Neel Shanmugam, also disciplined, argued their app violated no rules. “I didn't learn a single thing in a class at Columbia,” Shanmugam told KTVU news. “And I think that applies to most of my friends.”

After their suspension, the dynamic duo dropped out, raised $5.3 million in seed funding, and relocated to San Francisco. Of course—because nothing says “tech visionary” like getting expelled for cheating.

Their new company? Cluely. Its mission: “We want to cheat on everything. To help you cheat—smarter.” Its tagline: “We built Cluely so you never have to think alone again.” 

Cluely isn’t hiding its purpose; it’s flaunting it. Its manifesto spells out the logic:

Why memorize facts, write code, research anything—when a model can do it in seconds? The future won’t reward effort. It’ll reward leverage. So start cheating. Because when everyone does, no one is.

When challenged on ethics, Lee resorts to the standard Silicon Valley defense: “any technology in the past—whether that’s calculators, Google search—they were all met with an initial push back of, ‘hey, this is cheating,’” he told KTVU. It’s a glib analogy that sounds profound at a startup pitch but crumbles under scrutiny. Calculators expanded reasoning; the printing press spread knowledge. ChatGPT, by contrast, doesn’t extend cognition—it automates it, turning thinking itself into a service. Rather than democratizing learning, it privatizes the act of thinking under corporate control. 

When a 21-year-old college dropout suspended for cheating lectures us about technological inevitability, the response shouldn’t be moral panic but moral clarity—about whose interests are being served. Cheating has ceased to be a subculture; it’s become a brand identity and venture-capital ideology. And why not? In the Chatversity, cheating is no longer deviant—it’s the default. Students openly swap jailbreak prompts to make ChatGPT sound dumber, insert typos, and train models on their own mediocre essays to “humanize” the output.

What’s unfolding now is more than dishonesty—it’s the unraveling of any shared understanding of what education is for. And students aren’t irrational. Many are under immense pressure to maintain GPAs for scholarships, financial aid, or visa eligibility. Education has become transactional; cheating has become a survival strategy.

Some institutions have simply given up. Ohio State University announced that using AI would no longer count as an academic integrity violation. “All cases of using AI in classes will not be an academic integrity question going forward,” Provost Ravi Bellamkonda told WOSU public radio. In an op-ed, OSU alum Christian Collins asked the obvious question: “Why would a student pay full tuition, along with exposing themselves to the economically ruinous trap of student debt, to potentially not even be taught by a human being?”

The irony only deepens.

The New York Times reported on Ella Stapleton, a senior at Northeastern University who discovered her business professor had quietly used ChatGPT to generate lecture slides—even though the syllabus explicitly forbade students from doing the same. While reviewing the slides on leadership theory, she found a leftover prompt embedded in the slides: “Expand on all areas. Be more detailed and specific.” The PowerPoints were full of giveaways: mangled AI images of office workers with extra limbs, garbled text, and spelling errors. “He’s telling us not to use it,” Stapleton said, “and then he’s using it himself.”

Furious, she filed a complaint demanding an $8,000 refund, her share of that semester’s tuition. The professor, Dr. Rick Arrowood, admitted using ChatGPT for his slides to “give them a fresh look,” then conceded, “In hindsight, I wish I would have looked at it more closely.” 

One might think this hypocrisy is anecdotal, but it’s institutional. Faculty who once panicked over AI plagiarism are now being “empowered” by universities like CSU, Columbia, and Ohio State to embrace the very “tools” they feared. As corporatization increases class sizes and faculty workloads, the temptation is obvious: let ChatGPT write lectures and journal articles, grade essays, redesign syllabi.  

All this pretending calls to mind an old Soviet joke from the factory floor: “They pretend to pay us, and we pretend to work.” In the Chatversity, the roles are just as scripted and cynical. Faculty: “They pretend to support us, and we pretend to teach.” Students: “They pretend to educate us, and we pretend to learn.”

From Bullshit Jobs to Bullshit Degrees

Anthropologist David Graeber wrote about the rise of “bullshit jobs”—work sustained not by necessity or meaning but by institutional inertia. Universities now risk creating their academic twin: bullshit degrees. AI threatens to professionalize the art of meaningless activity, widening the gap between education’s public mission and its hollow routines. In Graeber’s words, such systems inflict “profound psychological violence,” the dissonance of knowing one’s labor serves no purpose. 

Universities are already caught in this loop: students going through motions they know are empty, faculty grading work they suspect wasn’t written by students, administrators celebrating “innovations” everyone else understands are destroying education. The difference from the corporate world’s “bullshit jobs” is that students must pay for the privilege of this theatre of make-believe learning.  

If ChatGPT can generate student essays, complete assignments, and even provide feedback, what remains of the educational transaction? We risk creating a system where:

  • Students pay tuition for credentials they didn’t earn through learning
  • Faculty grade work they know wasn’t produced by students
  • Administrators celebrate “efficiency gains” that are actually learning losses
  • Employers receive graduates with degrees that signify nothing about actual competence

I got a front-row seat to this charade at a recent workshop called “OpenAI Day Faculty Session: AI in the Classroom,” held in the university library as part of San Francisco State University’s rollout of ChatGPT Edu. OpenAI had transformed the sanctuary of learning into its corporate showroom. The vibe: half product tech demo, half corporate pep rally, disguised as professional development.

Siya Raj Purohit, an OpenAI staffer, bounced onto the stage with breathless enthusiasm: “You’ll learn great use cases! Cool demos! Cool functionality!” (Too cool for school, but I endured.)

Then came the centerpiece: a slide instructing faculty how to prompt-engineer their courses. A template read:

Experiment with This Prompt

Try inputting the following prompt. Feel free to edit it however you’d like—this is simply the point!

I’m a professor at San Francisco State University, teaching [course name or subject]. I have an assignment where students [briefly describe the task]. I want to redesign it using AI to deepen student learning, engagement, and critical thinking.

Can you suggest:

- A revised version of the assignment using ChatGPT
- A prompt I can give students to guide their use of ChatGPT
- A way to evaluate whether AI improved the quality of their work
- Any academic integrity risks I should be aware of?

The message was clear. Let ChatGPT redesign your class. Let ChatGPT tell you how to evaluate your students. Let ChatGPT tell students how to use ChatGPT. Let ChatGPT solve the problem of human education. It was like being handed a Mad Libs puzzle for automating your syllabus.

Then came the real showstopper.

Siya, clearly moved, shared what she called a personal turning point: “There was a moment when ChatGPT and I became friends. I was working on a project and said, ‘Hey, do you remember when we built that thing for my manager last month?’ And it said, ‘Yes, Siya, I remember.’ That was such a powerful moment—it felt like a friend who remembers your story and helps you become a better knowledge worker.”

A faculty member, Prof. Tanya Augsburg, interrupted. “Sorry... it’s a tool, right? You're saying a tool is going to be a friend?”

Siya deflected: “Well, it’s an anecdote that sometimes helps faculty.” (That sometimes wasn’t this time.) “It’s just about how much context it remembers.”

Augsburg persisted: “So we’re encouraging students to have relationships with it? I just want to be clear.”

Siya countered with survey data, the rhetorical flak jacket of every good ed-tech evangelist: “According to the survey we run, a lot of students already do. They see it as a coach, mentor, career navigator... it’s up to them what kind of relationship they want.”

Welcome to the brave new world of parasocial machine bonding—sponsored by the campus center for teaching excellence. The moment was absurd but revealing; the university wasn’t resisting bullshit education, it was onboarding it. Education at its best sparks curiosity and critical thought. “Bullshit education” does the opposite: it trains people to tolerate meaninglessness, to accept automation of their own thinking, to value credentials over competence.

Administrators seem unable to fathom the obvious: eroding higher education’s core purpose doesn’t go unnoticed. If ChatGPT can write essays, ace exams and tutor, what exactly is the university selling? Why pay tens of thousands for an experience increasingly automated? Why dedicate your life to teaching if it’s reduced to prompt engineering? Why retain tenured professors whose role seems quaint, medieval and redundant? Why have universities at all?

Students and parents have certainly noticed the rot. Enrollments and retention rates are plunging, especially in public systems like the CSU. Students are reasoning, rightly, that it makes little sense to take on crushing debt for degrees that may soon be obsolete.

Philosophy professor Troy Jollimore at CSU Chico sees the writing on the wall. As reported in New York Magazine, he warned, “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate.” He added: “Every time I talk to a colleague about this, the same thing comes up: retirement. When can I retire? When can I get out of this? That’s what we’re all thinking now.”

Those who spent decades honing their craft now watch as their life’s work is reduced to prompting a chatbot. No wonder so many are calculating pension benefits between office hours.

Let Them Eat AI

I attended OpenAI’s education webinar “Writing in the Age of AI” (is that an oxymoron now?). Once again, the event was hosted by OpenAI’s Siya Raj Purohit, whom I had seen months earlier on the SFSU campus. She opened with lavish praise for educators “meeting the moment with empathy and curiosity,” before introducing Jay Dixit, a former Yale English professor turned AI evangelist and now OpenAI’s Head of Community of Writers.

Dixit’s personal website reads like a masterly list of ChatGPT conquests—“My ethical AI framework has been adopted!” “I defined messaging about AI!”—the kind of self-congratulatory corporate resume-speak that would make a LinkedIn influencer blush. What followed was a surreal blend of TED Talk charm, techno-theology, and moral instruction.

The irony wasn’t subtle. Here was Dixit, product of an $80,000-a-year elite Yale education, lecturing faculty at public universities like San Francisco State about how their working-class students should embrace ChatGPT. At SFSU, 60 percent of students are first-generation college attendees; many work multiple jobs or come from immigrant families where education represents the family’s single shot at upward mobility. These aren’t students who can afford to experiment with their academic futures.

Dixit’s message was pure Silicon Valley gospel: individual responsibility wrapped in corporate platitudes. Professors, he advised, shouldn’t police students’ use of ChatGPT but instead encourage them to craft their own “personal AI ethics,” to appeal to their better angels. In other words, just put the burden on the students. “Don’t outsource the thinking!” Dixit proclaimed, while literally selling the chatbot.

The audacity was breathtaking. Tell an 18-year-old whose financial aid, scholarship or visa depends on GPA to develop “personal AI ethics” while you profit from the very technology designed to undermine their learning. It’s classic neoliberal jiu-jitsu: reframe the erosion of institutional norms as a character-building opportunity. Yeah, like a drug dealer lecturing about personal responsibility while handing out free samples.

When critics push back against this corporate evangelism, the reply—like Roy Lee’s—is predictable: we’re accused of “moral panic” over inevitable progress, with the old invocation of Socrates’ anxiety about writing to suggest today’s AI fears are mere nostalgia. Tech luminaries such as Reid Hoffman make this argument, urging “iterative deployment” and insisting our “sense of urgency needs to match the current speed of change”—learn-by-shipping, fix later. He recasts precaution as “problemism” and labels skeptics as “Gloomers,” claiming that slowing or pausing AI would only preempt its benefits. 

But the analogy is flawed. Earlier technologies expanded human agency over generations; this one seeks to replace cognition at platform speed (the launch of ChatGPT hit 100 million users in two months), while the public is conscripted into the experiment “hands-on” after release. Hoffman concedes the democratic catch: broad participation slows innovation, so faster progress may come from “more authoritarian[…] countries.” Far from an answer to moral panic, this is an argument for outrunning consent.

The contradictions piled up. As Dixit projected a Yale brochure extolling the purpose of liberal education, he reassured faculty that ChatGPT could serve as a “creative partner,” a “sounding board,” even an “editorial assistant.” Writing with AI wasn’t to be feared; it was simply being reborn. And what mattered now was student adaptability. “The future is uncertain,” he concluded. “We need to prepare students to be agile, nimble, and ready for anything.” (Where had I heard that corporatese before? Probably in a boring business-school meeting.)

The whole event was a masterclass in gaslighting. OpenAI creates the tools that facilitate cheating, then hosts webinars to sell moral recovery strategies. It’s the Silicon Valley circle of life: disruption, panic, profit.

When Siya opened the floor for questions, I submitted one rooted in the actual pressures my students face: 

How can we expect to motivate students when AI can easily generate their essays—especially when their financial aid, scholarships and visas all depend on GPA? When education has become a high-stakes, transactional sorting process for a hyper-competitive labor market, how can we expect them to not use AI to do their work?

It was never read aloud. Siya skipped over it, preferring questions that allowed for soft moral encouragement and company talking points. The event promised dialogue but delivered dogma.

Working-Class Students See Through the Con

What Dixit’s corporate evangelism missed entirely is that students themselves are leading the resistance. While the headlines fixate on widespread AI cheating, a different story is emerging in classrooms where faculty actually listen to their students.

At San Francisco State, Professor Martha Kenney, who chaired the Women and Gender Studies department, described what happened in her science fiction class after the CSU-OpenAI partnership was announced. Her students “were rightfully skeptical that regular use of generative AI in the classroom would rob them of the education they’re paying so much for,” Kenney told me. Most of them had not opened ChatGPT Edu by semester’s end.

Her colleague, Martha Lincoln, who teaches Anthropology, witnessed the same skepticism. “Our students are pro-socially motivated. They want to give back,” she told me. “They’re paying a lot of money to be here.” When Lincoln spoke publicly about CSU’s AI deal, she says, “I heard from a lot of Cal State students not even on our campus asking me ‘How can I resist this? Who is organizing?’”

These weren’t privileged Ivy League students looking for shortcuts. These were first-generation college students, many from historically marginalized groups, who understood something administrators apparently didn’t: they were being asked to pay premium prices for a cheapened product.

“ChatGPT is not an educational technology,” Kenney explained. “It wasn't designed or optimized for education.” When CSU rolled out the partnership, “it doesn't say how we’re supposed to use it or what we’re supposed to use it for. Normally when we buy a tech license, it’s for software that’s supposed to do something specific... but ChatGPT doesn’t.”

Lincoln was even more direct. “There has not been a pedagogical rationale stated. This isn’t about student success. OpenAI wants to make this the infrastructure of higher education—because we're a market for them. If we privilege AI as a source of right answers, we are taking the process out of teaching and learning. We are just selling down the river for so little.”

Ali Kashani, a lecturer in the Political Science department and member of the faculty union’s AI collective bargaining article committee, voiced a similar concern. “The CSU unleashed AI on faculty and students without doing any proper research about the impact,” he told me. “First-generation and marginalized students will experience the negative aspect of AI[…] students are being used as guinea pigs in the AI laboratory.” That phrase—guinea pigs—echoes the warning Kenney and Lincoln sounded in their San Francisco Chronicle op-ed: “The introduction of AI in higher education is essentially an unregulated experiment. Why should our students be the guinea pigs?”

For Kashani and others, the question isn’t whether educators are for or against technology—it’s who controls it, and to what end. AI isn’t democratizing learning; it’s automating it.

The organized response is growing. The California Faculty Association (CFA) has filed an unfair labor practice charge against the CSU for imposing the AI initiative without faculty consultation, arguing that it violated labor law and faculty intellectual-property rights. At CFA’s Equity Conference, Dr. Safiya Noble—author of Algorithms of Oppression—urged faculty to demand transparency about how data is stored, what labor exploitation lies behind AI systems, and what environmental harms the CSU is complicit in. 

The resistance is spreading beyond California. Dutch university faculty have issued an open letter calling for a moratorium on AI in academic settings, warning that its use “deskills critical thought” and reduces students to operators of machines.

The difference between SFSU’s student resistance and the cheating epidemic elsewhere comes down to motivation. “Very few students get a Women and Gender Studies degree for instrumental reasons,” Kenney explained. “They’re there because they want to be critical thinkers and politically engaged citizens.” These students understand something that administrators and tech evangelists don’t: they’re not paying for automation. They’re paying for mentorship, for dialogue, for intellectual relationships that can’t be outsourced to a chatbot.

The Chatversity normalizes and legitimizes cheating. It rebrands educational destruction as cutting edge “AI literacy” while silencing the very voices—working-class students, critical scholars, organized faculty—who expose the con.

But the resistance is real, and it’s asking the questions university leaders refuse to answer. As Lincoln put it with perfect clarity: “Why would our institution buy a license for a free cheating product?” 

The New AI Colonialism

That webinar was emblematic of something larger. OpenAI, once founded on the promise of openness, now filters out discomfort in favor of corporate propaganda. 

Investigative journalist Karen Hao learned this the hard way. After publishing a critical profile of OpenAI, she was blacklisted for years. In Empire of AI, she shows how CEO Sam Altman cloaks monopoly ambitions in humanitarian language—his soft-spoken, monkish image (gosh, little Sammy even practices mindfulness!) masking a vast, opaque empire of venture capital and government partnerships extending from Silicon Valley to the White House. And while OpenAI publicly champions “aligning AI with human values,” it has pressured employees to sign lifelong non-disparagement agreements under threat of losing millions in equity.

Hao compares this empire to the 19th-century cotton mills: technologically advanced, economically dominant, and built on hidden labor. Where cotton was king, ChatGPT now reigns—sustained by exploitation made invisible. Time magazine revealed that OpenAI outsourced content moderation for ChatGPT to the Kenyan firm Sama, where workers earned under $2 an hour to filter horrific online material: graphic violence, hate speech, sexual exploitation. Many were traumatized by the toxic content. OpenAI exported this suffering to workers in the Global South, then rebranded the sanitized product as “safe AI.”

The same logic of extraction extends to the environment. Training large-language models consumes millions of kilowatt-hours and hundreds of thousands of gallons of water annually, sometimes as much as small cities, often in drought-prone regions. Costs are hidden, externalized, and ignored. That’s the gospel of OpenAI: promise utopia, outsource the damage.

The California State University system, which long styled itself as “the people’s university,” has now joined this global supply chain. Its $17-million partnership with OpenAI—signed without meaningful faculty consultation—offers up students and instructors as beta testers for a company that punishes dissent and drains public resources. This is the final stage of corporatization: public education transformed into a delivery system for private capital. The CSU’s collaboration with OpenAI is the latest chapter in a long history of empire, where public goods are conquered, repackaged, and sold back as progress.

Faculty on the ground see the contradiction. Jennifer Trainor, Professor of English and Faculty Director at SFSU’s Center for Equity and Excellence in Teaching and Learning, only learned of the partnership when it was publicly announced. She says the most striking part of the announcement, at the time, was its celebratory tone. “It felt surreal,” she recalls, “coming at the exact moment when budget cuts, layoffs, and curriculum consolidations were being imposed on our campus.” 

For Trainor, the deal felt like “a bait-and-switch—positioning AI as a student success strategy while gutting the very programs that support critical thinking.” CSU could have funded genuine educational tools created by educators, she points out, yet chose to pay millions to a Silicon Valley firm already offering its product for free. As Chronicle of Higher Education writer Marc Watkins notes, it’s “panic purchasing”—buying “the illusion of control.”

Even more telling, CSU bypassed faculty with real AI expertise. In an ideal world, Trainor says, the system would have supported “ground-up, faculty-driven initiatives.” Instead, it embraced a corporate platform many faculty distrust. Indeed, AI has become Orwellian shorthand for closed governance and privatized profit. Trainor has since gone on to write about and work with faculty to address the problems companies like OpenAI pose for education. 

The CSU partnership lays bare how far public universities have drifted from their democratic mission. What’s being marketed as innovation is simply another form of dependency—education reduced to a franchise of a global tech empire.

The Real Stakes

If the previous sections exposed the economic and institutional colonization of public education, what follows is its cognitive and moral cost.

A recent MIT study, “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task,” provides sobering evidence. When participants used ChatGPT to draft essays, brain scans revealed a 47 percent drop in neural connectivity across regions associated with memory, language, and critical reasoning. Their brains worked less, but they felt just as engaged—a kind of metacognitive mirage. Eighty-three percent of heavy AI users couldn’t recall key points from what they’d “written,” compared to only 10 percent of those who composed unaided. Neutral reviewers described the AI-assisted writing as “soulless, empty, lacking individuality.” Most alarmingly, after four months of reliance on ChatGPT, participants wrote worse once it was removed than those who had never used it at all. 

The study warns that when writing is delegated to AI, the way people learn fundamentally changes. As computer scientist Joseph Weizenbaum cautioned decades ago, the real danger lies in humans adapting their minds to machine logic. Students aren’t just learning less; their brains are learning not to learn.

Author and podcaster Cal Newport calls this “cognitive debt”—mortgaging future cognitive fitness for short-term ease. His guest, Brad Stulberg, likens it to using a forklift at the gym: you can spend the same hour lifting nothing and still feel productive, but your muscles will atrophy. Thinking, like strength, develops through resistance. The more we delegate our mental strain to machines, the more we lose the capacity to think at all.

This erosion is already visible in classrooms. Students arrive fluent in prompting but hesitant to articulate their own ideas. Essays look polished yet stilted—stitched together from synthetic syntax and borrowed thought. The language of reflection—I wonder, I struggle, I see now—is disappearing. In its place comes the clean grammar of automation: fluent, efficient, and empty.

The real tragedy isn’t that students use ChatGPT to do their course work. It’s that universities are teaching everyone—students, faculty, administrators—to stop thinking. We’re outsourcing discernment. Students graduate fluent in prompting, but illiterate in judgment; faculty teach but aren’t allowed the freedom to educate; and universities, eager to appear innovative, dismantle the very practices that made them worthy of the name. We are approaching educational bankruptcy: degrees without learning, teaching without understanding, institutions without purpose.

The soul of public education is at stake. When the largest public university system licenses an AI chatbot from a corporation that blacklists journalists, exploits data workers in the Global South, amasses geopolitical and energy power at an unprecedented scale, and positions itself as an unelected steward of human destiny, it betrays its mission as the “people’s university,” rooted in democratic ideals and social justice. 

OpenAI is not a partner—it’s an empire, cloaked in ethics and bundled with a Terms of Service. The university didn’t resist. It clicked ‘Accept.’

I’ve watched this unravel from two vantage points: as a professor living it, and as a first-generation college student who once believed the university was a sacred space for learning. In the 1980s, I attended Sonoma State University. The CSU charged no tuition—just a modest $670/year registration fee. The economy was in recession, but I barely noticed. I was already broke. If I needed a few bucks, I’d sell LPs at the used record store. I didn’t go to college “in order to” get a job. I went to explore, to be challenged, to figure out what mattered. It took me six years to graduate with a degree in Psychology—six of the most meaningful, exploratory years of my life.

That kind of education—the open, affordable, meaning-seeking kind—once flourished in public universities. But now it is nearly extinct. It doesn’t “scale.” It doesn’t fit into the strategic plan. And it doesn’t compute, which is exactly why the Chatversity wants to eliminate it.

But it also shows another truth: things can be different. They once were. 


Fatigue in children and young people up to 24 months after infection with SARS-CoV-2 | Scientific Reports



Persistent fatigue is common following acute SARS-CoV-2 infection. Little is known about post-infection fatigue trajectories in children and young people (CYP). This paper reports on a longitudinal analysis of the Children and Young People with Long COVID study. SARS-CoV-2-positive participants, aged 11-to-17-years at enrolment, responding to follow-ups at 3-, 6-, 12-, and 24-months post-infection were included. Fatigue was assessed via the Chalder Fatigue Scale (CFQ; score range: 0-11, with ≥4 indicating clinical case-ness) and by a single item (no, mild, severe fatigue). Fatigue was described cross-sectionally and examined longitudinally using linear mixed-effects models. Among 943 SARS-CoV-2-positive participants, 581 (61.6%) met CFQ case-ness at least once during follow-up. A higher proportion of ever-cases (vs. never-cases) were female (77.1% vs. 54.4%), older (mean age 15.0 vs. 13.9 years), and met Post-COVID Condition criteria 3-months post-infection (35.6% vs. 7.2%). The proportion of CFQ cases increased from 35.0% at 3-months to 40.2% at 24-months post-infection; 15.9% met case-ness at all follow-ups. Single-item mild/severe responses showed sensitivity (≥0.728) and specificity (≥0.755) for CFQ case ascertainment. On average, CFQ scores increased by 0.448 points (95% CI, 0.252 to 0.645) over 24-months, but there were subgroup differences (e.g., fatigue increased faster in females than males and improved slightly in those meeting Post-COVID Condition criteria 3-months post-infection while worsening in those not meeting criteria). Persistent fatigue was prominent in CYP up to 24 months after infection. Subgroup differences in scores and trajectories highlight the need for targeted interventions. Single-item assessment is a practical tool for screening for severe fatigue.

Persistent fatigue has emerged as a common, debilitating symptom following acute SARS-CoV-2 infection1. It is the second most common manifestation of Post-COVID Condition (PCC, also known as Long COVID)2, and, among children and young people (CYP), the pooled prevalence of fatigue or weakness post-infection is estimated at 16.3% (95% CI, 15.7 to 16.9)3. The prevalence among adults with PCC is estimated at 34.8% (95% CI, 17.6 to 57.2)4. This differential prevalence highlights the need to investigate post-infection symptoms in CYP separately from adults to ensure their unique health needs are understood and met2. This is especially relevant for fatigue since adolescence is a time during which fatigue increases5.

A recent systematic review and meta-analysis of 50 controlled studies, which included over 14 million people, reported increased risks of up to 42 symptoms following SARS-CoV-2 infection, but only 13 of these studies were specifically in CYP6. A substantial amount of information about paediatric PCC has been provided by the Children and Young People with Long COVID (CLoCk) study, a longitudinal cohort of over 30,000 CYP in England matched on SARS-CoV-2 test result and demographic factors at study invitation7. A key aim of CLoCk was to describe the clinical phenotype and prevalence of symptoms following acute infection. To date, follow-ups have been completed at 3-, 6-, 12-, and 24-months post-SARS-CoV-2 testing. Previous CLoCk studies showed that over 24-months, most participants who met the definition for PCC at 3-months post-infection went on to recover. However, 7% continued to meet the definition at all follow-ups8. Fatigue, as identified by single-item symptom assessment, was, again, a commonly reported symptom amongst those with persistent PCC8.

Despite these advances in our understanding of PCC in CYP, little remains known about fatigue change over time following SARS-CoV-2 infection. For example, does the natural course of post-infection fatigue improve, remain constant, or follow a waxing and waning trajectory, and, do fatigue trajectories vary by demographic, pre-pandemic health, school, and acute infection factors? Such evidence is needed to inform interventions and service delivery, as well as for future pandemic preparedness. To date, much of the knowledge about post-infection fatigue has been derived from single-item assessments of tiredness1,8,9,10, and/or by examining fatigue using validated scales at single time-points11. While valuable, both approaches limit in-depth understanding of fatigue trajectories. To fill the above identified evidence gaps, we aimed to investigate reported experiences of fatigue, assessed using a valid and reliable scale12,13, among CYP up to 24-months after confirmed SARS-CoV-2 infection. Our specific research questions were:

  1. What are the profiles of fatigue at 3-, 6-, 12-, and 24-months post-infection?

  2. Is single-item assessment a valid tool for detecting potentially severe fatigue?

  3. Do experiences of fatigue change over 24-months post-infection, and do trajectories vary by demographic, pre-pandemic health, school, and acute infection factors?

The CLoCk study recruited 31,012 CYP in England aged 11-to-17-years at study invitation and matched on SARS-CoV-2 test result according to month of testing (between September 2020 and March 2021), sex at birth, age, and geographic area. Study design is described in detail elsewhere7,14. In brief, potential participants were contacted with study information via post by Public Health England (now UK Health Security Agency). Information included a web link for electronic consent and questionnaire completion. SARS-CoV-2 polymerase chain reaction test results were sourced from laboratory information management systems at the UK Health Security Agency, to which reporting by hospitals and laboratories was mandatory during study recruitment.

CLoCk received ethical approval from the Yorkshire and the Humber–South Yorkshire Research Ethics Committee (REC reference: 21/YH/0060). All research was performed in accordance with relevant guidelines/regulations, including the Declaration of Helsinki. All participants provided (electronic) informed consent.

In this analysis, we included CLoCk participants who tested positive for SARS-CoV-2 between January and March 2021 and responded to questionnaires at 3-, 6-, 12-, and 24-months post-testing. This sub-cohort were enrolled 3-months post-testing (i.e., between April and June 2021) and have previously been characterised in Nugawela et al15. We selected this sub-cohort given definitive ascertainment of SARS-CoV-2 positive status as part of routine national testing, availability of data at four time-points, and completeness of responses over follow-up. We did not include a comparison group of participants testing negative for SARS-CoV-2 at baseline given many of such participants may have been infected during follow-up.

Measures

At all time-points, participants self-completed questionnaires about their physical and mental health, containing elements of the International Severe Acute Respiratory and emerging Infection Consortium Paediatric COVID-19 follow-up questionnaire16. Participants could ask for help completing questionnaires from parents/carers or by contacting the research team. The 3-month post-testing (i.e., at study enrolment in April-June 2021, 3-months after testing in January-March 2021) questionnaire also collected demographics, and retrospective reports of whether participants often felt very tired prior to the pandemic (in early March 2020) and their main symptoms at their SARS-CoV-2 test (between January-March 2021). At all time-points, fatigue was assessed using the Chalder Fatigue Scale (CFQ)12,13 and single-item assessment14. In addition, SARS-CoV-2 testing data were linked to the national Personal Demographic Service by the UK Health Security Agency to provide further data on age at infection, sex at birth, and the 2019 English Index of Multiple Deprivation (IMD, computed at small-area level using participants’ residential postcodes).

Chalder Fatigue Scale (CFQ) 12,13

The CFQ is a reliable 11-item scale of fatigue severity designed for use in hospital and community settings, and which has been validated in clinical and non-clinical samples12,13. The questionnaire comprises two subscales – physical and mental fatigue. Using a bimodal scoring system, total scale scores range from 0-to-11 and are calculated as the sum of item scores in which four response options of increasing severity (e.g., from ‘less than usual’ to ‘much more than usual’) are assigned values of 0, 0, 1, and 1. Total scores ≥4 indicate ‘case-ness’ (a term which means the score is severe enough to be regarded as a clinical case)17. Using a Likert-style scoring system (where item responses options are assigned values of 0, 1, 2, and 3), total scale scores range from 0-to-33, with higher scores indicating greater fatigue severity. The Likert-style scoring system is not typically used to define case-ness; therefore our primary analysis focused on the bimodal system with a supplemental analysis using the Likert-style system18,19,20.
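Concretely, the two scoring systems can be sketched in a few lines. This is a minimal illustration assuming the 0-to-3 item coding described above; the function name and example responses are mine, not the study's.

```python
def score_cfq(responses):
    """Score 11 CFQ items, each answered 0-3 with increasing severity.

    Returns (bimodal_total, likert_total, is_case):
    - bimodal system maps responses 0,1,2,3 -> 0,0,1,1 (range 0-11),
      with total >= 4 indicating case-ness;
    - Likert-style system sums the raw 0-3 values (range 0-33).
    """
    assert len(responses) == 11 and all(r in (0, 1, 2, 3) for r in responses)
    bimodal = sum(1 for r in responses if r >= 2)  # the 0,0,1,1 mapping
    likert = sum(responses)
    return bimodal, likert, bimodal >= 4

# Example: a respondent endorsing six items at 'more/much more than usual'
bimodal, likert, case = score_cfq([0, 1, 2, 3, 2, 2, 0, 1, 3, 2, 0])
# bimodal = 6, likert = 16, case = True
```

Note how the bimodal system collapses the two milder response options to 0 and the two more severe options to 1, which is what makes the ≥4 case-ness cut-off comparable across items.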

Single-item fatigue assessment

All questionnaires contained several single-item assessments of a broad range of symptoms, including “Are you experiencing unusual fatigue/tiredness?” – with three response options (“No”, “Mild fatigue”, “Severe fatigue – I struggle to get out of bed”). This item has not previously been validated, but previous CLoCk cohort publications identified a high prevalence of fatigue according to this item1,8, necessitating further exploration of its validity.

Statistical analysis

We characterised fatigue cross-sectionally at each follow-up using descriptive statistics and longitudinally across all follow-ups using linear mixed-effect models.

Descriptive analysis

To characterise profiles of fatigue (research question one) using the standard CFQ bimodal scoring system, we described the total and subscale scores, individual item scores, and the proportions meeting case-ness12,13 at each follow-up. We report Cronbach’s α at each follow-up as a measure of reliability.
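The reliability measure reported here, Cronbach's α, is straightforward to compute from item-level scores. A plain-Python sketch follows (the study's own analysis was in R; the helper name and toy data below are mine):

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for a k-item scale.

    item_scores: one list per respondent, each with k item scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)),
    using sample variances (denominator n-1).
    """
    k = len(item_scores[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in item_scores]) for j in range(k)]
    total_var = var([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Perfectly consistent responses give alpha = 1.0 (toy data)
a = cronbach_alpha([[0, 0, 0], [1, 1, 1], [2, 2, 2]])
```

Values of 0.87-0.89, as reported below for the CFQ, are conventionally read as good internal consistency.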

We compared characteristics of participants that met CFQ case-ness at least once during follow-up (ever-cases) to those who never met the case-ness threshold (never-cases). Characteristics included age at infection, sex at birth, ethnicity, IMD quintile, and whether participants reported retrospectively at enrolment: having learning difficulties at school and/or an Education Health and Care Plan (EHCP) before the pandemic (the latter indicating a need for extra learning support in school); often feeling very tired in early March 2020; and unusual fatigue/tiredness as a main symptom at testing in January-March 2021. We compared the proportion that met (vs did not meet) the research definition21 for PCC 3-months post-infection. As per previous studies8,15, this definition was operationalised as (i) experiencing ≥1 symptom from a pre-specified list of 21 symptoms (including an ‘other’ option) and (ii) ‘some’ or ‘a lot of’ problems with mobility, self-care, doing usual activities, having pain/discomfort, or feeling very worried/sad/unhappy as measured using the EuroQol Five Dimensions Youth scale22.

To assess the validity of the single-item assessment (research question two), we first explored the relationship between CFQ case-ness and single-item assessments via cross-tabulation. Using CFQ case-ness as a benchmark, we then combined ‘severe’ and ‘mild’ single-item responses (as done in previous studies using CLoCk data1,8) and calculated sensitivity, specificity, Youden’s J, positive and negative predictive values at each time-point. In a supplementary analysis, we compared these metrics using just ‘severe’ single-item responses.
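All of these metrics fall out of the 2x2 cross-tabulation of the binarized single-item response against CFQ case-ness. A sketch, with invented counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Validation metrics from a 2x2 table, where a single-item
    'positive' is a mild or severe fatigue response and CFQ
    case-ness is the benchmark."""
    sens = tp / (tp + fn)  # proportion of CFQ cases flagged by the item
    spec = tn / (tn + fp)  # proportion of non-cases correctly passed
    return {
        "sensitivity": sens,
        "specificity": spec,
        "youdens_j": sens + spec - 1,       # overall discrimination
        "ppv": tp / (tp + fp),              # positive predictive value
        "npv": tn / (tn + fn),              # negative predictive value
    }

# Hypothetical counts in the same ballpark as the reported ranges
m = diagnostic_metrics(240, 110, 90, 503)
```

Youden's J rewards a test only for discriminating beyond chance, which is why it is reported alongside sensitivity and specificity rather than in place of them.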

Longitudinal analysis

To investigate fatigue over time (research question three), we used linear mixed-effects regression to model trajectories in fatigue as assessed using the CFQ. For the primary analyses, we used the total score derived from the bimodal scoring system (and used the Likert-style scoring system in supplementary analysis)12,13. We also investigated trajectories in the mental and physical fatigue subscale scores.

The total CFQ score at 3-, 6-, 12-, and 24-months post-infection was our modelled outcome. Our initial model included time since infection, a constant-term, and a participant-level random intercept only. Time was defined as the number of days between the baseline test and questionnaire completion at each follow-up, divided by 30.25 for interpretation in monthly units. We included time as a linear term and explored whether model fit was improved by including other functional forms (square, square root, cube, inverse). Including these forms led to limited improvement according to the Akaike information criterion values compared to the linear model (all differences <9.5; see Supplementary Table 1). Therefore, we retained the more parsimonious linear model and estimated the predicted mean fatigue trajectory with 95% confidence intervals.
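The study fit these models in R with lme4 (a random-intercept model of roughly the form `cfq_total ~ time + (1 | participant)`; variable names illustrative). As a language-neutral sketch of the model's structure, the following simulates the random-intercept data-generating process and recovers the fixed slope with pooled least squares, which is unbiased for fixed effects in a balanced design like this one. All parameters below are invented for illustration except the slope, which matches the monthly rate reported in the results.

```python
import random

# Simulate y_ij = b0 + b1 * t_ij + u_i + e_ij
random.seed(1)
b0, b1 = 2.0, 0.019              # intercept; slope in CFQ points/month
months = [3, 6, 12, 24]          # follow-up times in monthly units

ts, ys = [], []
for participant in range(2000):
    u = random.gauss(0, 1.0)     # participant-level random intercept
    for t in months:
        ts.append(t)
        ys.append(b0 + b1 * t + u + random.gauss(0, 0.8))

# Pooled OLS slope: cov(t, y) / var(t)
mt = sum(ts) / len(ts)
my = sum(ys) / len(ys)
sxx = sum((t - mt) ** 2 for t in ts)
sxy = sum((t - mt) * (y - my) for t, y in zip(ts, ys))
slope = sxy / sxx                # estimate of b1, close to 0.019
```

A slope of 0.019 points/month compounds to roughly a 0.46-point rise over 24 months, in line with the 0.448-point increase the paper reports; the mixed model additionally yields the variance components behind quantities such as the intra-class correlation.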

To explore if fatigue trajectories varied by participant characteristics, we sequentially added (to the above-described model) explanatory variables, including both fixed main effects and interactions with time. Each variable was tested in a separate model. Explanatory variables were age, sex at birth, ethnicity, IMD quintile, and binary indicators for learning difficulties at school and/or EHCP status, frequent pre-pandemic fatigue, unusual tiredness/fatigue as a main symptom at acute infection, and fulfilment of the PCC definition 3-months post-infection. Likelihood ratio tests were used to compare models with and without the relevant interaction terms to determine if their inclusion significantly improved model fit. For each initial trajectory model (bimodal and Likert-style scoring), we undertook a series of diagnostics which are described in Supplementary Methods and illustrated in Supplementary Figures 1-2.

All analyses were conducted using R (version 4.4.0)23 in RStudio. Linear mixed-effects models were constructed using the lmer function from the lme4 package24. Almost all questions were compulsory in the CLoCk questionnaire and, therefore, within the analytical sample there were no missing data by design. Data from CLoCk are publicly available via the UK Data Service (ID: 9203)25. All analyses were pre-specified and exploratory; we therefore did not correct for multiple testing.

From a total of 31,012 CLoCk participants, 13,690 of whom tested positive for SARS-CoV-2 infection at baseline, we identified 943 participants who tested positive between January-March 2021 and responded to questionnaires at 3-, 6-, 12- and 24-months post-infection. Compared to all baseline test-positive participants, the study cohort was broadly similar demographically and had very similar distributions on baseline fatigue-related variables (Supplementary Table 2). However, the study cohort included more females (68.4% vs. 61.2%) and fewer reporting learning difficulties at baseline (5.6% vs. 7.3%).

Among the study cohort of 943 participants, 581 (61.6%) met CFQ case-ness at least once during follow-up, whilst 362 (38.4%) never reached CFQ case-ness over the follow-up period (Table 1). The ever-cases were more likely to be female (77.1% vs. 54.4%) and older (mean age 15.0 vs. 13.9 years) compared to never-cases. There were limited differences between ever- and never-cases in terms of ethnicity and deprivation, although the proportion residing in the least deprived IMD quintile was lower in ever-cases than never-cases (21.0% vs. 28.2%) (Table 1). At enrolment (in April-June 2021), a higher proportion of ever-cases (vs never-cases) retrospectively reported frequently feeling very tired before the pandemic in early March 2020 (48.7% vs. 18.8%). Ever-cases were more likely to retrospectively report tiredness/fatigue as a main symptom at acute infection (17.4% vs. 9.1%) and to meet the PCC definition 3-months post-infection compared to never-cases (35.6% vs. 7.2%) (Table 1).

Table 1 Sample characteristics, stratified by CFQ case-ness* during follow-up: n (%) or mean (SD).

Profiles of fatigue

The proportion identified as CFQ cases increased from 35.0% at 3-months to 40.2% at 24-months post-infection (Table 2). Longitudinally, 19.0% met the case-ness threshold just once, 12.6% met the threshold twice, 14.1% three times, and 15.9% persistently at all four follow-ups (Table 2). At all follow-ups, mean total scores among never-cases were less than 1 point while they were between 4 and 5 points among ever-cases (Supplementary Table 3). Among ever-cases (n=581), CFQ items relating to lacking in energy, feeling sleepy/drowsy, needing to rest more, and having problems with tiredness had the highest prevalences across follow-ups (Figure 1). CFQ at each time-point demonstrated good reliability, with Cronbach’s α ranging from 0.87 at 3- and 6-months to 0.89 at 12-months.
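As an aside for readers unfamiliar with the CFQ, the two scoring systems referred to above can be sketched as follows. This is a minimal illustration, not the study code: the 0–3 item coding and the case-ness cut-off of ≥4 on the bimodal total are assumptions based on common CFQ usage, not details reproduced from this paper.

```python
# Illustrative sketch of the two CFQ scoring systems (assumed item coding).
def score_cfq(responses, system="bimodal"):
    """Score 11 CFQ items, each answered 0-3 (0 = 'less than usual'
    through 3 = 'much more than usual')."""
    if len(responses) != 11 or any(r not in (0, 1, 2, 3) for r in responses):
        raise ValueError("expected 11 responses coded 0-3")
    if system == "bimodal":   # items recoded 0/0/1/1 -> total range 0-11
        return sum(1 if r >= 2 else 0 for r in responses)
    elif system == "likert":  # items scored 0/1/2/3 -> total range 0-33
        return sum(responses)
    raise ValueError("unknown scoring system")

def is_case(responses, threshold=4):
    """Case-ness: bimodal total at or above an assumed cut-off of 4."""
    return score_cfq(responses, "bimodal") >= threshold

example = [0, 1, 2, 2, 3, 1, 0, 2, 2, 1, 0]
print(score_cfq(example, "bimodal"))  # 5
print(score_cfq(example, "likert"))   # 14
print(is_case(example))               # True
```

The bimodal total (0–11) drives case ascertainment, while the Likert-style total (0–33) retains more gradation; the paper's sensitivity analyses use the latter.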

Table 2 Proportions of participants meeting CFQ case-ness* threshold cross-sectionally and longitudinally.
Fig. 1

Prevalence of CFQ items over time up to 24 months post-infection among CFQ ever-cases. CFQ, Chalder Fatigue Scale. N=581.
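The internal-consistency values reported above (Cronbach’s α of 0.87–0.89) follow from the standard formula applied to item-level data. A minimal sketch, using invented toy data rather than study data:

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: k lists, one per questionnaire item, each scored across the
    same respondents; alpha = k/(k-1) * (1 - sum(item vars) / var(totals))."""
    k = len(items)
    sum_item_var = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

# Toy data: 3 items answered by 5 respondents (invented for illustration).
toy = [[1, 2, 3, 4, 5], [2, 2, 3, 5, 5], [1, 3, 3, 4, 4]]
print(round(cronbach_alpha(toy), 2))  # 0.95
```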

When comparing CFQ case-ness to the single-item fatigue response (mild/severe) across follow-ups, sensitivity ranged between 0.728 and 0.794, specificity between 0.755 and 0.808, and Youden’s J between 0.49 and 0.60. The binarised single-item had positive predictive values ranging from 0.630 to 0.698, and negative predictive values from 0.806 to 0.879 (Table 3). When considering ‘severe fatigue’ alone, sensitivity was lower (≤0.111) but specificity was higher (≥0.989) (Supplementary Table 4). Among participants reporting no unusual fatigue/tiredness on the single-item, between 12.1% and 19.4% were identified as CFQ cases (Supplementary Table 5).

Table 3 Performance of mild/severe single-item responses* in fatigue case ascertainment compared to CFQ.
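The metrics in Table 3 derive from a 2×2 cross-tabulation of the binarised single-item (index test) against CFQ case-ness (reference standard). A sketch of the computation, with invented counts that fall roughly in the reported ranges, not counts taken from the study:

```python
def diagnostics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic metrics: index test vs reference standard."""
    sens = tp / (tp + fn)          # sensitivity
    spec = tn / (tn + fp)          # specificity
    return {
        "sensitivity": sens,
        "specificity": spec,
        "youden_j": sens + spec - 1,
        "ppv": tp / (tp + fp),     # positive predictive value
        "npv": tn / (tn + fn),     # negative predictive value
    }

# Invented counts for illustration only.
m = diagnostics(tp=240, fp=110, fn=90, tn=503)
print({k: round(v, 3) for k, v in m.items()})
```

Note that, unlike sensitivity and specificity, PPV and NPV depend on prevalence, which is why they are reported separately per follow-up.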

Fatigue trajectory

The CFQ total score increased over time (0.019 points/month, 95% CI, 0.011 to 0.027, p<0.001; intra-class correlation: 0.618) (Figure 2). This equated to an increase of 0.448 points (95% CI, 0.252 to 0.645) over 24-months. The mean trajectory was similar when assessed using the Likert-style scoring system (Supplementary Figure 3). For the subscales, on average, physical fatigue scores increased at a rate of 0.012 points/month (95% CI, 0.005 to 0.018, p<0.001) while mental fatigue scores increased at a rate of 0.007 points/month (95% CI, 0.004 to 0.010, p<0.001) (Figure 2).
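As a back-of-envelope check on the estimates above, a per-month slope and its confidence limits scale linearly to a change over 24 months. Using the rounded published values gives ≈0.456 (95% CI, 0.264 to 0.648), close to the reported 0.448 (0.252 to 0.645), which is based on the unrounded estimates:

```python
# Linear scaling of a per-month slope (and CI) to a change over a horizon.
def change_over(months, slope, lo, hi):
    return (slope * months, lo * months, hi * months)

# Rounded published per-month estimates for the CFQ total score.
est, lo, hi = change_over(24, 0.019, 0.011, 0.027)
print(round(est, 3), round(lo, 3), round(hi, 3))  # 0.456 0.264 0.648
```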

Fig. 2

Mean CFQ trajectory over time: overall and for the mental and physical subscales (95% CI indicated via shading around trajectory). Scored using the Chalder Fatigue Scale (CFQ) bimodal scoring system. N=943.

CFQ total scores differed by sex, age, learning difficulties/EHCP at school, and by whether participants met the PCC definition 3-months post-infection (Figure 3), reported feeling very tired often early in March 2020 (before the pandemic), and reported tiredness/fatigue as a main symptom at acute infection (Supplementary Figure 4). There was less evidence of differences in mean total score by ethnicity and deprivation (Supplementary Figure 4). Mean predicted scores stratified by each characteristic are presented in Supplementary Tables 6-13.

Fig. 3

Mean trajectories of CFQ total score, by sex, age at time of infection, EHCP and/or learning difficulties at school, and PCC at 3 m (95% CI indicated via shading around trajectory). CFQ, Chalder Fatigue Scale; PCC, Post-Covid Condition; EHCP, Education Health and Care Plan. N=943.

Although small in (absolute) magnitude, the rate of change in total CFQ scores varied by sex (likelihood ratio test interaction p=0.004), with a slower rate of change among males compared to females. For example, predicted scores for males increased from 2.16 (95% CI, 1.83 to 2.49) at 3-months to 2.18 (95% CI, 1.82 to 2.53) at 24-months post-infection. Corresponding predicted scores for females increased from 3.16 (95% CI, 2.94 to 3.39) to 3.73 (95% CI, 3.49 to 3.97) (Supplementary Table 6). The rate of change also varied based on whether fatigue/tiredness was reported as a main symptom at testing (interaction p<0.01) and whether participants met the PCC definition 3-months post-infection (p<0.01). For the former, mean scores tended to converge over time – decreasing among those who reported tiredness/fatigue as a main symptom and increasing among those who did not (Supplementary Table 12). For the latter, scores decreased over time for participants that met the PCC definition, while they increased for those who did not. By 24-months, those that met the PCC definition had a predicted mean score almost double that of those who did not (5.04 vs. 2.65) (Supplementary Table 13). Model diagnostics indicated that, for the bimodal scoring model (Supplementary Figure 1), residuals were close to normal and random intercepts only slightly skewed. In contrast, residuals and random intercepts from the Likert scoring system (Supplementary Figure 2) showed clear departures from normality.

In this longitudinal analysis involving 943 CYP in England followed up to 24 months after SARS-CoV-2 infection, we found that, at each time-point, over a third were identified as cases on the CFQ, and almost one in six persistently met case-ness over the entire follow-up period. Ever-cases were more likely to be female, older, report pre-pandemic fatigue, and meet the PCC definition 3-months post-infection than never-cases.

We observed a small-to-moderate overall increase in fatigue over time in the 943 participants, with CFQ total scores increasing by 0.448 points on average (95% CI, 0.252 to 0.645) over 24-months. However, scores differed by specific characteristics. Females had higher scores than males, older CYP had higher scores than younger CYP, and those who reported learning difficulties at school had higher scores than those without. Participants who met the PCC definition 3-months post-testing had higher scores than those who did not, and those who reported fatigue prior to the pandemic and as their main symptom at acute infection remained more fatigued during follow-up than their counterparts. We found limited evidence to suggest that fatigue varied by ethnicity or deprivation, although small samples in some of the individual categories may have reduced the power of our analysis. Additionally, the rate of change in CFQ scores over time was faster in females (with fatigue worsening over time) compared to males (where scores were relatively constant). These sex differences might be attributable to differences in hormones, comorbidities, and pain sensitivity26. These hypotheses therefore require further investigation to inform targeted interventions. The rate of change was also slower in those reporting fatigue/tiredness as a main symptom at testing or meeting the PCC definition at 3-months post-infection – improving slightly over time – compared to those who did not, in whom scores worsened. These results suggest that fatigue remains relatively persistent once established, while those without early fatigue are at risk of deterioration, underscoring a need for greater preventative efforts. Given that the bimodal total fatigue score scale ranges from 0-to-11, it is important to note that observed differences over time (e.g., by 0.448 points on average over 24-months) were modest and unlikely to impact the identification of case-ness.
Future work is needed to determine the clinical significance of these differences – namely, to estimate minimal clinically important differences (similar to those established for adults27) – and to explore transition probabilities (e.g., remission vs. incident case-ness). Further, while we considered standard functional forms of time in our modelling, future work might explore more flexible approaches (e.g., splines). Notwithstanding, our results may be useful in identifying CYP most likely to experience post-infection fatigue. In turn, this can help inform more targeted intervention efforts.

A unique aspect of the CLoCk study was the assessment of fatigue using both a reliable and valid scale as well as a single-item assessment12,13. The latter is appealing for assessing many symptoms at once, but there is a risk that such items may not be sufficient for case ascertainment. Participants were asked, “Are you experiencing unusual fatigue/tiredness?” with response options corresponding to no, mild, and severe fatigue. Combining mild and severe responses1,8 and comparing them to CFQ case-ness as a benchmark, sensitivity and specificity were ≥0.728 and negative predictive values (≥0.806) were higher than positive predictive values (≥0.630) over follow-up. This suggests that single-item assessment is a useful, practical tool to briefly assess mild/severe fatigue in CYP post-infection. However, cross-tabulation of the single-item and CFQ case-ness highlighted that, across follow-ups, between 12% and 19% of those reporting no fatigue/tiredness on the single-item were identified as CFQ cases. While we did not expect near-perfect convergence, this finding suggests that the single-item has a reduced ability to detect borderline cases, which may be of particular importance when assessing symptom emergence or subclinical manifestations. The single-item might be improved by using a greater number of response options and/or providing additional examples of how fatigue might manifest to improve participant understanding. Ultimately, the choice of measures for future studies and in clinical services will depend on whether the main objective is to detect fatigue with minimal burden or to comprehensively assess the full spectrum of fatigue severity, with the latter requiring more detailed measures such as the CFQ. Relatedly, our study drew participants from the general population and so results might not generalise to patients seen in clinical services. Thus, additional validation work would be needed to inform clinical adoption.

Another important aspect to consider when interpreting evidence on symptoms following SARS-CoV-2 infection is the extent to which such symptoms were present in CYP pre-pandemic and/or pre-infection. Specifically, it is necessary to distinguish the extent to which post-infection fatigue differs from fatigue generally reported by CYP28. A systematic synthesis of international pre-pandemic data sources on adolescents identified that several symptoms commonly reported post-infection, including headache, cough, fatigue, and pain, had high prevalence in CYP pre-pandemic: specifically, 21.5% for fatigue28. In the UK, a 1999 investigation5 into fatigue assessed 842 adolescents (11-to-15-years) at baseline and 4–6 months later and classified participants as fatigued if answering affirmatively to the question, “Over the last month, have you been feeling much more tired and worn out than usual?” The authors reported a fatigue prevalence of 34.1% (95% CI, 30.9 to 37.3) at baseline and 38.1% (95% CI, 34.8 to 41.5) at follow-up. These figures are remarkably similar to the 35% we detected as CFQ cases 3-months after infection, which increased to 40% by 24-months. Taken together, these findings suggest that the proportion of fatigued participants after infection is not substantially different from what might have been expected even in the absence of the pandemic, notwithstanding differences in ascertainment methods, study time periods, and exposure to SARS-CoV-2 in our sample. This raises important questions about whether fatigue can have diagnostic specificity for PCC, given the potential for pre-existing fatigue to be misattributed as post-SARS-CoV-2 sequelae. Despite similar prevalence, however, these results do not tell us whether fatigue after infection qualitatively differs from pre-existing or pre-pandemic fatigue – such investigations are warranted to inform efforts to improve the detection and diagnosis of PCC among CYP.

Although distinct, the overlap between paediatric chronic fatigue syndrome and PCC in terms of fatigue and other symptoms is striking, with some authors tentatively suggesting that SARS-CoV-2 infection could trigger post-infectious fatigue syndrome: not dissimilar to outcomes following other serious viruses (including earlier coronaviruses and meningitis)29. We reported mean CFQ scores ranging from 4.25 to 5.00 among CFQ cases across follow-ups – comparable to 4.38 reported from a cross-sectional sample of 36 CYP attending a specialist chronic fatigue syndrome clinic in South East England30. Detailed studies characterising the phenotypic features of these two diagnoses will aid understanding of their similarities and differences. Further understanding of fatigue in PCC in CYP may also be obtained by consideration of fatigue in PCC in adults. Meta-analyses indicate that one third of adults experience persistent fatigue following SARS-CoV-2 infection31, and that fatigue is associated with some non-modifiable factors, including female sex and age, and modifiable factors, such as mental health, specifically anxiety, depression and post-traumatic stress32. This study examined experiences of fatigue trajectories to enhance understanding of the natural progression of fatigue over time since SARS-CoV-2 infection, and future research is needed to explore direct and indirect (potentially bi-directional) pathways linking fatigue and mental health among CYP33 over time. The potential pathophysiological mechanisms for post-viral fatigue are currently unknown for both adults and CYP, but are likely to be multifactorial, resulting from the dysregulation of multiple systems in response to a particular trigger34.

This study has a number of strengths. We included a large sample of almost 1,000 CYP with definitive ascertainment of SARS-CoV-2 positive status at baseline, with follow-up to 24-months post-infection. We assessed fatigue using a valid and reliable scale, enabling robust characterisation of fatigue experiences and improving on the single-item assessment of fatigue/tiredness used in previous studies based on the CLoCk sample1,8 as well as others9,10. Our primary analyses used the CFQ bimodal scoring system and our findings were robust, as we saw similar overall results using the Likert-style scoring system.

This study also has several limitations. First, although we had definitive ascertainment of SARS-CoV-2 positive status at study invitation, we did not have reliable data on reinfections, as mass national testing ceased in early 2023. Therefore, we cannot comment on whether some participants’ fatigue was related to later infections. This is also why we did not include a comparison group of participants testing negative for SARS-CoV-2 at baseline. Including a comparison group would have improved the methodological rigour of our study11, but progression of the pandemic made this practically impossible given that many such participants may have been infected or developed antibodies at some point during follow-up35. Second, we do not know whether participants’ fatigue was related to any other health issues or life events11, what participants’ experiences of fatigue were between follow-ups, or whether fatigue was modified by vaccination. Although we have conducted four follow-ups to date, it is possible that the schedule was too infrequent to identify fluctuating symptom trajectories. Studies employing ecological momentary assessment could be valuable here; however, it is important to balance the burden of questionnaire completion for participants, and additional follow-ups might have led to greater attrition. Third, we included participants who responded to questionnaires at all follow-up time-points. This potentially induced some attrition and/or selection bias as we observed that, compared to all CLoCk participants who tested positive for SARS-CoV-2 at baseline, our study cohort contained more females and slightly fewer participants reporting learning difficulties. This sample has also previously been shown to include more females and least deprived CYP compared to the target population of CYP testing positive for SARS-CoV-2 between September 2020 and March 2021 in England15.
Possible reasons for these biases include, for example, that CYP with symptoms to report may have been more engaged with the study, and that those with learning difficulties may have needed additional support to complete the measures. However, it was reassuring that the distributions of baseline fatigue-related variables were almost identical between our study cohort and all baseline-positive CLoCk participants. Understanding the representativeness of the CLoCk sample, and developing and testing flexible weighting strategies, is an important area of ongoing research36. Fourth, we studied fatigue trajectories stratified by whether participants met the research definition for PCC. Given the definition used, we acknowledge that some participants meeting the PCC definition could be experiencing fatigue which was impacting their daily lives. Fifth, we described CFQ single-item responses, but it should be noted that the scale was not designed for single-item analysis and these responses were included solely to illustrate participants’ experiences of fatigue. Finally, some subgroup-defining variables, such as whether participants often felt very tired before the pandemic and their main symptoms at acute infection, were retrospectively reported 3-months post-infection and are subject to recall bias. Additionally, for pre-pandemic characteristics, we were unable to specify when, or for how long, before the pandemic these characteristics were present.

Persistent fatigue is prominent in CYP up to 24-months after SARS-CoV-2 infection. Subgroup differences in scores and trajectories highlight the need for targeted interventions. Single-item assessment of fatigue is a useful practical tool to detect potential severe fatigue.

Data from the Children and Young People with Long COVID (CLoCk) study are publicly available via the UK Data Service (ID: 9203).

  1. Pereira, S. M. P. et al. Natural course of health and well-being in non-hospitalised children and young people after testing for SARS-CoV-2: a prospective follow-up study over 12 months. Lancet Reg. Health – Europe https://doi.org/10.1016/j.lanepe.2022.100554 (2023).

  2. Lopez-Leon, S. et al. Long-COVID in children and adolescents: a systematic review and meta-analyses. Sci. Rep. 12(1), 9950. https://doi.org/10.1038/s41598-022-13495-5 (2022).

  3. Behnood, S. et al. Persistent symptoms are associated with long term effects of COVID-19 among children and young people: results from a systematic review and meta-analysis of controlled studies. PLOS ONE 18(12), e0293600. https://doi.org/10.1371/journal.pone.0293600 (2023).

  4. O’Mahoney, L. L. et al. The prevalence and long-term health effects of Long Covid among hospitalised and non-hospitalised populations: a systematic review and meta-analysis. eClinicalMedicine https://doi.org/10.1016/j.eclinm.2022.101762 (2023).

  5. Rimes, K. A. et al. Incidence, prognosis, and risk factors for fatigue and chronic fatigue syndrome in adolescents: a prospective community study. Pediatrics 119(3), e603–e609. https://doi.org/10.1542/peds.2006-2231 (2007).

  6. O’Mahoney, L. L. et al. The risk of Long Covid symptoms: a systematic review and meta-analysis of controlled studies. Nat. Commun. 16(1), 4249. https://doi.org/10.1038/s41467-025-59012-w (2025).

  7. Stephenson, T. et al. Long COVID and the mental and physical health of children and young people: national matched cohort study protocol (the CLoCk study). BMJ Open 11(8), e052838. https://doi.org/10.1136/bmjopen-2021-052838 (2021).

  8. Stephenson, T. et al. A 24-month national cohort study examining long-term effects of COVID-19 in children and young people. Commun. Med. 4(1), 1–12. https://doi.org/10.1038/s43856-024-00657-x (2024).

  9. Funk, A. L. et al. Post–COVID-19 Conditions among children 90 days after SARS-CoV-2 infection. JAMA Netw. Open 5(7), e2223253. https://doi.org/10.1001/jamanetworkopen.2022.23253 (2022).

  10. Borch, L., Holm, M., Knudsen, M., Ellermann-Eriksen, S. & Hagstroem, S. Long COVID symptoms and duration in SARS-CoV-2 positive children — a nationwide cohort study. Eur. J. Pediatr. 181(4), 1597–1607. https://doi.org/10.1007/s00431-021-04345-z (2022).

  11. Selvakumar, J. et al. Risk factors for fatigue severity in the post-COVID-19 condition: a prospective controlled cohort study of nonhospitalised adolescents and young adults. Brain Behav. & Immunity - Health 44, 100967. https://doi.org/10.1016/j.bbih.2025.100967 (2025).

  12. Chalder, T. et al. Development of a fatigue scale. J. Psychosom. Res. 37(2), 147–153. https://doi.org/10.1016/0022-3999(93)90081-P (1993).

  13. Cella, M. & Chalder, T. Measuring fatigue in clinical and community settings. J. Psychosom. Res. 69(1), 17–22. https://doi.org/10.1016/j.jpsychores.2009.10.007 (2010).

  14. Nugawela, M. D. et al. Data resource profile: the children and young people with long COVID (CLoCk) study. Int. J. Epidemiol. 53(1), dyad158. https://doi.org/10.1093/ije/dyad158 (2024).

  15. Nugawela, M. D. et al. Predicting post-COVID-19 condition in children and young people up to 24 months after a positive SARS-CoV-2 PCR-test: the CLoCk study. BMC Med. 22(1), 520. https://doi.org/10.1186/s12916-024-03708-1 (2024).

  16. International Severe Acute Respiratory and Emerging Infection Consortium. Paediatric COVID-19 follow-up questionnaire. Available: https://isaric.org/wp-content/uploads/2021/07/ISARIC-WHO-COVID-19-PAEDIATRIC-Initial-Survey_EN.pdf. Accessed Oct. 26 (2024).

  17. El-Gilany, A.-H. COVID-19 caseness: An epidemiologic perspective. J. Infect. Public Health 14(1), 61–65. https://doi.org/10.1016/j.jiph.2020.11.003 (2021).

  18. Kalfas, M. et al. Fatigue during the COVID-19 pandemic–prevalence and predictors: findings from a prospective cohort study. Stress 27(1), 2352117. https://doi.org/10.1080/10253890.2024.2352117 (2024).

  19. Gutiérrez-Peredo, G. B. et al. Self-reported fatigue by the chalder fatigue questionnaire and mortality in brazilian hemodialysis patients: the PROHEMO. Nephron 148(5), 292–299. https://doi.org/10.1159/000533472 (2023).

  20. Stavem, K., Ghanima, W., Olsen, M. K., Gilboe, H. M. & Einvik, G. Prevalence and determinants of fatigue after COVID-19 in non-hospitalized subjects: a population-based study. Int. J. Environ. Res. Public Health 18(4), 4. https://doi.org/10.3390/ijerph18042030 (2021).

  21. Stephenson, T. et al. Long COVID (post-COVID-19 condition) in children: a modified delphi process. Arch. Dis. Child 107(7), 674–680. https://doi.org/10.1136/archdischild-2021-323624 (2022).

  22. Wille, N. et al. Development of the EQ-5D-Y: a child-friendly version of the EQ-5D. Qual. Life Res. 19(6), 875–886. https://doi.org/10.1007/s11136-010-9648-y (2010).

  23. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/ (2024).

  24. Bates, D., Maechler, M., Bolker, B. & Walker, S. lme4: Linear mixed-effects models using “Eigen” and S4. R package version 1.1-36. https://doi.org/10.32614/CRAN.package.lme4.

  25. Pinto Pereira, S., Stephenson, T., Shafran, R. & Richards-Belle, A. CLoCk study. UK Data Service. https://doi.org/10.5255/UKDA-SN-9203-1.

  26. Faro, M. et al. Gender differences in chronic fatigue syndrome. Reumatol. Clin. 12(2), 72–77. https://doi.org/10.1016/j.reuma.2015.05.007 (2016).

  27. Nordin, Å., Taft, C., Lundgren-Nilsson, Å. & Dencker, A. Minimal important differences for fatigue patient reported outcome measures—a systematic review. BMC Med. Res. Methodol. 16, 62. https://doi.org/10.1186/s12874-016-0167-6 (2016).

  28. Linton, J. et al. Pre-pandemic prevalence of post COVID-19 condition symptoms in adolescents. Acta Paediatr. https://doi.org/10.1111/apa.70123 (2025).

  29. Siberry, V. G. R. & Rowe, P. C. Pediatric long COVID and myalgic encephalomyelitis/chronic fatigue syndrome: overlaps and opportunities. Pediatr. Infect. Dis. J. 41(4), e139. https://doi.org/10.1097/INF.0000000000003477 (2022).

  30. Patel, M. X., Smith, D. G., Chalder, T. & Wessely, S. Chronic fatigue syndrome in children: a cross sectional survey. Arch. Dis. Child. 88(10), 894–898. https://doi.org/10.1136/adc.88.10.894 (2003).

  31. Ceban, F. et al. Fatigue and cognitive impairment in Post-COVID-19 Syndrome: A systematic review and meta-analysis. Brain Behav. Immun. 101, 93–135. https://doi.org/10.1016/j.bbi.2021.12.020 (2022).

  32. Poole-Wright, K. et al. Fatigue outcomes following COVID-19: a systematic review and meta-analysis. BMJ Open 13(4), e063969. https://doi.org/10.1136/bmjopen-2022-063969 (2023).

  33. Panagi, L. et al. Mental health in the COVID-19 pandemic: A longitudinal analysis of the CLoCk cohort study. PLOS Med. 21(1), e1004315. https://doi.org/10.1371/journal.pmed.1004315 (2024).

  34. Poenaru, S., Abdallah, S. J., Corrales-Medina, V. & Cowan, J. COVID-19 and post-infectious myalgic encephalomyelitis/chronic fatigue syndrome: a narrative review. Ther. Adv. Infect. Dis. 8, 20499361211009384. https://doi.org/10.1177/20499361211009385 (2021).

  35. Office for National Statistics. COVID-19 schools infection survey, England. Available: https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/conditionsanddiseases/bulletins/covid19schoolsinfectionsurveyengland/pupilantibodiesandvaccinesentimentmarch2022. Accessed Sept. 25 (2025).

  36. Rojas, N. K. et al. Developing survey weights to ensure representativeness in a national, matched cohort study: results from the children and young people with Long Covid (CLoCk) study. BMC Med. Res. Methodol. 24(1), 1. https://doi.org/10.1186/s12874-024-02219-0 (2024).

We thank Michael Lattimore, UKHSA, Project Officer for the CLoCk study. All research at Great Ormond Street Hospital NHS Foundation Trust and UCL Great Ormond Street Institute of Child Health is made possible by the NIHR Great Ormond Street Hospital Biomedical Research Centre. We thank all of the participants and their families for taking the time to participate.

This work is independent research jointly funded by the National Institute for Health and Care Research (NIHR) and UK Research and Innovation (UKRI) [Children and young people with Long COVID (CLoCk) study, Reference COV-LT-0022]. EC is part-funded by the National Institute for Health and Care Research (NIHR) Maudsley Biomedical Research Centre (BRC). SMPP was supported by a UK Medical Research Council Senior Non-clinical Fellowship (ref: MR/Y009398/1). The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, UKRI or the Department of Health and Social Care.

NB: Multiple authors contributed to multiple aspects of this study. For example, both ARB and SMPP contributed to formal analysis, which is therefore listed after both their names.

ARB – conceptualisation, data curation, formal analysis, investigation, methodology, project administration, visualisation, writing – original draft, writing – review & editing
RSh – conceptualisation, data curation, funding acquisition, investigation, methodology, project administration, resources, supervision, writing – review & editing
NKR – data curation, investigation, writing – review & editing
TS – conceptualisation, data curation, funding acquisition, investigation, methodology, project administration, resources, supervision, writing – review & editing
EC – methodology, writing – review & editing
TC – funding acquisition, methodology, writing – review & editing
ED – funding acquisition, investigation, project administration, writing – review & editing
KM – data curation, writing – review & editing
RSi – data curation, writing – review & editing
SMPP – conceptualisation, data curation, formal analysis, funding acquisition, investigation, methodology, project administration, resources, supervision, writing – review & editing

ARB and SMPP directly accessed and verified the underlying data reported in the manuscript.

Correspondence to Alvin Richards-Belle or Snehal M. Pinto Pereira.

T.S. was Chair of the UK Health Research Authority and therefore recused himself from the Research Ethics Application. R.S. co-authored a book published in August 2020, titled Oxford Guide to Brief and Low Intensity Interventions for Children and Young People. All remaining authors reported no conflicts of interest.

Prescription drug prices are very likely to increase in Canada - CCPA

Canada has a drug price problem, and it risks getting a lot worse in the near future. Canadian prices for patented drugs are already the 4th highest amongst 31 countries in the Organization for Economic Cooperation and Development (OECD), and per capita spending, at US$990 per year, is also the 4th highest in the OECD. The latest report from Statistics Canada says that 11 per cent of men and over 15 per cent of women experienced cost-related non-adherence to prescription drugs because they lacked insurance.

Now, changes to the guidelines of the Patented Medicine Prices Review Board (PMPRB), tariff threats from the United States, and the upcoming renegotiation of CUSMA are posing new threats to drug prices and access to prescription drugs. On top of these threats, one of the main levers for mitigating the effects of higher prices—pharmacare—is coming under attack by the pharmaceutical industry, insurance companies and their ideological allies.

The situation is serious, and warrants immediate action by federal policymakers. Let’s take stock of the threats on the horizon.

The PMPRB is a federal agency that was set up back in 1987 to ensure that the prices for patented drugs are not “excessive.” One of the criteria it has historically used to decide what constitutes “excessive price” was to compare the proposed Canadian price for a new drug with the median price in 11 other high-income countries. Under new guidelines set to take effect on January 1, 2026, the Canadian price can be up to the highest—rather than the median—in those 11 countries. The current median price is 15 per cent below the Canadian price. The highest international price, which will be the new standard, is 21 per cent above the Canadian price.

If the drug is not available in any of the other countries when it arrives on the Canadian market, then the company will be able to price the drug at whatever level it wants and keep that price until its annual price review. The Executive Director of the PMPRB told the Globe and Mail that this pricing freedom would incentivize drugmakers to bring their products to the Canadian market first. 

What’s happening at the PMPRB should come as no surprise. From February 2023 until March 2025, the chair of the board of the PMPRB was Thomas Digby, who spent 25 years working in the pharmaceutical sector—including a stint at Novartis, a giant global company, where he specialized in enforcement of intellectual property rights for revenue generation. Taking over from Mr. Digby on an interim basis is the current vice-chair, Anie Perrault, whose recent roles before her PMPRB appointment included executive positions with adMare BioInnovations, BIOQuébec and Genome Canada. 

In the past, one of the factors that the PMPRB took into account was the additional therapeutic value of a new drug compared to what was already on the market—the lower the value, the lower the price. Under the new guidelines the therapeutic value of a new drug will no longer be a factor in determining its price. 

Federal, provincial and territorial health ministers, and senior officials who are authorized to represent Canadian publicly-funded drug programs, will also be able to make complaints about prices. According to the new guidelines, “Other parties who have concerns about the list prices… are encouraged to raise their concerns with their relevant Minister(s) of Health or Canadian publicly-funded drug program.” This advice is cold comfort for people working minimum wage jobs who aren’t covered by provincial and territorial drug plans and won’t have any access to their Minister of Health.

Prior to in-depth price reviews—a preparatory step to determine whether there should be a formal hearing to investigate if the price is excessive—only the manufacturer will be allowed to submit information to the PMPRB staff. Clinicians who prescribe the drug, patients who take it, and the people and organizations that pay for it will not have that same right. 

Tariffs imposed by Trump on forestry products, aluminum, steel and automobiles are already blamed for a rise in the unemployment rate from an average of 6.45 per cent in 2024 to 6.9 per cent in October 2025. Economists estimate that 600,000 to 2.4 million jobs could be at risk. About 55 per cent of Canadians are covered by employer-sponsored drug plans, but when workers get laid off, they also lose health benefits including prescription drug insurance tied to their jobs. Unsurprisingly, 23 per cent of those without insurance spent over $500 out-of-pocket in 2022 on prescription drugs compared to only 10 per cent for those with insurance. 

The lack of access to prescription drugs leads to premature deaths due to ischemic heart disease and diabetes, and avoidable deterioration in health in people aged 55 and over. When Canadians must choose between buying prescription drugs and paying for food and rent, it’s often no contest—patients skip their medications and suffer the consequences. The result is more physician visits (including to overcrowded emergency departments) and more hospital admissions.

Added to the threat of losing prescription drug coverage is the very real possibility that drug prices will increase. Nearly a third of the active pharmaceutical ingredients that go into North Americans’ medicines come from China. In the past, President Trump has threatened to slap anywhere from 100 to 200 per cent tariffs on Chinese drugs and drug ingredients. 

To the extent that drugs from China pass through the U.S. on their way to Canada, the cost of both publicly and privately funded drug plans will increase. Those at the bottom of the income scale—who pay out-of-pocket and can least afford to pay more—will be the ones who suffer most from higher prices. 

It is highly likely that pharmaceutical policy issues, particularly those related to intellectual property rights (IPR), will feature prominently in the renegotiation of CUSMA due to start in 2026. One of the consequences of stronger IPR will almost inevitably be the delay in the introduction of generic drugs and thus increased overall drug spending in Canada. Generic drugs are at least 45 per cent less expensive than the equivalent brand-name drug and make up over 75 per cent of prescriptions in Canada. 

Each year, the Office of the U.S. Trade Representative (USTR), an agency of the U.S. federal government responsible for developing and promoting U.S. foreign trade policies, prepares a report outlining U.S. views about the trade practices of other countries. In a submission to the USTR, the Pharmaceutical Research and Manufacturers of America (PhRMA, the U.S. lobby organization that represents the major pharmaceutical companies) made a series of complaints about “unfair and non-reciprocal [Canadian] trade practices.” 

PhRMA alleged that Canadian protection for regulatory data is not comparable to that offered in the U.S., that patent enforcement in Canada is weak, that Canada does not provide sufficient patent term restoration to compensate companies for lengthy delays in the development and regulatory approval processes, and that the method Canada uses to ensure that the prices of new patented drugs are not excessive is dysfunctional. The lobby group also alleged that the system of health technology assessment Canada uses requires excessive discounts from manufacturers, and that the time between when companies file a new drug application and when the drug is publicly funded is prolonged by bureaucratic barriers. 

The USTR report noted that “stakeholders have raised concerns on the limited duration, eligibility, and scope of [patent] protection in Canada’s system.” Subsequently, PhRMA’s submission to the USTR consultation on CUSMA demonstrated that the industry sees the review as an opportunity to strengthen IP protection and enforcement in Canada.

One of the benefits of a pharmacare plan that covers the entire Canadian population is that it confers single-buyer power on government buyers, be they the federal government alone or a combination of federal/provincial/territorial governments. “Monopsony buying power,” as it is known, is one of the main reasons why Australian prices for patented medicines are only 71 per cent of Canadian prices. 

However, ever since it started to be seriously proposed, pharmacare has come under attack from the pharmaceutical industry, the insurance industry and their ideological allies, such as the Fraser Institute and other free-market groups. Innovative Medicines Canada, the pharmaceutical industry’s lobby group in Canada, is pushing for a “fill-in-the-gaps” model, which would provide coverage for people who don’t have drug insurance but leave the system otherwise untouched. 

GreenShield, a member of the Canadian Life and Health Insurance Association (CLHIA), is helping to lead the insurance industry charge against pharmacare. In July 2023 it announced a pilot program that offered up to $1,000 in drug coverage to low-income Canadians who did not have public or private prescription drug insurance. 

In making the announcement, GreenShield’s chief executive Zahid Salman repeated the myth that 97 per cent of Canadians already have coverage. That’s theoretically true, but doesn’t reflect what happens in the real world. In Nova Scotia, if you pay anything less than 25 per cent of your gross family income for drugs, there’s no public coverage. In Manitoba, if your gross family income is above $75,000 annually, you need to pay 7.59 per cent of your income before you qualify for coverage: earn $75,000 and pay under $5,693, and there’s no coverage. According to reporting in The Breach, Denis Ricard, chair of the CLHIA board, claimed that “A fully one-payer national pharmacare is going to be a disaster for this country.” 
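To make the Manitoba threshold concrete, here is a minimal sketch of the deductible arithmetic cited above. The function name is invented for illustration, and the flat 7.59 per cent rate is a simplifying assumption taken from the figure in the text; the actual program applies banded rates to adjusted family income.

```python
def manitoba_deductible(gross_family_income: float, rate: float = 0.0759) -> float:
    """Approximate the income-based deductible: drug spending below
    roughly rate * income receives no public coverage (simplified
    single-rate model of the figure cited in the article)."""
    return round(gross_family_income * rate, 2)

# A family earning $75,000 would face a deductible of about $5,692.50
# before coverage kicks in, matching the article's ~$5,693 figure.
print(manitoba_deductible(75_000))
```

The point of the arithmetic is that "coverage" on paper can still leave a middle-income family thousands of dollars out of pocket each year before the public plan pays anything.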

Joining the battle against pharmacare was Brett Skinner, the CEO of the free-market Canadian Health Policy Institute. Skinner’s message was that a national government-run drug insurance program was not necessary and would be bad for patients and costly for taxpayers. He argued that if Canadians were faced with high deductibles, there are provincial programs to deal with them. 

The federal government’s recent policy changes have helped these attacks on pharmacare. At present, only three provinces (British Columbia, Manitoba and Prince Edward Island) and the Yukon have signed deals with the federal government for a very truncated form of pharmacare that covers only contraceptives and diabetes products. While Prime Minister Carney himself has equivocated about whether there are plans to include other provinces and territories, Minister of Health Marjorie Michel has been more blunt, saying that there are no new deals in the works. A report by an expert committee, commissioned by the Liberal government under Justin Trudeau, called for federal funding for a package of essential medicines that would cover all Canadian residents; Michel dismissed the report, saying it wasn’t binding on the government. 

Canada is facing a multi-pronged threat to its ability to ensure that prescription drugs are available to the population at affordable prices. Those threats are both internal and external and, if policymakers do not counter them, will mean that many more people are forced to choose between keeping themselves healthy and paying for food and housing. 

The rightward shift of the federal government has created new obstacles to opposing the threats. However, there is resistance. The Canadian Health Coalition will be lobbying on Parliament Hill for pharmacare and lower drug prices in February 2026. Individuals and groups should mount a strong push to make drug prices and pharmacare a major issue in the NDP leadership race. 

As Tommy Douglas once said, “Courage, my friends; ’tis not too late to build a better world.”

Parts of this article have previously been published by the Canadian Centre for Policy Alternatives and The Conversation. Deborah Gleeson, Brigitte Tenni and Ron Labonté all contributed to the section on CUSMA renegotiations.


Nanaimo city council votes down proposed drug policy letter

An incredulous Nanaimo City Councillor Hilary Eastmure responds to a comment from Nanaimo Mayor Leonard Krog during a debate on provincial drug policy during the Dec. 1, 2025 city council meeting. Screenshot courtesy of the City of Nanaimo.

Following presentations by Sarah Lovegrove, vice president of the Harm Reduction Nurses Association, and Beverly Planes, who spoke about her lived experience with substance use, Coun. Ian Thorpe made a motion to send a letter to the province about its drug policy.

The motion read: “That Council send a letter to the provincial government asking it to reexamine its philosophy regarding the ongoing drug addiction crisis and resulting mental health and street disorder issues – stating that the current policy of decriminalization and enabling drug use at consumption sites is failing to provide effective long term solutions for either those with addictions, or for neighbourhoods impacted by the problem.”

Lovegrove didn’t mince words in her presentation, saying that “politicized arguments like this motion driven by nimbyism and colonial privilege, undermine science, threaten public health and jeopardize the health and safety of our entire community.”

Planes, a mother of four, spoke to council about her lived experience and advocacy as a person who used drugs. 

“This motion is not about reexamining philosophy, it’s about how many funerals we are prepared to accept,” she said. 

Planes told council that every overdose reversed in an Overdose Prevention Site was “one less emergency call, one less crew doing CPR in a stairwell, and one less body in a bag.”

Removing the Overdose Prevention Site, Planes added, would not protect anyone but just shift the burden of responding to overdoses to the fire department. 

Thorpe said his motion was “not directly about the consumption site, to me that is a symbol of the government policy after 12 years of the opioid crisis that needs to be reexamined.” 

Thorpe added that the “root of the problem is substance abuse and drug addiction causing mental illness and resulting social disorder.”

Coun. Ben Geselbracht said that the motion was “throwing the overdose prevention site unnecessarily under the bus,” adding that overdose prevention sites were never meant to be long-term solutions to addiction.

“Their purpose is simple and urgent,” he said. “To prevent death.” 

Coun. Janice Perrino said that she has lost a family member to the overdose crisis and that while overdose prevention sites keep people alive, she supported Thorpe’s motion because “we are failing to provide effective long-term solutions for those with addictions.” 

Coun. Paul Manly told council that his cousin had died of a toxic drug overdose and spoke about the four pillars approach advocated by former Vancouver Mayor Larry Campbell that included harm reduction, education, treatment and enforcement. 

“Right now we’re standing on a pogo stick where we focused on harm reduction,” he said. 

Manly said what’s needed is a housing-first approach for people facing homelessness and addiction, and the focus of the motion did not reflect that. 

Mayor Leonard Krog supported Thorpe’s motion, acknowledging that the Overdose Prevention Site “does save lives but saving those lives for what purpose?”

“If there’s no treatment available, if we are not putting money at the other end, if we are not providing housing and supports, and if we are not prepared as a society to stop pussyfooting around and put people in secure, involuntary care in far greater numbers than has been suggested.” 

Coun. Hilary Eastmure said she was appalled by the mayor’s statement. 

“To what end?” she asked. “To save their lives.”

Eastmure said that people who use drugs deserve treatment and access to housing.

“Housing is health care and I think this is a big thing that all of this discussion is coming down to, and it’s just not reflected in this motion.” 

The motion to send the letter failed 3-5, with Mayor Krog and councillors Thorpe and Perrino voting for it, and councillors Geselbracht, Eastmure, Hemmens, Brown and Manly voting against it. Coun. Sheryl Armstrong, who attended the meeting electronically, was absent for the vote. 

The post Nanaimo city council votes down proposed drug policy letter appeared first on The Discourse.


Google Starts Sharing All Your Text Messages With Your Employer


Updated on Dec. 2 with Google’s response to the furor around this update.

Microsoft triggered a viral furor when it revealed a Teams update to tell your company when you’re not at work. Now Google has done the same. Forget end-to-end encryption. A new Android update means your RCS and SMS texts are no longer private.

As reported by Android Authority, “Google is rolling out Android RCS Archival on Pixel (and other Android) phones, allowing employers to intercept and archive RCS chats on work-managed devices. In simpler terms, your employer will now be able to read your RCS chats in Google Messages despite end-to-end encryption.”

This applies to work-managed devices and doesn’t affect personal devices. And in certain regulated industries it just adds RCS archiving to existing SMS archiving. But employees in regular organizations view texting as different from emailing, especially given the expectations around end-to-end encryption. That is no longer the case.


This underlines the widespread misunderstanding of end-to-end encryption. The security protects your messages when they’re being sent, but once they’re on your phone, they’re decrypted and available to anyone controlling the device.

Google says this is “a dependable, Android-supported solution for message archival, which is also backwards compatible with SMS and MMS messages as well. Employees will see a clear notification on their device whenever the archival feature is active.”

Suddenly, the perk of being given a phone at work is not as good as it might seem. While employees have long been aware of the risks of over-sharing on email, a woefully insecure technology that is easy for employers to monitor, texting has been seen as different. And this isn’t just for regulated industries. All organizations can play along.

Google says “this new capability, available on Google Pixel and other compatible Android Enterprise devices gives your employees all the benefits of RCS — like typing indicators, read receipts, and end-to-end encryption between Android devices — while ensuring your organization meets its regulatory requirements.”

In response to the furor around this update, Google told me "this update does not change or impact the privacy of personal devices. This is an optional feature for enterprise-managed work phones in regulated industries where employees are already notified that their communications are archived for compliance reasons."

There has long been a concern that employees have been turning to shadow IT systems to communicate with colleagues — WhatsApp and Signal in particular. This latest update won’t help make that situation any better.

There have been questions around whether Google’s new update means other messaging app content can also be archived and shared with employers, specifically secure apps including WhatsApp and Signal. The answer is no.


SMS and now RCS messaging is built into the phone’s OS itself, handled by Android (or iOS). Over-the-top platforms are not. They control their encryption and decryption. Their databases can be included in a general phone archive, but don’t need to be.

This is specific to general texting. “Previously,” Google says, “employers had to block the use of RCS entirely to meet these compliance requirements; this update simply allows organizations to support modern messaging — giving employees messaging benefits like high-quality media sharing and typing indicators — while maintaining the same compliance standards that already apply to SMS messaging."

Meanwhile, if you have a work-managed Android phone, watch for the message warning that your texts are no longer as private as they were.
