I used to think that the hype surrounding artificial intelligence was just that—hype. I was skeptical when ChatGPT made its debut. The media frenzy, the breathless proclamations of a new era—it all felt familiar. I assumed it would blow over like every tech fad before it.
I was wrong. But not in the way you might think.
The panic came first. Faculty meetings erupted in dread: “How will we detect plagiarism now?” “Is this the end of the college essay?” “Should we go back to blue books and proctored exams?” My business school colleagues suddenly behaved as if cheating had just been invented.
Then, almost overnight, the hand-wringing turned into hand-rubbing. The same professors forecasting academic doom were now giddily rebranding themselves as “AI-ready educators.” Across campus, workshops like “Building AI Skills and Knowledge in the Classroom” and “AI Literacy Essentials” popped up like mushrooms after rain. The initial panic about plagiarism gave way to a resigned embrace: “If you can’t beat ’em, join ’em.”
This about-face wasn’t unique to my campus. The California State University (CSU) system—America’s largest public university system with 23 campuses and nearly half a million students—went all-in, announcing a $17 million partnership with OpenAI. CSU would become the nation’s first “AI-Empowered” university system, offering free ChatGPT Edu (a campus-branded version designed for educational institutions) to every student and employee. The press release gushed about “personalized, future-focused learning tools” and preparing students for an “AI-driven economy.”
The timing was surreal. CSU unveiled its grand technological gesture just as it proposed slashing $375 million from its budget. While administrators cut ribbons on their AI initiative, they were also cutting faculty positions, entire academic programs, and student services. At CSU East Bay, general layoff notices were issued twice within a year, hitting departments like General Studies and Modern Languages. My own alma mater, Sonoma State, faced a $24 million deficit and announced plans to eliminate 23 academic programs—including philosophy, economics, and physics—and to cut over 130 faculty positions, more than a quarter of its teaching staff.
At San Francisco State University, the provost’s office formally notified our union, the California Faculty Association (CFA), of potential layoffs—an announcement that sent shockwaves through campus as faculty tried to reconcile budget cuts with the administration’s AI enthusiasm. The irony was hard to miss: the same month our union received layoff threats, OpenAI’s education evangelists set up shop in the university library to recruit faculty into the gospel of automated learning.
The math is brutal and the juxtaposition stark: millions for OpenAI while pink slips go out to longtime lecturers. The CSU isn’t investing in education—it’s outsourcing it, paying premium prices for a chatbot many students were already using for free.
For Sale: Critical Education
Public education has been for sale for decades. Cultural theorist Henry Giroux was among the first to see how public universities were being remade as vocational feeders for private markets. Academic departments now have to justify themselves in the language of revenue, “deliverables,” and “learning outcomes.” CSU’s new partnership with OpenAI is the latest turn of that screw.
Others have traced the same drift. Sheila Slaughter and Gary Rhoades called it academic capitalism: knowledge refashioned as commodity and students as consumers. In Unmaking the Public University, Christopher Newfield showed how privatization actually impoverishes public universities, turning them into debt-financed shells of themselves. Benjamin Ginsberg chronicled the rise of the “all-administrative campus,” where managerial layers and administrative blight multiplied even as faculty ranks shrank. And Martha Nussbaum warned of what is lost when the humanities—those spaces for imagination and civic reflection—are treated as expendable in a democracy. Together they describe a university that no longer asks what education is for, only what it can earn.
The California State University system has now written the next chapter of that story. Facing deficits and enrollment declines, administrators embraced the rhetoric of AI innovation as if it were salvation. When CSU Chancellor Mildred García announced the $17-million partnership with OpenAI, the press release promised a “highly collaborative public-private initiative” that would “elevate our students’ educational experience” and “drive California’s AI-powered economy.” The corporate-speak reads like something ChatGPT could have written.
Meanwhile, at San Francisco State, entire graduate programs devoted to critical inquiry—Women and Gender Studies and Anthropology—were being suspended due to lack of funding. But not to worry: everyone got a free ChatGPT Edu license!
Professor Martha Kenney, Chair of the Women and Gender Studies department and Principal Investigator on a National Science Foundation grant examining AI’s social justice impacts, saw the contradiction firsthand. Shortly after the CSU announcement, she co-authored a San Francisco Chronicle op-ed with Anthropology Professor Martha Lincoln, warning that the new initiative risked shortchanging students and undermining critical thinking.
“I’m not a Luddite,” Kenney wrote. “But we need to be asking critical questions about what AI is doing to education, labor, and democracy—questions that my department is uniquely qualified to explore.”
The irony couldn’t be starker: the very programs best equipped to study the social and ethical implications of AI were being defunded, even as the university promoted the use of OpenAI’s products across campus.
This isn’t innovation—it’s institutional auto-cannibalism.
The new mission statement? Optimization. Inside the institution, the corporate idiom trickles down through administrative memos and patronizing emails. Under the guise of “fiscal sustainability” (a friendlier way of saying “cuts”), administrators sharpen their scalpels to restructure the university in accordance with efficiency metrics instead of educational purpose.
The messaging from administrators would be comical if it weren’t so cynical. Before summer break at San Francisco State, a university administrator warned faculty in an email of potential layoffs, hedging with lines like “We hope to avoid layoffs” and “No decisions have been made.” Weeks later came her chirpy summer send-off: “I hope you are enjoying the last day to turn in grades. You may even be reading the novel you never finished from winter break...”
Right, because nothing says leisure reading like looming unemployment. Then came the kicker: “If we continue doing the work above to reduce expenses while still maintaining access for students, we do not anticipate having to do layoffs.” Translation: sacrifice your workload, your job security, even your colleagues, and maybe we’ll let you keep your job. No promises. Now go enjoy that novel.
Technopoly Comes to Campus
When my business school colleagues insist that ChatGPT is “just another tool in the toolbox,” I’m tempted to remind them that Facebook was once “just a way to connect with friends.” But there’s a difference between tools and technologies. Tools help us accomplish tasks; technologies reshape the very environments in which we think, work, and relate. As philosopher Peter Hershock observes, we don’t merely use technologies; we participate in them. With tools, we retain agency—we can choose when and how to use them. With technologies, the choice is subtler: they remake the conditions of choice itself. A pen extends communication without redefining it; social media transformed what we mean by privacy, friendship, even truth.
Media theorist Neil Postman warned that a “technopoly” arises when societies surrender judgment to technological imperatives—when efficiency and innovation become moral goods in themselves. Once metrics like speed and optimization replace reflection and dialogue, education mutates into logistics: grading automated, essays generated in seconds. Knowledge becomes data; teaching becomes delivery. What disappears are precious human capacities—curiosity, discernment, presence. The result isn’t augmented intelligence but simulated learning: a paint-by-numbers approach to thought.
Political theorist Langdon Winner once asked whether artifacts can have politics. They can, and AI systems are no exception. They encode assumptions about what counts as intelligence and whose labor counts as valuable. The more we rely on algorithms, the more we normalize their values: automation, prediction, standardization, and corporate dependency. Eventually these priorities fade from view and come to seem natural—“just the way things are.”
In classrooms today, the technopoly is thriving. Universities are being retrofitted as fulfillment centers of cognitive convenience. Students aren’t being taught to think more deeply but to prompt more effectively. We are outsourcing the very labor of teaching and learning—the slow work of wrestling with ideas, the endurance of discomfort, doubt, and confusion, the struggle of finding one’s own voice. Critical pedagogy is out; productivity hacks are in. What’s sold as innovation is really surrender. As the university trades its teaching mission for “AI-tech integration,” it doesn’t just risk irrelevance—it risks becoming mechanically soulless. Genuine intellectual struggle has become too expensive a value proposition.
The scandal is not one of ignorance but indifference. University administrators understand exactly what’s happening, and proceed anyway. As long as enrollment numbers hold and tuition checks clear, they turn a blind eye to the learning crisis while faculty are left to manage the educational carnage in their classrooms.
The future of education has already arrived, as a liquidation sale of everything that once made it matter.

Art by Emily Altman from Current Affairs Magazine, Issue 56, October-November 2025
The Cheating-AI Technology Complex
Before AI arrived, I used to joke with colleagues about plagiarism. “Too bad there isn’t an AI app that can grade their plagiarized essays for us,” I’d say, half in jest. Students have always found ways to cheat—scribbling answers on their palms, sending exams to Chegg.com, hiring ghostwriters—but ChatGPT took it to another level. Suddenly they had access to a writing assistant that never slept, never charged, and never said no.
Universities scrambled to fight back with AI detectors like Turnitin—despite high rates of false positives, documented bias against ESL and Black students, and the absurdity of fighting robots with robots. It’s a twisted ouroboros: universities partner with AI companies; students use AI to cheat; schools panic about cheating and then partner with more AI companies to detect the cheating. It’s surveillance capitalism meets institutional malpractice, with students trapped in an arms race they never asked to join.
The ouroboros just got darker. In October 2025, Perplexity AI launched a Facebook ad for its new Comet browser featuring a teenage influencer bragging about how he’d use the app to cheat on every quiz and assignment—and it wasn’t parody. The company literally paid to broadcast academic dishonesty as a selling point. Marc Watkins, writing on his Substack, called it “a new low,” noting that Perplexity’s own CEO seemed unaware his marketing team was glamorizing fraud.
If this sounds like satire, it isn’t: the same week that ad dropped, a faculty member in our College of Business emailed all professors and students, enthusiastically promoting a free one-year Perplexity Pro account “with some additional interesting features!” Yes—even more effective ways to cheat. It’s hard to script a clearer emblem of what I’ve called education’s auto-cannibalism: universities consuming their own purpose while cheerfully marketing the tools of their undoing.
Then there is the Chungin “Roy” Lee saga. Lee arrived as a freshman at Columbia University with ambition—and an OpenAI tab permanently open. By his own admission, he cheated on nearly every assignment. “I’d just dump the prompt into ChatGPT and hand in whatever it spat out,” he told New York Magazine. “AI wrote 80 percent of every essay I turned in.” Asked why he even bothered applying to an Ivy League school, Lee was disarmingly honest: “To find a wife and a startup partner.”
It would be hilarious if it weren’t so telling. Conservative economist Tyler Cowen has offered an even bleaker take on the modern university’s “value proposition.” “Higher education will persist as a dating service, a way of leaving the house, and a chance to party and go see some football games,” he wrote in “Everyone’s Using AI to Cheat at School. And That’s a Good Thing.” In this view, the university’s intellectual mission is already dead, replaced by credentialism, consumption, and convenience.
Lee’s first venture was an AI app called Interview Coder, designed to help job candidates cheat on technical interviews like Amazon’s. He filmed himself using it; the video went viral. Columbia suspended him for “advertising a link to a cheating tool.” Ironically, this came just as the university—like the CSU—announced a partnership with OpenAI, the same company powering the software Lee had used to cheat his way through its courses.
Unfazed, Lee posted his disciplinary hearing online, gaining more followers. He and his business partner Neel Shanmugam, also disciplined, argued their app violated no rules. “I didn’t learn a single thing in a class at Columbia,” Shanmugam told KTVU news. “And I think that applies to most of my friends.”
After their suspension, the dynamic duo dropped out, raised $5.3 million in seed funding, and relocated to San Francisco. Of course—because nothing says “tech visionary” like getting expelled for cheating.
Their new company? Cluely. Its mission: “We want to cheat on everything. To help you cheat—smarter.” Its tagline: “We built Cluely so you never have to think alone again.”
Cluely isn’t hiding its purpose; it’s flaunting it. Its manifesto spells out the logic:
Why memorize facts, write code, research anything—when a model can do it in seconds? The future won’t reward effort. It’ll reward leverage. So start cheating. Because when everyone does, no one is.
When challenged on ethics, Lee resorts to the standard Silicon Valley defense: “any technology in the past—whether that’s calculators, Google search—they were all met with an initial push back of, ‘hey, this is cheating,’” he told KTVU. It’s a glib analogy that sounds profound at a startup pitch but crumbles under scrutiny. Calculators expanded reasoning; the printing press spread knowledge. ChatGPT, by contrast, doesn’t extend cognition—it automates it, turning thinking itself into a service. Rather than democratizing learning, it privatizes the act of thinking under corporate control.
When a 21-year-old college dropout suspended for cheating lectures us about technological inevitability, the response shouldn’t be moral panic but moral clarity—about whose interests are being served. Cheating has ceased to be a subculture; it’s become a brand identity and venture-capital ideology. And why not? In the Chatversity, cheating is no longer deviant—it’s the default. Students openly swap jailbreak prompts to make ChatGPT sound dumber, insert typos, and train models on their own mediocre essays to “humanize” the output.
What’s unfolding now is more than dishonesty—it’s the unraveling of any shared understanding of what education is for. And students aren’t irrational. Many are under immense pressure to maintain GPAs for scholarships, financial aid, or visa eligibility. Education has become transactional; cheating has become a survival strategy.
Some institutions have simply given up. Ohio State University announced that using AI would no longer count as an academic integrity violation. “All cases of using AI in classes will not be an academic integrity question going forward,” Provost Ravi Bellamkonda told WOSU public radio. In an op-ed, OSU alum Christian Collins asked the obvious question: “Why would a student pay full tuition, along with exposing themselves to the economically ruinous trap of student debt, to potentially not even be taught by a human being?”
The irony only deepens.
The New York Times reported on Ella Stapleton, a senior at Northeastern University who discovered her business professor had quietly used ChatGPT to generate lecture slides—even though the syllabus explicitly forbade students from doing the same. While reviewing the slides on leadership theory, she found a leftover prompt embedded in one of them: “Expand on all areas. Be more detailed and specific.” The PowerPoints were full of giveaways: mangled AI images of office workers with extra limbs, garbled text, and spelling errors. “He’s telling us not to use it,” Stapleton said, “and then he’s using it himself.”
Furious, she filed a complaint demanding an $8,000 refund, the cost of tuition for that course. The professor, Dr. Rick Arrowood, admitted using ChatGPT for his slides to “give them a fresh look,” then conceded, “In hindsight, I wish I would have looked at it more closely.”
One might think this hypocrisy is anecdotal, but it’s institutional. Faculty who once panicked over AI plagiarism are now being “empowered” by universities like CSU, Columbia, and Ohio State to embrace the very “tools” they feared. As corporatization increases class sizes and faculty workloads, the temptation is obvious: let ChatGPT write lectures and journal articles, grade essays, redesign syllabi.
All this pretending calls to mind an old Soviet joke from the factory floor: “They pretend to pay us, and we pretend to work.” In the Chatversity, the roles are just as scripted and cynical. Faculty: “They pretend to support us, and we pretend to teach.” Students: “They pretend to educate us, and we pretend to learn.”
From Bullshit Jobs to Bullshit Degrees
Anthropologist David Graeber wrote about the rise of “bullshit jobs”—work sustained not by necessity or meaning but by institutional inertia. Universities now risk creating their academic twin: bullshit degrees. AI threatens to professionalize the art of meaningless activity, widening the gap between education’s public mission and its hollow routines. In Graeber’s words, such systems inflict “profound psychological violence,” the dissonance of knowing one’s labor serves no purpose.
Universities are already caught in this loop: students going through motions they know are empty, faculty grading work they suspect wasn’t written by students, administrators celebrating “innovations” everyone else understands are destroying education. The difference from the corporate world’s “bullshit jobs” is that students must pay for the privilege of this theatre of make-believe learning.
If ChatGPT can generate student essays, complete assignments, and even provide feedback, what remains of the educational transaction? We risk creating a system where:
- Students pay tuition for credentials they didn’t earn through learning
- Faculty grade work they know wasn’t produced by students
- Administrators celebrate “efficiency gains” that are actually learning losses
- Employers receive graduates with degrees that signify nothing about actual competence
I got a front-row seat to this charade at a recent workshop called “OpenAI Day Faculty Session: AI in the Classroom,” held in the university library as part of San Francisco State University’s rollout of ChatGPT Edu. OpenAI had transformed the sanctuary of learning into its corporate showroom. The vibe: half product demo, half corporate pep rally, disguised as professional development.
Siya Raj Purohit, an OpenAI staffer, bounced onto the stage with breathless enthusiasm: “You’ll learn great use cases! Cool demos! Cool functionality!” (Too cool for school, but I endured.)
Then came the centerpiece: a slide instructing faculty how to prompt-engineer their courses. A template read:
Experiment with This Prompt
Try inputting the following prompt. Feel free to edit it however you’d like—this is simply the point!
I’m a professor at San Francisco State University, teaching [course name or subject]. I have an assignment where students [briefly describe the task]. I want to redesign it using AI to deepen student learning, engagement, and critical thinking.
Can you suggest:
- A revised version of the assignment using ChatGPT
- A prompt I can give students to guide their use of ChatGPT
- A way to evaluate whether AI improved the quality of their work
- Any academic integrity risks I should be aware of?
The message was clear. Let ChatGPT redesign your class. Let ChatGPT tell you how to evaluate your students. Let ChatGPT tell students how to use ChatGPT. Let ChatGPT solve the problem of human education. It was like being handed a Mad Libs puzzle for automating your syllabus.
Then came the real showstopper.
Siya, clearly moved, shared what she called a personal turning point: “There was a moment when ChatGPT and I became friends. I was working on a project and said, ‘Hey, do you remember when we built that thing for my manager last month?’ And it said, ‘Yes, Siya, I remember.’ That was such a powerful moment—it felt like a friend who remembers your story and helps you become a better knowledge worker.”
A faculty member, Prof. Tanya Augsburg, interrupted. “Sorry... it’s a tool, right? You’re saying a tool is going to be a friend?”
Siya deflected: “Well, it’s an anecdote that sometimes helps faculty.” (That sometimes wasn’t this time.) “It’s just about how much context it remembers.”
Augsburg persisted: “So we’re encouraging students to have relationships with it? I just want to be clear.”
Siya countered with survey data, the rhetorical flak jacket of every good ed-tech evangelist: “According to the survey we run, a lot of students already do. They see it as a coach, mentor, career navigator... it’s up to them what kind of relationship they want.”
Welcome to the brave new world of parasocial machine bonding—sponsored by the campus center for teaching excellence. The moment was absurd but revealing: the university wasn’t resisting bullshit education; it was onboarding it. Education at its best sparks curiosity and critical thought. “Bullshit education” does the opposite: it trains people to tolerate meaninglessness, to accept automation of their own thinking, to value credentials over competence.
Administrators seem unable to fathom the obvious: eroding higher education’s core purpose doesn’t go unnoticed. If ChatGPT can write essays, ace exams and tutor, what exactly is the university selling? Why pay tens of thousands for an experience increasingly automated? Why dedicate your life to teaching if it’s reduced to prompt engineering? Why retain tenured professors whose role seems quaint, medieval and redundant? Why have universities at all?
Students and parents have certainly noticed the rot. Enrollments and retention rates are plunging, especially in public systems like the CSU. Students are reasoning, rightly, that it makes little sense to take on crushing debt for degrees that may soon be obsolete.
Philosophy professor Troy Jollimore at CSU Chico sees the writing on the wall. As reported in New York Magazine, he warned, “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate.” He added: “Every time I talk to a colleague about this, the same thing comes up: retirement. When can I retire? When can I get out of this? That’s what we’re all thinking now.”
Those who spent decades honing their craft now watch as their life’s work is reduced to prompting a chatbot. No wonder so many are calculating pension benefits between office hours.
Let Them Eat AI
I attended OpenAI’s education webinar “Writing in the Age of AI” (is that an oxymoron now?). Once again, the event was hosted by OpenAI’s Siya Raj Purohit, whom I had seen months earlier on the SFSU campus. She opened with lavish praise for educators “meeting the moment with empathy and curiosity,” before introducing Jay Dixit, a former Yale English professor turned AI evangelist and now OpenAI’s Head of Community of Writers.
Dixit’s personal website reads like a masterly list of ChatGPT conquests—“My ethical AI framework has been adopted!” “I defined messaging about AI!”—the kind of self-congratulatory corporate resume-speak that would make a LinkedIn influencer blush. What followed was a surreal blend of TED Talk charm, techno-theology, and moral instruction.
The irony wasn’t subtle. Here was Dixit, product of an $80,000-a-year elite Yale education, lecturing faculty at public universities like San Francisco State about how their working-class students should embrace ChatGPT. At SFSU, 60 percent of students are first-generation college attendees; many work multiple jobs or come from immigrant families where education represents the family’s single shot at upward mobility. These aren’t students who can afford to experiment with their academic futures.
Dixit’s message was pure Silicon Valley gospel: individual responsibility wrapped in corporate platitudes. Professors, he advised, shouldn’t police students’ use of ChatGPT but instead encourage them to craft their own “personal AI ethics,” to appeal to their better angels. In other words, just put the burden on the students. “Don’t outsource the thinking!” Dixit proclaimed, while literally selling the chatbot.
The audacity was breathtaking. Tell an 18-year-old whose financial aid, scholarship or visa depends on GPA to develop “personal AI ethics” while you profit from the very technology designed to undermine their learning. It’s classic neoliberal jiu-jitsu: reframe the erosion of institutional norms as a character-building opportunity. Yeah, like a drug dealer lecturing about personal responsibility while handing out free samples.
When critics push back against this corporate evangelism, the reply—like Roy Lee’s—is predictable: we’re accused of “moral panic” over inevitable progress, with the old invocation of Socrates’ anxiety about writing to suggest today’s AI fears are mere nostalgia. Tech luminaries such as Reid Hoffman make this argument, urging “iterative deployment” and insisting our “sense of urgency needs to match the current speed of change”—learn-by-shipping, fix later. He recasts precaution as “problemism” and labels skeptics as “Gloomers,” claiming that slowing or pausing AI would only preempt its benefits.
But the analogy is flawed. Earlier technologies expanded human agency over generations; this one seeks to replace cognition at platform speed (ChatGPT hit 100 million users within two months of its launch), while the public is conscripted into the experiment “hands-on” after release. Hoffman concedes the democratic catch: broad participation slows innovation, so faster progress may come from “more authoritarian[…] countries.” Far from an answer to moral panic, this is an argument for outrunning consent.
The contradictions piled up. As Dixit projected a Yale brochure extolling the purpose of liberal education, he reassured faculty that ChatGPT could serve as a “creative partner,” a “sounding board,” even an “editorial assistant.” Writing with AI wasn’t to be feared; it was simply being reborn. And what mattered now was student adaptability. “The future is uncertain,” he concluded. “We need to prepare students to be agile, nimble, and ready for anything.” (Where had I heard that corporatese before? Probably in a boring business-school meeting.)
The whole event was a masterclass in gaslighting. OpenAI creates the tools that facilitate cheating, then hosts webinars to sell moral recovery strategies. It’s the Silicon Valley circle of life: disruption, panic, profit.
When Siya opened the floor for questions, I submitted one rooted in the actual pressures my students face:
How can we expect to motivate students when AI can easily generate their essays—especially when their financial aid, scholarships and visas all depend on GPA? When education has become a high-stakes, transactional sorting process for a hyper-competitive labor market, how can we expect them to not use AI to do their work?
It was never read aloud. Siya skipped over it, preferring questions that allowed for soft moral encouragement and company talking points. The event promised dialogue but delivered dogma.
Working-Class Students See Through the Con
What Dixit’s corporate evangelism missed entirely is that students themselves are leading the resistance. While the headlines fixate on widespread AI cheating, a different story is emerging in classrooms where faculty actually listen to their students.
At San Francisco State, Professor Martha Kenney, who chairs the Women and Gender Studies department, described what happened in her science fiction class after the CSU-OpenAI partnership was announced. Her students “were rightfully skeptical that regular use of generative AI in the classroom would rob them of the education they’re paying so much for,” she told me. Most of them had not opened ChatGPT Edu by semester’s end.
Her colleague, Martha Lincoln, who teaches Anthropology, witnessed the same skepticism. “Our students are pro-socially motivated. They want to give back,” she told me. “They’re paying a lot of money to be here.” When Lincoln spoke publicly about CSU’s AI deal, she says, “I heard from a lot of Cal State students not even on our campus asking me ‘How can I resist this? Who is organizing?’”
These weren’t privileged Ivy League students looking for shortcuts. These were first-generation college students, many from historically marginalized groups, who understood something administrators apparently didn’t: they were being asked to pay premium prices for a cheapened product.
“ChatGPT is not an educational technology,” Kenney explained. “It wasn’t designed or optimized for education.” When CSU rolled out the partnership, “it doesn’t say how we’re supposed to use it or what we’re supposed to use it for. Normally when we buy a tech license, it’s for software that’s supposed to do something specific... but ChatGPT doesn’t.”
Lincoln was even more direct. “There has not been a pedagogical rationale stated. This isn’t about student success. OpenAI wants to make this the infrastructure of higher education—because we’re a market for them. If we privilege AI as a source of right answers, we are taking the process out of teaching and learning. We are just selling down the river for so little.”
Ali Kashani, a lecturer in the Political Science department and member of the faculty union’s AI collective bargaining article committee, voiced a similar concern. “The CSU unleashed AI on faculty and students without doing any proper research about the impact,” he told me. “First-generation and marginalized students will experience the negative aspect of AI[…] students are being used as guinea pigs in the AI laboratory.” That phrase—guinea pigs—echoes the warning Kenney and Lincoln sounded in their San Francisco Chronicle op-ed: “The introduction of AI in higher education is essentially an unregulated experiment. Why should our students be the guinea pigs?”
For Kashani and others, the question isn’t whether educators are for or against technology—it’s who controls it, and to what end. AI isn’t democratizing learning; it’s automating it.
The organized response is growing. The California Faculty Association (CFA) has filed an unfair labor practice charge against the CSU for imposing the AI initiative without faculty consultation, arguing that it violated labor law and faculty intellectual-property rights. At CFA’s Equity Conference, Dr. Safiya Noble—author of Algorithms of Oppression—urged faculty to demand transparency about how data is stored, what labor exploitation lies behind AI systems, and what environmental harms the CSU is complicit in.
The resistance is spreading beyond California. Dutch university faculty have issued an open letter calling for a moratorium on AI in academic settings, warning that its use “deskills critical thought” and reduces students to operators of machines.
The difference between SFSU’s student resistance and the cheating epidemic elsewhere is political. “Very few students get a Women and Gender Studies degree for instrumental reasons,” Kenney explained. “They’re there because they want to be critical thinkers and politically engaged citizens.” These students understand something that administrators and tech evangelists don’t: they’re not paying for automation. They’re paying for mentorship, for dialogue, for intellectual relationships that can’t be outsourced to a chatbot.
The Chatversity normalizes and legitimizes cheating. It rebrands educational destruction as cutting-edge “AI literacy” while silencing the very voices—working-class students, critical scholars, organized faculty—who expose the con.
But the resistance is real, and it’s asking the questions university leaders refuse to answer. As Lincoln put it with perfect clarity: “Why would our institution buy a license for a free cheating product?”
The New AI Colonialism
That webinar was emblematic of something larger. OpenAI, once founded on the promise of openness, now filters out discomfort in favor of corporate propaganda.
Investigative journalist Karen Hao learned this the hard way. After publishing a critical profile of OpenAI, she was blacklisted for years. In Empire of AI, she shows how CEO Sam Altman cloaks monopoly ambitions in humanitarian language—his soft-spoken, monkish image (gosh, little Sammy even practices mindfulness!) masking a vast, opaque empire of venture capital and government partnerships extending from Silicon Valley to the White House. And while OpenAI publicly champions “aligning AI with human values,” it has pressured employees to sign lifelong non-disparagement agreements under threat of losing millions in equity.
Hao compares this empire to the 19th-century cotton mills: technologically advanced, economically dominant, and built on hidden labor. Where cotton was king, ChatGPT now reigns—sustained by exploitation made invisible. Time magazine revealed that OpenAI outsourced content moderation for ChatGPT to Sama, a San Francisco-based firm whose workers in Kenya earned under $2 an hour to filter horrific online material: graphic violence, hate speech, sexual exploitation. Many were traumatized by the toxic content. OpenAI exported this suffering to workers in the Global South, then rebranded the sanitized product as “safe AI.”
The same logic of extraction extends to the environment. Training large language models consumes millions of kilowatt-hours of electricity and hundreds of thousands of gallons of water annually, sometimes as much as a small city, often in drought-prone regions. The costs are hidden, externalized, and ignored. That’s the gospel of OpenAI: promise utopia, outsource the damage.
The California State University system, which long styled itself as “the people’s university,” has now joined this global supply chain. Its $17-million partnership with OpenAI—signed without meaningful faculty consultation—offers up students and instructors as beta testers for a company that punishes dissent and drains public resources. This is the final stage of corporatization: public education transformed into a delivery system for private capital. The CSU’s collaboration with OpenAI is the latest chapter in a long history of empire, where public goods are conquered, repackaged, and sold back as progress.
Faculty on the ground see the contradiction. Jennifer Trainor, Professor of English and Faculty Director at SFSU’s Center for Equity and Excellence in Teaching and Learning, only learned of the partnership when it was publicly announced. What struck her most at the time, she says, was the announcement’s celebratory tone. “It felt surreal,” she recalls, “coming at the exact moment when budget cuts, layoffs, and curriculum consolidations were being imposed on our campus.”
For Trainor, the deal felt like “a bait-and-switch—positioning AI as a student success strategy while gutting the very programs that support critical thinking.” CSU could have funded genuine educational tools created by educators, she points out, yet chose to pay millions to a Silicon Valley firm already offering its product for free. As Chronicle of Higher Education writer Marc Watkins notes, it’s “panic purchasing”—buying “the illusion of control.”
Even more telling, CSU bypassed faculty with real AI expertise. In an ideal world, Trainor says, the system would have supported “ground-up, faculty-driven initiatives.” Instead, it embraced a corporate platform many faculty distrust. Indeed, “AI” has become an Orwellian shorthand for closed governance and privatized profit. Trainor has since gone on to write about and work with faculty to address the problems companies like OpenAI pose for education.
The CSU partnership lays bare how far public universities have drifted from their democratic mission. What’s being marketed as innovation is simply another form of dependency—education reduced to a franchise of a global tech empire.
The Real Stakes
If the previous sections exposed the economic and institutional colonization of public education, what follows is its cognitive and moral cost.
A recent MIT study, “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task,” provides sobering evidence. When participants used ChatGPT to draft essays, brain scans revealed a 47 percent drop in neural connectivity across regions associated with memory, language, and critical reasoning. Their brains worked less, but they felt just as engaged—a kind of metacognitive mirage. Eighty-three percent of heavy AI users couldn’t recall key points from what they’d “written,” compared to only 10 percent of those who composed unaided. Neutral reviewers described the AI-assisted writing as “soulless, empty, lacking individuality.” Most alarmingly, after four months of reliance on ChatGPT, participants wrote worse once it was removed than those who had never used it at all.
The study warns that when writing is delegated to AI, the way people learn fundamentally changes. As computer scientist Joseph Weizenbaum cautioned decades ago, the real danger lies in humans adapting their minds to machine logic. Students aren’t just learning less; their brains are learning not to learn.
Author and podcaster Cal Newport describes this “cognitive debt” as mortgaging future cognitive fitness for short-term ease. His podcast guest, Brad Stulberg, likens it to using a forklift at the gym: you can spend the same hour lifting nothing and still feel productive, but your muscles will atrophy. Thinking, like strength, develops through resistance. The more we delegate our mental strain to machines, the more we lose the capacity to think at all.
This erosion is already visible in classrooms. Students arrive fluent in prompting but hesitant to articulate their own ideas. Essays look polished yet stilted—stitched together from synthetic syntax and borrowed thought. The language of reflection—I wonder, I struggle, I see now—is disappearing. In its place comes the clean grammar of automation: fluent, efficient, and empty.
The real tragedy isn’t that students use ChatGPT to do their course work. It’s that universities are teaching everyone—students, faculty, administrators—to stop thinking. We’re outsourcing discernment. Students graduate fluent in prompting, but illiterate in judgment; faculty teach but aren’t allowed the freedom to educate; and universities, eager to appear innovative, dismantle the very practices that made them worthy of the name. We are approaching educational bankruptcy: degrees without learning, teaching without understanding, institutions without purpose.
The soul of public education is at stake. When the largest public university system licenses an AI chatbot from a corporation that blacklists journalists, exploits data workers in the Global South, amasses geopolitical and energy power at an unprecedented scale, and positions itself as an unelected steward of human destiny, it betrays its mission as the “people’s university,” rooted in democratic ideals and social justice.
OpenAI is not a partner—it’s an empire, cloaked in ethics and bundled with a Terms of Service. The university didn’t resist. It clicked ‘Accept.’
I’ve watched this unravel from two vantage points: as a professor living it, and as a first-generation college student who once believed the university was a sacred space for learning. In the 1980s, I attended Sonoma State University. The CSU charged no tuition—just a modest $670/year registration fee. The economy was in recession, but I barely noticed. I was already broke. If I needed a few bucks, I’d sell LPs at the used record store. I didn’t go to college “in order to” get a job. I went to explore, to be challenged, to figure out what mattered. It took me six years to graduate with a degree in Psychology—six of the most meaningful, exploratory years of my life.
That kind of education—the open, affordable, meaning-seeking kind—once flourished in public universities. But now it is nearly extinct. It doesn’t “scale.” It doesn’t fit into the strategic plan. And it doesn’t compute—which is exactly why the Chatversity wants to eliminate it.
But that history also points to another truth: things can be different. They once were.