As an English composition instructor, I’m prone to doomscrolling articles (written primarily by other English composition instructors) about the uses, advantages and dangers of large language models in college classrooms. I think my colleagues, focused on concerns about plagiarism, policing and the tenability of our own employment (all pressing issues in their own right), may be ignoring the greater threat that text-generation technology poses to our democratic institutions, the judgment of our electorate and the competence of our workforce.
In one of the articles I found myself scrolling, John Villasenor, writing for the Brookings Institution, suggested that LLMs would lead to “the democratization of good writing.” I was surprised to see that description. The ChatGPT-produced assignments I see on a weekly basis are rarely mistakable for “good.”
More to the point, research suggests that uniform style and content will not produce a level playing field of competent writers but, more likely, a ceiling of barely capable thinkers. In a study published by Nature Human Behaviour, researchers discovered that “reliance on ChatGPT … reduces the diversity of ideas in a pool of ideas,” to the degree that “94 percent of ideas from those who used ChatGPT ‘shared overlapping concepts.’”
Academics like Vered Shwartz at the University of British Columbia have also raised concerns that if North American models “assume the set of values and norms associated with western or North American culture, their information for and about people from other cultures might be inaccurate and discriminatory.”
A diversity of perspective, experience, talent and know-how is required to run and maintain a healthy democratic society. That civic diversity cannot be replicated by machines, and it would be severely damaged by voter rolls consisting of former students educated in the art of outsourcing their mental faculties to chatbots.
AI proponents are quick to point out historical instances of educators running about like headless chickens at the invention of the keyboard, the calculator, even pen and ink. I would go further: Socrates opposed the act of writing itself, which he believed would “introduce forgetfulness into the soul of those who learn it.”
We remember this quote because Plato wrote it down. It is a fallacy to conflate those examples with the full replacement (sometimes called “assistance”) of human thought by LLMs. Pens, keyboards and even Gutenberg’s printing press democratized writing in that they made it simpler for a greater number of human beings to convey themselves. In contrast, “AI” technology does not make writing easier for writers: At best it makes them readers—at worst, copy-pasters. LLMs pull words from data centers filled with the ideas of other writers, whose work is to a large degree not credited or paid for (even children will tell you that’s called theft). The result is akin to regurgitated vomit.
To create this essay, I relied on Microsoft Word’s red squiggly lines to spot my misspellings. I’ve always been a poor speller (ask my middle school teachers), but the words I produce, the mistakes I make, are still my own, and the reason I make them is tied to my human experience as a communicator. Whether I learn from those mistakes or simply press “fix” and doom myself to repeat them is a conscious choice I make every time I write.
Of course, the choices don’t end there. I also decided how to approach the topic; what references to pull; how to order my paragraphs (both before and after I wrote them); what idioms, metaphors and introductory language to use; where to place hooks and callbacks; what to title the piece; and how to utilize grammar and punctuation to express my sassy indignation. These are vital skills for students to practice, not because they’re required in every profession, but because they emphasize executive function and cognitive reasoning. Writers are responsible for what they write, speakers for what they say, leaders for what they decide and voters for whom they elect. This in and of itself is reason enough to teach actual writing.
Another common argument of writers who unironically propose supplanting their own perspectives with generative AI summaries is that the traditional method of teaching writing caters to what Villasenor calls an “inherently elitist” system. To their credit, this is true. In his guide Writing With Power (Oxford, 1998), the esteemed rhetoric and composition professor Peter Elbow, who passed away earlier this year, explained,
“Grammar is glamour … the two words just started out as two pronunciations of the same word … If you knew grammar you were special … But now, with respect to grammar, you are only special if you lack it. Writing without errors doesn’t make you anything, but writing with errors … makes you a hick, a boob, a bumpkin.”
The fact that we have raised the bar for ourselves is a sign of intellectual progress. Yes, gaps continue to exist (I taught ESL for several years), but I wouldn’t be so quick to concede the higher ground of achievement. Besides, while knowing one’s split infinitives and dangling modifiers is not a prerequisite for civic engagement, an innate, perhaps unconscious understanding of collective grammar norms is still required for reading, and this is true for every written language, in all of its forms (including memes and text messages).
We should be wary of the faux-populist sentiment behind arguments like this. A willful naïveté is required, I think, to suggest that the products of LLM parent companies (Google, Meta, OpenAI, Microsoft) foster equitable principles.
Even more dangerous is the tactic of deriding the abilities and wisdom of specialists and academics who seek rare and valuable knowledge. This has been and remains a frequent trick of authoritarians, which is why educators should be concerned by the visibly cozy relationship many of these tech companies have fostered with the Trump administration.
Both thematically and practically, this partnership, forged in campaign contributions, public appearances and the elimination of internal dissent (see: Jeff Bezos and The Washington Post), represents a threat to the university system. The Trump White House, intent on canceling research grants, deporting students and revoking accreditation, has very clearly demonstrated its opposition to “the elites” of academia. Ignorant consumers, like ignorant voters, are easier to manipulate, and ignorance thrives when education falters. Trump stated his preference clearly at a Nevada campaign event back in 2016:
“We won with young. We won with old. We won with highly educated. We won with poorly educated. I love the poorly educated!”
According to a Pew Research Center analysis, Trump won the non-college-educated population by a 14-point margin (56 to 42 percent) in the 2024 presidential election, double his margin from 2016. The bad-actor alliance between Trump and big tech companies is no coincidence. They do not want you to write because they do not want you to think.
The falseness of LLM-generated content is a perfect fit for the reality-rejecting ethos of the Trump administration. Back in April, the White House was accused of outsourcing its world-altering tariff calculations to ChatGPT. In May, the Health and Human Services secretary, Robert F. Kennedy Jr., published a report that experts discovered was filled with what appeared to be AI-generated false citations.
These people have access to the greatest resources known to mankind. Why are they operating like bumfuzzled freshmen, submitting sloppy work at the 11th hour? Check the roster. The head of the Environmental Protection Agency has no experience working with the environment. The secretary of education is not an educator. The head of the Department of Housing and Urban Development is a former football player. The secretary of homeland security has never served in either an intelligence or defense capacity. RFK Jr. is a lawyer, not a doctor. Donald Trump is a reality TV star and convicted felon with six bankruptcies and numerous failed businesses to his name. If there is a better example of the Peter Principle in action on Planet Earth today, I don’t know it.
Jason Stanley, an expert on fascism previously at Yale University and now at the University of Toronto (the Trump administration’s actions spurred him to leave the country), identified “anti-intellectualism” as a signature feature of fascist movements.
As he writes, “Fascist politics seeks to undermine public discourse by attacking and devaluing education, expertise and language. Intelligent debate is impossible without an education with access to different perspectives, a respect for expertise when one’s own knowledge gives out, and a rich enough language to precisely describe reality.”
As Americans, we are in real danger of voluntarily submitting our cognitive faculties to LLMs for the sake of convenience, thereby weakening our ability to express truth and sort it from falsehood, a dilemma we already face with the advent of social media, extremist “news” networks and both foreign- and domestic-born disinformation. It is easier to give up than to resist. It is easier to delegate than to work hard. Aldous Huxley, author of Brave New World, knew this well. In a 1949 letter to George Orwell, he predicted,
“Within the next generation, I believe that the world’s rulers will discover that infant conditioning and narco-hypnosis are more efficient, as instruments of government, than clubs and prisons, and that the lust for power can be just as completely satisfied by suggesting people into loving their servitude.”
Our mothers gave us sage advice when we were children; they said, “Don’t take candy from strangers.” Like a creep in a white van, LLMs represent nebulous actors with nefarious purposes. In addition to stealing from countless unattributed human writers, companies like Meta and Google have demonstrated a careless—if not outright vampiric—interest in our personal data.
This technology is equally pervasive. There’s a van on every street corner, and the driver says I can save 30 minutes of work by outsourcing it to Gemini. Why shouldn’t I? Isn’t this a benefit to me as an employee? Game theory suggests otherwise: if all competitors offload their work in the same manner, none of them gets ahead. In a recent working paper published by the National Bureau of Economic Research, economists Anders Humlum and Emilie Vestergaard found that “AI chatbots have had no significant impact on earnings or recorded hours in any occupation.”
Perhaps a more important question is this: Where do we imagine that 30 minutes goes? The rise of “AI” has yet to instigate a four-day workweek, and it is unlikely to do so. Since the Industrial Revolution—from black lung to Black Friday—American workers have learned that innovations in productivity rarely manifest as increased pay or shorter work hours.
In the United States, labor conditions have improved only when collective action demanded it from lawmakers. Such was the case with Roosevelt’s New Deal in the 1930s. On their own, the steam engine, spinning jenny, desktop computer and mobile phone did not reduce how much workers were expected to produce. Rather, they set new production standards, profiting company shareholders. Line graphs of U.S. worker salaries and CEO earnings versus inflation over time bear this out quite strikingly.
Big tech corporations are currently installing LLM apps in every corner of our daily lives, degrading the accuracy of search engines, making it harder to reach human customer service representatives and filling the internet with identical templates and “slop.” This may be profitable for wealthy investors, but it is not progress for average Americans. Moreover, as has been reported by Business Insider and Time, among many others, this rapid incursion represents a serious threat to the livelihood of employees across multiple sectors. Micha Kaufman, founder and CEO of Fiverr, a multinational company offering an “AI-enhanced” platform connecting freelancers and businesses, said back in April that “AI is coming for your jobs. Heck, it’s coming for my job too.”
I imagine Kaufman can afford to lose his job. Can you? In the short term, corporate bosses may favor compliant employees who hastily enter prompts into LLMs, a skill that might take as much as a few hours of guesswork to develop. But leadership requires competence. Leaders make decisions, carry responsibility and know what to do when systems go down. If a 50-foot wave comes careening over your boat, whom do you want at the helm—a captain with years of sailing experience or one who is very good at asking AI what to do?
Every time I enter a new classroom on the first day of the semester, I look across the desks and wonder which of my pupils will be a part of the next big thing. Which of them will enter government service? Which of them will teach in my place when I’m gone?
Educators should not relent in pushing their students beyond the bounds of incompetence. Our collective goal should remain as it always has been—to inspire students to struggle and learn from that struggle, thereby forging new, more capable identities. I want my students to make something of themselves. What a disservice I’d do if, instead, I taught them how to delegate their potential to a machine.