In Defense of the AI Essay
The end-of-term paper comes back riddled with em dashes and inexplicably bolded sentences. The man at the desk, a long-tenured teacher of ninth-grade English, unconsciously sucks in a breath. His eyes scan downward, and there it is: a bulleted list. He exhales slowly, a bit of hope escaping with it. He feels he almost shouldn't be surprised; it's the third such essay he has graded so far this cycle.
He vaguely recalls writing an essay on a similar topic when he was in secondary school. F. Scott Fitzgerald's use of symbolism in The Great Gatsby: a tired topic, to be sure, but worthy of some consideration all the same? He lays his glasses on his desk, suddenly lost in reflection on times long past. What kind of student was he, 20 years ago? Not a model example, he would be the first to admit. Besides, that was why he became a teacher in the first place. Sure, his essay at that age may have been poorly written and trite, betraying only the most rudimentary understanding of the classic American novel. He may even have fluffed his prose to meet the word count. But AI? Would he really have been so averse to original thought, so unconcerned with appearances, as to shamelessly pass off the work of ChatGPT as his own?
No, no, he thinks, that's a boundary I wouldn't have crossed. He blinks as he becomes aware of his moment of reverie. Grateful to have his crisis of integrity defused, he leans forward to resume grading. But as soon as his eyes meet the screen, a whirl of newfound doubt catches in his chest.
No, there’s no way I would have been so brazen. Besides, it’s not just dishonest, it’s plain wrong.
The second bullet point on the wretched list catches his attention. The Eyes of Dr. T. J. Eckleburg. That disembodied gaze, no less piercing in its moral appraisal for the paint peeling off its neglected billboard home. At once, the teacher becomes aware of the eyes gazing back at him. Ghostly apparitions a screen's length from his face, superimposed on the posters around his classroom as he looks about, frantic. God's detached judgment, emanating from his post overlooking the valley of ashes, turned upon him now. All-seeing, all-knowing. Eckleburg's eyes, already knowing the answer, seem to ask, "Are you quite sure you wouldn't?"
There's been a flood of moral panic over the use of AI in education; every third post on Substack seems to be about how ChatGPT is ruining minds. Alarm over the prospect of kids unable to write simple paragraphs and college students unable to sit through popular novels has proliferated among parents, cultural commentators, and solemn English professors. "Is AI rotting kids' brains?" is a provocative question, one that guarantees engagement in the current online climate. The problem is that it is, unfortunately, a dumb question.
Besides the tiredness of the "is [insert technology here] making kids dumber?" conversation, this particular iteration is especially irksome to me because it precludes much more interesting conversations about the future of schooling (which, coincidentally, I'm always thinking about) and about the continued legitimacy of academic assignments that a machine can complete in under 30 seconds.
In my experience as both a student and a school psychologist in public schools, writing assignments have borne a striking resemblance to math. They began with a standard prompt for the class. There were right answers (or at least it felt that way) and formulas for arriving at them. There was little personal writing and even less writing that felt personal: writing that felt voluntary and reflective, carried on the wings of intellectual exploration rather than adherence to a rubric. In this sense, the writing and the math felt much the same. The same prompts were doled out year after year, giving us the notion that nothing we wrote could possibly be new or interesting. So we stuck to the script, churning out the essays we thought our teachers would like, wishing we just knew what the right answer was.
I attended a competitive public high school in Silicon Valley. The schools in my district, like every other around the country but perhaps more blatantly, functioned as sorting hats for socioeconomic class. To paint in broad strokes, my school was divided between children of predominantly Asian immigrant and white parents who held upper-middle-class managerial and engineering jobs, and children of Latin American immigrant parents who held blue-collar jobs. The school's environment of academic rigor sorted kids into categories, ostensibly merit-based, that bore a suspicious resemblance to the ones occupied by our parents.
Standardized test scores and grades served as the instruments of sorting. By the time I was in high school in the late 2010s, the ACT and SAT had long been boiled down to their essences. Shrewd entrepreneurs recognized that these exams have much more to do with specific ways of thinking (strategies useful only during three-hour proctored tests) than with cognitive ability. Accordingly, there was (and is) a booming industry of test-prep academies promising to impart these esoteric skills for a hefty hourly rate. These courses were filled largely by the wealthy, and that was reflected in the scores. Unsurprisingly, scores on the SAT and ACT are more reliable indicators of socioeconomic status than of cognitive acumen.
At my school, it was probably more widely recognized than usual that grades meant everything. Learning (real learning, the kind where information is incorporated into your holistic understanding of the world) was incidental, something to be deployed only in service of grades. You may call this overly cynical; it struck me then, as it does now, as an astute reading of the dynamics at play. If you learned a great deal about the natural world and your place within it in your AP Biology course but were inconsistent about turning in work, you would receive a poor grade in the class. On the other hand, if you mindlessly grade-chased, crammed for exams, and turned in every assignment on time, you would be rewarded with an impressive grade, even if you retained nothing. The latter approach was everywhere. It seemed to be the norm for kids who needed excellent grades to get into their desired schools, which was to say just about everyone.
What, then, does this have to do with AI essays?
Everything.
It is this type of attitude toward education, this educational realism, that leads students to use every tool at their disposal. They are hustlers forced to play a game they know is rigged, doing what they have to do in a thoroughly transactional educational landscape to obtain academic currency, which, they are told, can later be exchanged for economic currency.
Let's simplify. A teacher sends a student home with a packet of three-digit multiplication problems. That student now has a choice: they can attempt, laboriously, to solve all the problems by hand and risk their grade in the process, or they can cut their labor time significantly, use a calculator, and ensure a perfect score. The second option may strike you as dishonest, but I'm convinced it is the more honest one. When would anyone ever find themselves desperately needing to multiply large numbers without a calculator? If the stakes of such an inane task were really so high, calculator use would be mandated.
The use of AI for school essays is a parallel phenomenon. ChatGPT is a calculator for high school and college essays. Every time a teacher assigns a graded essay, they acknowledge and exercise the power they hold over students to coerce them into work, with the implicit threat of academic and economic punishment held in reserve. With the threat of academic failure in hand, there is little reason even to pay lip service to convincing students of an assignment's utility. The social relationship degrades on both sides. Teachers begin to see students as not-quite-people who do as they are told or else, and students begin to see teachers as the first in a long line of gatekeepers to economic opportunity who must be appeased before they grant passage. With learning for its own sake out of the equation, given its complete material subordination to the measurable hard logic of grades, savvy students choose the calculator every time.
The reality of the capitalist education system is that providing desired answers for the purpose of positive evaluation is rewarded. Pursuing knowledge for its own sake and, crucially, at its own pace is punished by comparison. When we give students assignments with no thought to their agency or interests, we should regard their use of AI to satisfy the requirements as a natural adaptive response. Students are given no choice but to write the essay and be evaluated on it; why should they dignify that disrespect to their agency with the intimacy of their reflections, to say nothing of the hours of their lives? Why would they not give the teacher exactly what they think the teacher wants to hear? If we're being honest, ChatGPT can write that essay better than 90% of people, not just 90% of high school students. If that is the case, maybe we should leave those essays to the chatbots. If anything, we should be celebrating students' resourcefulness: finding a way not to waste time on an essay they never volunteered to write is precisely the kind of canny self-advancement prized by an every-man-for-himself economic system.
We are in a new era: AI chatbots have perfected the writing of the perfunctory, formulaic academic essay. AI's exposure of cracks in the foundations of the capitalist education system is not due to any inherent insidiousness in the technology or to some pathology in the new generation of students. Tools that let students shift the power imbalance in schools, reducing their own workload while maintaining the quality of the academic product, will always provoke uproars about brain rot and accountability. As far as I can see, we have two options. We can either fundamentally change the way we conduct economic life and its corresponding initiation via evaluation-based education, or we can put our heads back down and decide how to absorb this new disruption into the curriculum in a way that restores the stability of the old order. Until the next disruption comes around, that is.
