Why I'm Not Afraid of ChatGPT

Roundup
tags: technology, teaching history, artificial intelligence

Each time I embark on a new writing project, I find that I’ve forgotten how to write. I type and delete sentence fragments. I list claims in a random order — then decide that most of them are indefensible. It feels awful. I feel stupid. But from long experience, I know these feelings will eventually subside. Soon, I’ll see the outline of an argument; I’ll trace it badly, then better, then well. At some point, I’ll start imagining an audience whose phantom quibbles and confusions can be addressed by writing better.

This is what I value most in writing: the way it carries me from confusion to understanding, enforcing standards of clarity and persuasion along the way. I learned this by writing essays for my own humanities professors — and it’s what I now try to teach my students.

The recent release of ChatGPT, a language-generating tool from OpenAI, has inspired dark fantasies in the minds of some humanities teachers. “The College Essay Is Dead,” they declare; we are facing “The End of High-School English” — the titles of two essays from The Atlantic. But these concerns are not so much about writing, understood as a process and an adjunct to thought, as they are about writing assessment, understood as a tool for sorting students and awarding distinctions. How will we “judge” our students accurately, asks Stephen Marche, when the writing process “can be significantly automated”? What will replace writing assignments “as a gatekeeper [and] a metric for intelligence?” asks Daniel Herman. This focus on assessment then calls into existence the kind of student most easily assessed: one entirely unentangled with technology.

But if we treat learning (not distinction) as the goal of education, then generative AI looks more like an opportunity than a threat. As software that can simulate human thinking, it may indeed create some thoughtless students who rely on it too heavily. But it might also create students who are ready to think twice, to push beyond statistically likely ways of thinking. This sort of student, ready to demand more than AI can provide, will be precisely what an age of generative AI requires: people who understand the difference between human and machine intelligence, and who therefore won’t mistake its glibbest outputs for the horizon of all human thought.

In early December, I decided to prove this point by staging exactly the scenario that is giving some of my peers in the profession indigestion: I asked students to spend an hour trying to get ChatGPT to write a draft of their final projects for them. Before I set them loose, however, I wanted to model how to engage critically with ChatGPT. So, I briefly shared and analyzed my own attempts to get ChatGPT to write a final lecture for the course, a gen-ed English lecture called “Listening to Podcasts,” which introduces students to the history of podcasts and teaches them how to analyze different podcast genres across time.

After spending much of the previous evening with ChatGPT, I had landed on the following prompt for it: “Write a lecture about how podcasts are developing toward greater complexity and aesthetic ambition.” I had tried broader prompts in hopes of getting more complicated responses, but they produced only boring boilerplate. I had also tried giving it a sequence of arguments to make, but this only made each argument shallower — while also highlighting ChatGPT’s failure to sustain the logical connections I had provided between one argument and the next. Instead, I had found the most success by giving it a single, simple argument to make. That’s what I shared with my students: six paragraphs totaling 430 words.

Read entire article at Chronicle of Higher Education