History News Network

How Freaked Out Should Professors Be About Artificial Intelligence Language Tech?

Over the weekend, lots of folks working in academia learned that OpenAI's ChatGPT interface is now capable of producing convincing (though uninspired) college-student-quality writing on just about any prompt within seconds.

Some are worried that this is "the end of writing assignments." If a bot can churn out passable prose in seconds, prose that is unique to each request and so would not trigger any plagiarism detector, why would students go to the trouble of doing the work themselves when the outcome in terms of a grade will be the same?

Why indeed?

I was less freaked out than most, because I've had my eye on the GPT-3 large language model for a while, having written in March 2021 about the challenges the technology poses to the work of teaching and learning. Even so, I found the algorithm's leaps since that time remarkable. It now produces entirely fluent, competent (though dull) prose on just about any prompt you give it. It is not flawless: it can be tripped up by particular requests and will convey incorrect information. But it is extremely convincing.

So what are we supposed to do about this? I have a number of ideas, but to start, I think we should collectively see this technology as an opportunity to re-examine our practices and make sure how and what we teach is in line with our purported pedagogical values.

ChatGPT may be a threat to some of the things students are asked to do in school contexts, but it is not a threat to anything truly important when it comes to student learning.

The first thing to know is that ChatGPT has no understanding of content and does not evaluate information for accuracy or importance. It is not capable of synthesis or intuitive leaps. It is a text-generating machine that creates a passable imitation of knowledge, but in reality, it is just making stuff up.

That said, I fed ChatGPT a bunch of sample questions from past AP exams in literature, history and political science, and it crushed them. In many cases I did not know enough to evaluate the accuracy of the information, but as we know, accuracy is not necessarily a requirement for doing well on an AP exam essay. The fact that the AI writes in fully fluent, error-free English with clear structure virtually guarantees it a high score.

Read entire article at Inside Higher Ed