Take notes, Nate Silver! Reinventing literary criticism with computers

As a literary critic who says he aims to study books without actually reading them, Franco Moretti has positioned himself as an iconoclast. He has described what most non-academics think of as the essence of literary criticism — singling out the best works and describing what’s great about them — as “a theological exercise.” Instead, as a co-founder of the Stanford Literary Lab, Moretti (with his students) “discusses, designs, and pursues literary research of a digital and quantitative nature.” That means treating books like data: taking massive digital archives of texts and using computers to scan them for patterns no human reader would have the time, attention or patience to sift out. There’s no set name for this method; it’s been dubbed everything from “quantitative stylistics” to “computational criticism.”

You’d think that mainstream literary critics would decry Moretti’s approach as bloodless or absurdly obvious, and indeed, some have. Yet Moretti’s 2013 book, “Distant Reading,” recently won the National Book Critics Circle award for criticism. NBCC board member Anne Trubek praised “Distant Reading,” a collection of Moretti’s essays, as “compelling cultural and historical analyses” that are “light on declamations and heavy on a sort of wide-eyed excitement and curiosity.” She’s right: If you conflate academic literary criticism with impenetrable jargon suspended in a void light-years away from the books that make people want to study literature to begin with, Moretti will surprise you. His prose has a vigor, clarity and informality that would delight any reader. I spoke with him on the phone recently about his work, including its almost unprecedented focus on the history of the vast numbers of bad and forgotten books published every year.

One of the aspects of your work that’s the most counterintuitive at first glance is that you’re not that interested in studying literary masterpieces. You study literary works in large masses, regardless of whether they’re good or not. You’re not looking at “Middlemarch”; you’re looking at 7,000 mostly mediocre Victorian novels. Why is that interesting to you?

First of all, those novels were there, it’s just that there wasn’t the desire to understand what those 7,000 — or, rather, 6,900, if we don’t count the ones that are still being read today — authors had in mind when they were writing. Why do so many people write things that others don’t like to read in the end? What is going on?

[Laughs] I wonder that all the time in my work. I can see why the National Book Critics Circle would find that a good question.

It’s really a question of social history and conventions. I’m interested in understanding the culture at large, rather than just its best results. I have no doubt that canonical books are best — although we can spend days arguing what “best” means. But it’s not enough for me to understand that. I want to understand the broader conventions, the field of attempts and failures, hoping that that may tell us something significant about the culture we live in or that others have lived in.

The truth of the matter is that we haven’t yet completely succeeded. We have these new tools, these telescope-like things that allow us to see many more texts than was possible before, just like the telescope allowed Galileo to see many more stars....


Read entire article at Salon