As the website formerly known as Twitter continues its slide into irrelevance, posts like this one have become more common. Don’t, says the would-be teacher, whose name was removed from their post before I encountered it, ever begin a sentence with the word “so.” Similar posts rehash Stephen King’s notorious demand to excise every possible adjective or adverb. Others include some variation on Quiller-Couch’s orphan injunction: “Kill your darlings.”
I acquired my distrust of gnomic advice early. In grade-school creative writing, teachers warned students to avoid the word “said” in dialog tags, lest our writing become monotonous through repetition. Somewhere between there and college, the zeitgeist shifted. Now writing teachers admonish us to avoid any word other than “said” in dialog tags, lest we become overblown, adjectival, and show-offy. Thus I realized that writing advice is faddish and insubstantial.
Stephen King warns writers to avoid adjectives and adverbs, and to remove all unnecessary content, advice he notably doesn’t follow. His books are often so overwritten, they’re physically difficult to hold. Matching advice to trim supporting characters, side plots, and world-building digressions aims to reproduce the terse, telegraphic prose made famous by Ernest Hemingway or Elmore Leonard. In other words, it’s about following the crowd to bestseller status.
This week, as I contemplated how to rebut these specious proverbs, seemingly designed to produce work that authors and audiences inevitably hate, news came down the pike. According to The Chronicle of Higher Education, Arizona State University has launched a pilot program with OpenAI, owner of ChatGPT, to tutor freshman composition students. This becomes another opportunity to trim boring old teachers, with their salaries and demands, from the education process.
Despite the popular rhetoric, ChatGPT isn’t artificial intelligence; it’s a machine-learning heuristic, a dynamic program designed to let computers learn from existing material. In particular, ChatGPT ingests massive amounts of existing prose and teaches itself how to construct passably similar content. Technology philosophers argue over whether it has any capacity to understand the content it reads or creates, but it doesn’t matter. ChatGPT is essentially a high-tech mynah bird.
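To make the mynah-bird point concrete, here’s a deliberately crude sketch, nothing like ChatGPT’s actual architecture or scale, just a toy word-level Markov chain of my own invention: it records which words follow which in a sample text, then parrots statistically plausible sequences back without any grasp of what they mean.

```python
import random
from collections import defaultdict

# Toy illustration only: a word-level Markov chain, not ChatGPT's
# actual architecture. It "learns" nothing but which words tend to
# follow which in its training text.
corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat watched the dog"
).split()

# Record, for each word, every word observed to follow it.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def parrot(start_word, length=8):
    """Generate text that resembles the corpus without understanding it."""
    word, output = start_word, [start_word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(parrot("the"))  # e.g. "the dog sat on the mat and the cat"
```

Scaled up by many orders of magnitude, the output becomes far more fluent, but the basic trick is still predicting a plausible continuation of existing text.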
Therefore, if we expect ChatGPT to teach college-level writing, we can at best anticipate that it will encourage students to write sentences and paragraphs which fit the program’s heuristic. It will regard individual voice, unique narrative, or personal interest as distractions. Just as ChatGPT itself only produces prose that satisfactorily resembles existing prose, it’ll demand likewise from students. This will produce bland, unoriginal writing that its own writers hate.
Much like the rules-based writing taught on Xitter.
Teaching writing by learning heuristic defies the purpose of college writing. It presents prose not as an attempt to explore the human condition, convey valuable information, or spin a constructive line of bullshit, but as a product to extract, like coal from a mine. If extracting text from human writers proves too costly, time-consuming, and laborious, fire them; outsource it to machines. Human writers are dangerous anyway, and use excessively big words.
Elaborate, oppressive writing rules share the same message. Excising adjectives and adverbs, the words that give nouns and verbs their flavor, and trimming side quests to create a sparse narrative that translates to film, both declare authors the enemies of prose. Anything that shows individual personality or a spark of character impedes slick commercial prose, which should roll out like cars from a Detroit assembly line.
Please understand, I’m not exaggerating. I’ve used this example repeatedly, but it remains relevant: according to Charles Duhigg, record label executives expected OutKast’s single “Hey Ya!” to become a runaway hit. Not because they particularly liked it, but because in-house software declared it sufficiently like previous hits that passive listeners would devour it. When it initially struggled, industry insiders gamed the market to achieve the software’s projected outcomes.
Now we’re applying similar principles to writing. By making new prose sufficiently similar to existing prose, and excising any spark of character or enjoyment, we ensure readers can consume writing passively, like they consume petroleum. It matters not one whit whether writers craft novels, scripts, business reports, or journalism. We expect everyone to read as submissively as if they’re doomscrolling Xitter.
As a teacher, I cajoled students to use their own voices, and several of them succeeded. Youths who grew up resenting reading and writing found an unanticipated joy when they discovered writing wasn’t just crinkum-crankum grammar exercises. But now these forces want to walk that progress back. Whether through human-made rules or machine-learning heuristics, they would make writing as bland and joyless as possible.