Showing posts with label criticism. Show all posts

Friday, August 30, 2024

Some Overdue Thoughts on Neil Diamond

Neil Diamond in 1971, the year he
released “I Am… I Said”

I started ragging on Neil Diamond’s 1971 top-five hit “I Am… I Said” years before I heard it. Despite its high Billboard ranking, it generally isn’t regarded among Diamond’s greatest hits—let’s acknowledge, it’s no “Solitary Man” or “Sweet Caroline.” It doesn’t get extensive classic rock radio airplay like other recordings from Diamond’s peak years. Even for many fans, it’s largely a cipher.

Therefore, when humorist Dave Barry made it a recurring theme to belittle Neil Diamond in general in the 1990s, and “I Am… I Said” particularly, I didn’t blink. I knew Barry’s mockery was exaggerated for comic effect, because no matter how earnestly over-written Diamond’s hits were, hell, the man still wrote “I’m a Believer” and “Cherry Cherry,” and I’ll fight you if those aren’t classics. But “I Am… I Said”? Surely radio programmers buried it on purpose.

Barry quoted Diamond’s lyrics, particularly the central hearing-impaired chair, extensively. He said nothing about Diamond’s music, his life, or the cultural context amidst which Diamond wrote. Barry simply threw out Diamond's refrain lyrics, which aren’t exactly Robert Frost. Without context, and especially without the more subdued stanzas surrounding the refrain, the lyrics looked bathetically ridiculous, like an Angora cat in the rain.

Superficially, I had no reason to believe Dave Barry wasn’t representing Neil Diamond accurately. If I’d thought more deeply, I would’ve realized Barry also pooh-poohed “Cracklin’ Rosie,” which is maybe a bit overproduced but seriously still slaps. Cool, rational thought might’ve told me that, if Barry disparaged a banger like “Cracklin’ Rosie,” maybe his representation of “I Am… I Said” wasn’t wholly reliable.

In my limited defense, I hadn’t turned twenty yet.

Years later, I finally heard the song. When my local radio station played the opening riff and first stanza, I immediately identified it as belonging to the 1970s, a decade when hippie utopianism began surrendering to ennui, age, and the realization that changing the world required more than optimism. Though most artists didn’t record anything quite this melancholy until after 1973, the song is instantly recognizable as a product of its time.

More importantly, “I Am… I Said” is pretty good. It isn’t Diamond’s best, not in a career that produced classics like “Red Red Wine” and “Kentucky Woman,” but it’s a substantial glimpse into the psyche of a man facing his own age and mortality. The contrast between Diamond’s understated, more poetically complex stanzas and the ostentatious orchestra behind his choppy refrain presages later anthems to adult futility, like Nirvana’s “Smells Like Teen Spirit.”

Neil Diamond in 2018, the year his
health forced him to retire from touring

I believed Dave Barry’s criticisms in the 1990s because I hadn’t yet heard Diamond’s song, and I presumed Barry represented it accurately. I never considered that Barry, a humorist, might privilege the joke above the facts. In 1993, when one couldn’t check YouTube or Spotify to verify the source, I chose to assume Barry was essentially honest. I adopted Barry’s jokes as my own opinion, and repeated them for nearly thirty years.

We all sometimes adopt others’ opinions as our own. Nobody can possibly have encyclopedic knowledge of, say, climate science or presidential politics or big-ticket TV productions. We must trust scholars, critics, friends, and others. When that happens, we must obviously evaluate whether that person’s opinion is trustworthy enough. Is the scholar scholarly enough to be reliable? Has the movie reviewer seen enough movies?

Dave Barry is probably the funniest White person of my lifetime, a man who often extracted comedy from well-written descriptions of furniture. He commanded language to cultivate emotions in readers, without depending on voice and performance, a mark of somebody who thinks deeply about every word and phrase. Because he commanded written English with an ease I find enviable, I presumed Barry must’ve thought equally deeply about his subjects.

It never occurred to me that Barry might’ve misrepresented his subject, or omitted information that would’ve influenced my opinion, such as Diamond turning thirty, divorcing his high school sweetheart, or having little to show for his career. I trusted the evaluation of a critic who, it appears, was more invested in the joke than the facts. Barry’s take-down of Diamond’s lyrics remains hilarious, but frustratingly divorced from reality.

This forces me to ponder: what other untrustworthy “experts” have I trusted? As an ex-Republican, I certainly shouldn’t have trusted P.J. O’Rourke and Thomas Sowell, who influenced my early politics. My parents admitted the ideas they taught me were often informed by fear. Much of adulthood involves purging false teachings from untrustworthy mentors who concealed their agendas.

And that chair totally heard you, dude.

Tuesday, October 17, 2023

The House of Usher and the Fall of Capitalism

Promo photo from Netflix’s The Fall of the House of Usher

Near the climax of Netflix’s new adaptation of The Fall of the House of Usher, Madeline Usher (Mary McDonnell) delivers a monologue almost directly at the camera. Half-mad with grief, Madeline rants about the American condition, taking in almost everything synoptically: runaway consumerism, prescription drug abuse, wage stagnation, the Dobbs decision. Then she asks her moon-eyed brother Roderick (Bruce Greenwood): are we culpable for this? Or do we just provide services which market forces demand?

The “we” in this question is the rich. In Poe’s original short story, the Usher family’s wealth is poorly defined, the inheritance of a waning feudal aristocracy, and Madeline barely speaks. But in this adaptation, the Usher family’s wealth is very specific: they own Fortunato Pharmaceuticals, which aggressively markets an opioid drug to pain sufferers worldwide. This unsubtle dig at the Sackler family, owners of Purdue Pharma, manufacturer of OxyContin, is just one of the series’ anti-capitalist themes.

First, the overview. This is filmmaker Mike Flanagan’s fifth Netflix limited series, and the third adapting American horror properties, after The Haunting of Hill House and The Turn of the Screw (as The Haunting of Bly Manor). Like the previous two, this one samples liberally from the source author’s collected works, adapting the spirit of the text rather than the words. It’s beautifully paced, somber, and elegiac, but also gory enough for seasoned horror fans.

Flanagan’s previous adaptations have frequently commented on contemporary society. Midnight Mass called out religious intolerance, for instance, and the ways that outward piety invites moral rot into communities. Usher, however, is distinct for not only having one-to-one criticisms of contemporary issues, but also for calling those issues out by name. In later episodes, several characters deliver monologues like Madeline’s, sounding suspiciously like courtroom closing arguments. Fitting, since the frame story is Roderick Usher’s “confession.”

The Usher twins have achieved earthly power by selling consumers what they think they want. Early on, in flashback, young Roderick (Zach Gilford) pitches the motivating opioid drug, Ligadone, by promising “a world without pain.” It’s unclear whether he knows, making that pitch, that his drug doesn’t cure pain, only defers it. In later episodes, he clearly does know. But he tells regulators and corporate execs what they want to hear.

Carla Gugino as Verna, a mysterious presence hovering over the house of Usher

Roderick’s six children represent different forms of vice. None grew up with Roderick as primary caregiver, and he bought their affection by showering them with money and the appurtenances money can buy. Therein lies an important difference between Poe’s world and Flanagan’s. In Poe’s “The Masque of the Red Death,” fat aristocrats throw a party to flee a plague-ravaged world. Flanagan’s Prospero Usher throws the same party, but flees a world of excess, not lack.

Of Roderick’s six children, three explicitly abuse drugs. Two others indulge bizarre, distorted sexual appetites. Of the six, only Victorine, a surgeon, lacks conventional vices; but she’s a workaholic. Just because an addiction isn’t illegal, doesn’t mean it isn’t harmful. As addiction specialist Dr. Gabor Maté writes, addictions are almost always manifestations of the abuse and neglect we’ve suffered, either as children or adults. The Ushers got rich selling substance abuse, while indulging covertly themselves.

In other words, the Ushers endure the same capitalist “win or die” mentality they’ve exploited to harm others. They simply game the system sufficiently to defer their suffering onto the future. Money grants the family an illusion of control, and encourages Madeline’s frequent exhortations that the Usher family is destined to change the world. (Madeline’s unflattering parallels with Hillary Clinton’s post-hippie idealism grow increasingly pointed.) But the Ushers never fix the system, only exploit it.

The series’ conclusion implies a just universe, where the more pain the Ushers have deferred, the faster it rolls over them. Flanagan, an atheist, doesn’t believe the Ushers will face consequences in “the next life,” but he declares everyone will answer for their transgressions. The script compares the Ushers to Donald Trump, the Koch brothers, and Mark Zuckerberg, all facing greater or lesser consequences for their actions. Flanagan apparently believes our universe is ultimately just.

I must acknowledge Netflix, which has recently demonstrated a willingness to platform blunt anti-capitalist messages. Black Mirror’s recent sixth season is tightly self-referential, with episodes tied together by “Streamberry,” a media platform so obviously modeled on Netflix that one can only laugh. As the Disney model of conglomerate media has grown increasingly silly, Netflix has shown a willingness to laugh at itself. Evidently Reed Hastings can do something Roderick Usher can’t: change with the times.

Tuesday, August 29, 2023

Gender and Morality in Horror Fiction: First Thoughts

Victor LaValle

I enjoyed Victor LaValle’s latest novel, Lone Women, with its unromantic tone toward the American frontier, and its themes of pervasive loneliness. But the longer the story percolates in my memory, the worse the novel’s resolution sits with me. (Spoiler alert.) Having overcome the Montana community’s narrow-minded attitudes and its easy recourse to violence, the titular women decamp upriver. Their reputation travels ahead of them, though, and soon they’re the nucleus of a women’s prairie utopia.

According to LaValle’s acknowledgements page, this resolution comes partly from his wife, Emily Raboteau, who asserted that women paying for men’s egos has become cliché. Which, in fairness, it has. I appreciate the sentiment, yet the outcome is awkwardly moralistic. The villains, whose behavior we recognize as villainous because it essentially amounts to lynching, pay for their transgressions. The heroines, having passed through mortal terror, emerge baptized in purity, offering to fix society for us.

I don’t want to disparage uplifting or redemptive endings. In a world where justice and deliverance often seem painfully rare, fiction often reminds us that the possibility still exists, and remains worth striving after. But simultaneously, horror fiction often reminds us that injustice still exists, that outcomes don’t reflect our deserving, and life is frequently outside our control. And I’ve noticed an unexpected pattern: the authors most willing to embrace horror’s injustice are often women.

Catriona Ward

Please don’t misunderstand: this judgement comes, I confess, from a place of ignorance. Though my generation grew up with the 1980s slasher movie boom, my parents strictly enforced age-appropriate content bans. As children do, I retroactively convinced myself that I didn’t enjoy horror anyway, until even I believed it. Though I dabbled in Stephen King or movies like Near Dark in my twenties, I didn’t embrace the horror genre until well into my forties.

So I probably have an excessively small sample for broad generalization. From my perspective, however, women not only seem more willing to put readers through a more aggressive gantlet, but also have courage enough not to walk back the pain. I remember that, in Stephen King, the dead stay dead, and he doesn’t offer Jack Torrance or Margaret White easy redemption arcs. But he usually concludes with family-oriented bonding, and a sense of “lessons learned.”

Authors I’ve enjoyed recently, like Ania Ahlborn and Catriona Ward, avoid this. Ward’s The Last House on Needless Street features several alternating viewpoint characters, most of whom have their stories resolved, though not necessarily happily. Ted Bannerman gets a shot at healing, but only after his beloved cat Olivia must disappear, and he must forcibly restrain his daughter Lauren. And Ted’s neighbor Dee, whose story runs on obsessive, vindictive illusions, doesn’t receive a proper funeral.

Ania Ahlborn

Ahlborn doesn’t even offer her characters that. In Brother, Ahlborn’s protagonist Michael is meek, amiable, and endearing, and we want him to wind up with the girl who clearly loves him. But we also realize Michael doesn’t deserve redemption. He’s been complicit in his family’s crimes for over a decade. Ahlborn’s bleak, apocalyptic conclusion is almost poetic in its justice, at least for Michael, if not for everyone he’s hurt through childlike malice or inaction.

Perhaps my first deep dive into horror cinema, Leigh Janiak’s Fear Street trilogy, adapts this loosely. Trilogy protagonists Deena and Sam get their happy-ending kiss, eventually, after enduring constant supernatural abuse. But in the first film, Kate and Simon suffer gruesome onscreen deaths which reset the trilogy’s tone. The second film features a scene where the killer corners a roomful of children, and we’re sure director Janiak will somehow spare the kids. But we’re wrong.

Stephen King and Victor LaValle apparently think the horror their characters endure must “mean something,” that despite the pain, the survivors gain something from the experience. We could continue: The Exorcist’s Father Damien Karras dies, but Chris MacNeil gets her daughter back. People must suffer, but things ultimately end “well.” Ward, Ahlborn, and Janiak seemingly don’t agree. Though Ted Bannerman or Deena and Sam have their “happy endings,” their trauma isn’t redemptive, it’s just awful.

At present, I have only thoughts, no explanations for this apparent divide. Perhaps some difference in how women experience a society that centers men’s pains and sacrifices. Perhaps a natural ability to absorb physical pain lets them not minimize psychological pain. Perhaps there’s no explanation, and I’ve simply observed a coincidence. Time and familiarity with the literature might explain everything. Or maybe I’ll discover a better class of confusion. Either way, I’ll have fun learning.

Wednesday, August 2, 2023

Neil Gaiman and the Road to Truth

Michael Sheen and David Tennant as Aziraphale and Crowley in Good Omens 2
This essay contains spoilers.

Neil Gaiman and Terry Pratchett’s novel Good Omens specifies that Aziraphale and Crowley, its angel and demon protagonists, don’t have a sexual relationship. Though Pratchett passed away in 2015, Gaiman maintained this parameter when adapting the novel for the 2019 BBC/Amazon joint production. Though he didn’t deny anybody their personal headcanon, he rejected the idea that Aziraphale and Crowley’s relationship was anything but platonic.

Therefore it’s sudden and jarring when, in the final minutes of Good Omens 2 (seriously, spoilers), Crowley grabs Aziraphale roughly and kisses him. It’s the first moment that concretely sexualizes the characters. Throughout the season, Aziraphale and Crowley struggle to engineer a meet-cute between their neighbors, Nina and Maggie. But they fail miserably, because their knowledge of human romance comes entirely from Richard Curtis movies and Jane Austen novels.

Understanding the change requires understanding the context. Though Gaiman and Pratchett share billing on the original novel, Pratchett did most of the actual writing; Gaiman, a novice prose writer, wasn’t equipped to write an entire novel. Pratchett wanted to remain faithful to the Abrahamic mythology their novel satirized, which meant that transcendent beings lacked binary gender. To pinch a Kevin Smith line, angels are as sexless as Ken dolls.

Although Good Omens 2 is co-written by Gaiman and John Finnemore, it’s the first time the setting reflects exclusively Gaiman’s vision. And it bears noting Gaiman’s other recent streaming success: Sandman on Netflix. Not only does Sandman contain a noteworthy number of same-sex couples, Gaiman even gender-swaps John Constantine, a longstanding DC Comics character, to create increased Sapphic tension. Same-sex partnerships mean something to Gaiman.

In Sandman episode 5, Bette, a diner waitress, expresses purblind views about sexual identities. She claims Judy, a regular customer, is too pretty to be a lesbian, and engineers a meet-cute (another recurring theme) with another customer, Mark. But when John Dee, empowered by Dream’s magic ruby, stops everyone from lying and strips away their inhibitions, Bette and Judy find themselves entangled in a passionate embrace. That, the story implies, is their truth.

Throughout Sandman, Gaiman uses same-sex relationships as shorthand for characters who follow their own moral code. Johanna Constantine, Bette and Judy, Hal Carter, and Chantal and Zelda are all depicted as characters unbeholden to convention, free of judgement, and wholly alive. This freedom isn’t necessarily “good” in any moral sense, as The Corinthian’s ravenous sexuality is second only to his murderous impulses. But it does mean one is unbound.

Shelley Conn and Jon Hamm as Beelzebub and Gabriel in Good Omens 2

Good Omens depicts a world deeply bound to binaries: good and evil, Heaven and Hell. We glimpse both eternal realms: Heaven is orderly, brightly lit, and aseptic, while Hell is noisy and cluttered, and several denizens show signs of gangrene. Both realms also keep demanding that transcendent beings, like Aziraphale and Crowley, and their human allies make binding declarations for one side or another. They demand complete moral absolutes.

Crowley and Aziraphale, however, spend the entire series finding ways to thread the moral needle. Both beings balk, for instance, at the biblical Job’s predicament, with its requirement to kill, and gradually devise a workaround. When they find an urchin robbing graves to escape poverty, Aziraphale learns that humans face degrees of wrong, while Crowley decides that death doesn’t resolve his sympathies. Broken moral bromides litter this story like flies.

Therefore, when the series reaches its culmination and (again, spoilers) the Metatron offers Aziraphale command of Heaven’s forces, the offer pushes the limits of Gaiman’s disdain for moral absolutes. By accepting it, Aziraphale must don Heaven’s moral straitjacket, something Crowley can’t do. Crowley would rather continue mapping his own moral landscape, something both beings have done successfully for millennia. But, as Aziraphale notes, they have little to show for it.

The textual evidence suggests Gaiman believes all absolute morals eventually collapse. But that doesn’t mean seeking one’s own moral resolution makes everything better. Johanna Constantine, John Dee, and now Aziraphale and Crowley have manufactured their own moralities, evidenced by their rejection of sexual identity myths, but they’re also terribly lonely. They occupy society’s margins, with only a few friends. Their stand requires courage and durability that most people lack.

When Crowley kisses Aziraphale, therefore, it’s arguable whether the action is even sexual. Maybe the characters remain, as both authors assert, essentially sexless. But Crowley demands, with his kiss, to know whether joining Heaven’s moral absolutes will, as Aziraphale claims, make a difference. Does morality, without context, mean anything? Neil Gaiman seemingly thinks not. Truth may be a lonely road, Gaiman suggests, but it’s the only one worth walking.

Friday, July 28, 2023

The Women’s Odyssey

Maria Tatar, The Heroine with 1,001 Faces

What do women do while men leave to vanquish dragons and cross trackless seas? Is it possible for a woman to be a hero? Joseph Campbell, whose major work The Hero With a Thousand Faces popularized the idea of a “hero’s journey,” believed heroism was a singularly male pursuit (while teaching at a women’s university). In Campbell’s precepts, femininity is womb and grave, wife and temptress, a heroic man’s original source and his ultimate destination.

Maria Tatar, professor of German and children’s literature at Harvard University, sees a second track running under mythology. While men become heroes by leaving home and swashbuckling through the world, women often become heroic in how they resist. Put another way, heroism is something men find; it’s something women have thrust upon them, sometimes bodily. Tatar unpacks threads of feminine heroism from classical mythology and medieval folklore to modern Hollywood, sometimes with decidedly mixed results.

In the oldest mythology, Tatar finds women struggling to maintain an identity while men constantly try to control them. Helen of Troy finds herself passed, hot potato-like, between the hands of male heroes, her story getting lost along the way. Philomela literally loses her voice to her rapacious brother-in-law, who severs her tongue after violating her. But through embroidery, that quintessential “women’s work,” she reclaims her voice and receives deferred justice.

Similar themes recur in Tatar’s telling, but importantly, when women find their voices, others take those voices away again. Arachne, the famous weaver whose skills challenge the gods, is a good example. In Ovid, the goddess Athena punishes Arachne, not because her weaving is too skillful, but because she uses it to call out the injustices of the Olympian gods. Modern mythologists reverse this, though, turning her into a moralistic warning against simple pride.

Nor are the connections to modernity incidental. Then as now, women seek the autonomy to tell their own stories, which they can frequently only achieve through subversive means. Consider how the #MeToo movement won its incremental successes despite, not because of, conventional media. Women fight a system designed to preserve the status quo of power and freedom, even when the existing system rewards the already excessively rewarded, and silences those who call injustice by name.

Maria Tatar

Tatar especially appreciates women who bring the ancient unresolved questions into the modern world. She extensively unpacks authors like Margaret Atwood, Pat Barker, and Madeline Miller, who rewrite the classical myths from a woman’s viewpoint. In the Homeric traditions (which Joseph Campbell considered normative), women are either largely voiceless, like Penelope or Briseis, or downright villainous, like Circe. Tatar loves when women writers return to the ancient well and give silenced women their own voices.

Continuing into medieval folklore, Tatar examines the same themes as they recur—or, just as importantly, as they’re silenced. French fairy tale author Charles Perrault writes in “Bluebeard” of a woman captive to a terrible husband, who discovers the truth and is rescued by her brothers. But when the same story reappears in the oral tradition, usually told by women, as in the Brothers Grimm’s “Fitcher’s Bird,” the beleaguered bride rescues herself, because there’s nobody else.

Tatar’s explanatory skills work best in the classical and medieval myths that mostly inspire her and Campbell. Moving into the modern era—which, since the middle Twentieth Century, mostly means movies and TV—her critical skills become more synoptic and brief. Maybe she expects her audience to already be familiar with the Hollywood stories she mostly just mentions and briefly describes. But the product feels rushed; she doesn’t so much unpack Hollywood as name-check it.

That said, she does trace the thread of women’s resistance to worldly injustice. From Cassandra, who gets mocked and derided for speaking the truth, and Scheherazade, who tames the destructive monarch by telling tales, to modern mythic tales like Little Women and Wonder Woman, Tatar sees something continuous. Women throughout literary history have established themselves by telling their counter-narratives, keeping their stories alive against men’s attempts to erase them. Women survive by preserving and passing along their stories.

Maria Tatar is hardly the first scholar to postulate a feminine analogue to Campbell’s “hero’s journey.” This book’s Amazon page links to at least two books entitled “The Heroine’s Journey.” Tatar brings her contribution, a knowledge of classical and medieval mythology as capacious as Campbell’s own, arrayed thematically to demonstrate that women are no less heroic, just because they don’t conquer. Women, arguably, do something more heroic: they face an unjust (male) system, and survive.

Also by Maria Tatar:
Enchanted Hunters: The Power of Stories in Childhood

Thursday, June 29, 2023

Monty Python and the Problem With Religion

The six members of Monty Python on the set of The Life of Brian in Tunisia, 1978

My target audience largely catches the reference when I write: “He’s not the Messiah, he’s a very naughty boy!” Brian Cohen's climactic sermon in Monty Python’s 1979 classic The Life of Brian is among cinema history’s funniest scenes. Brian desperately wants the crowd that has adopted him as Savior to think for themselves, but they won’t listen. Brian’s domineering mother just wants the crowd to go away, but inadvertently riles them up.

Brian’s only fully-developed homily turns on one statement: “You’re all individuals! You’re all different!” But his followers respond, in the paced unison viewers remember from hearing congregations pray together: “Yes, we’re all different!” To the Monty Python members, all unbelievers, this statement probably summarizes the ways religion encourages groupthink and steals autonomy. But rewatching the movie recently, for the first time since VHS days, something struck me about his crowd:

They’re all Jewish.

Jews have maintained their collective identity since the Late Bronze Age, in substantial part, by maintaining their rituals. Jewish religious observance doesn’t rely upon individual belief, the way Christian and Muslim rites do. Instead, Jewish traditions involve reënacting pivotal moments in Jewish folk history, like the Passover or the Maccabean rebellion. Whether these events happened as enacted doesn’t much matter, though; what matters is, they do them together.

This doesn’t mean Judaism has always exercised that authority benevolently. By the late Second Temple Era, when Jesus preached, Judaism had developed stark sectarian divisions over “correct” observances, divisions only closed when the Jewish Wars against Rome saw most sects destroyed. Throughout the Gospels, Jesus’ repeated complaint against religious leaders holds that Jewish observance had become robotic, and deaf to the cries of the poor and dispossessed.

Monty Python mocks this robotic tendency through the “People’s Front of Judea,” a revolutionary sect comparable to the Zealots. We can encompass everything wrong with the PFJ in leader Reg’s legendary line: “All right, but apart from the sanitation, the medicine, education, wine, public order, irrigation, roads, the fresh-water system, and public health, what have the Romans ever done for us?” Life under Roman occupation sounds pretty nice.

Brian Cohen (Graham Chapman) approaches his denouement in The Life of Brian

I might find this less jarring if it weren’t spoken by John Cleese, who was born relatively rich and attended Cambridge University, a bastion of British imperialism. The parallels between the feuding Jewish revolutionaries and divisions within separatist groups like the Irish Republican Army are too obvious to ignore. Under Monty Python’s analysis, freedom fighters look ridiculous, while the empire, though hardly benevolent, never hurt anybody who didn’t deserve it.

Because of Judea’s strategic location along trade routes linking Africa, Europe, and Central Asia, empires have repeatedly conquered it and attempted to assimilate its people. Though Jewish ethnic identity arguably dates to Moses, Jewish religious identity dates to Jeremiah and the Babylonian Captivity. The Maccabees revolted against the Greek-speaking Seleucid Empire because the Greeks tried to forcibly assimilate the Jews, pollute their temple, and defile their women.

These tactics sound remarkably similar to techniques the British (English) Empire used to conquer other peoples. Closer to home, they’ve been remarkably successful: Irish and Scottish Gaelic are minority languages in their homelands, and Cornish is extinct. Further afield, British conquests drove native nations of India, Africa, and the Americas to abandon their separate identities and local wars, and forge racial and ethnic allegiances to expel the invading empire.

Please don’t misunderstand: I’m not suggesting Monty Python is objectively pro-Empire. That claim exceeds what the movie itself supports. However, like Agatha Christie or Roald Dahl, the Pythons emerged during the British Empire’s dying wheezes and, as a bunch of White, male, mostly heterosexual Oxbridge boys, realized the dying Empire was taking their privilege with it. To them, the dying Empire was their culture’s ambient background noise.

Yes, religion can steal followers’ individuality and autonomous thought. But turning people loose hardly works better, as post-Christian Western Civilization demonstrates. Without religion’s catechistic approach to building a soul, atomized individuals glom onto whatever political party, commercial enterprise, or pop-culture fandom offers them a desirable group identity. Despite Nietzsche’s claims, no person becomes fully individualized without a foundation to build from.

Brian’s followers aren’t stupid and mindless, as the movie implies. They’re simply an occupied people, seeking a leader to offer them a shared identity, goal, and strategy. Without their devotion to Brian, they have only commerce, arena sports, and worst, Roman imperial politics. There’s no guarantee Brian could’ve saved his people, certainly. But the alternative is assimilation. Look around you and ask how that’s working out.

Friday, June 23, 2023

Some Thoughts on the Nature of “Tragedy”

Stock photo of the Titanic wreckage

I avoided writing about the OceanGate Titan, the submersible that vanished while gawking at the wreck of the Titanic, as long as there was hope of rescue. Since it disappeared on Sunday, social media has been flooded with dollar-store schadenfreude mocking the passengers’ entitled hubris in treating the remote, and still poorly explored, wreck as a tourist attraction. But at this writing, oxygen reserves have run out, the vessel is probably lost, and the tone has shifted.

By Thursday morning (really Wednesday evening), reactions bifurcated. Some observers, perhaps motivated by the parallels between this event and the original Titanic sinking, began describing it as a tragedy. Behind them, a rising tide of dissidents reminded audiences that an overloaded migrant vessel had sunk the previous week, potentially dragging over 600 impoverished refugees, sailing from Libya, to the deepest part of the Mediterranean. Why, dissenters asked, isn’t that the real tragedy, rather than the submarine full of CEOs?

This debate reflects not only the priorities driving the 24-hour news cycle, but the way words drift over time. Mass media slings the word “tragedy” around so heedlessly that it’s come unmoored from its Greek roots. Aristotle defined tragedy as a theatrical form in which the protagonist is destroyed, not by bad luck or circumstance, but by the consequences of his (and indeed usually “his”) own actions. Then the horrified audience feels pity for him.

For Aristotle, the defining tragedy was Sophocles’ Oedipus Rex. You probably know the broad outlines, even if you haven’t read it or seen it performed. Oedipus, king of Thebes, promises to root out the cause of the curse plaguing his city. The prophet Tiresias promises Oedipus he won’t like the answer, but Oedipus persists. Following a detective-like investigation, he discovers that he caused the curse himself, and he’s already living out the morally horrific consequences.

Many modern writers forget tragedy’s most important point: Oedipus brought these consequences upon himself. On at least four occasions, the play’s suffering and bloodshed could’ve been avoided if Oedipus or his forebears had listened to advice. In this regard, the OceanGate Titan catastrophe is indeed tragic. Deliberate disregard for safety protocols, and the belief that money insulates rich people from calamity, led three CEOs, a teenager, and a technical crew to their almost-certain watery graves.

PR photo of the OceanGate Titan submersible

And the narrative induces horror and pity. If the passengers and crew weren’t killed instantly, the conditions of their now-likely deaths sound horrific: trapped in a claustrophobic submersible, undoubtedly wearing urine-soaked clothes, slowly suffocating. This disaster has robbed them not only of their lives, but also of their dignity, and even of a marked grave. Like Oedipus, they arguably deserve their fate for rubbernecking at a mass grave, but their deaths are still pretty piteous.

Edit: after I wrote this essay, the U.S. Coast Guard announced they had identified the remains of the OceanGate Titan. It appears the submersible suffered a rapid structural implosion, and the passengers were, indeed, killed instantly.

Okay, but if the OceanGate Titan is a legitimate tragedy, doesn’t that describe the Libyan refugee ship? The hundreds of deaths are both horrific and piteous. Yet I’d contend they aren’t tragic, for one reason: there wasn’t much anybody aboard that vessel could’ve done. Their choices were limited to remaining in Libya, which has been anarchic and violent since the Obama Administration’s reckless intervention in the overthrow of Moammar Gadhaffi, or risking death at sea.

Oedipus, King Lear, and Jay Gatsby are tragic heroes, not only because they died horrifically, but because they’re responsible for their own deaths. Any of them could have, at any time, stopped events from happening. They perhaps didn’t realize the agency they possessed, but each one made choices which led directly to their own downfalls. If the captain of the refugee vessel misled refugees onto his boat, causing it to sink, that would be tragedy.

Aristotle believed that only kings, generals, and potentates had enough power to be responsible for their own deaths. I disagree. I’ve recently become a fan of horror fiction, and novels like Ania Ahlborn’s Brother and Catriona Ward’s The Last House on Needless Street probably count as contemporary Aristotelian tragedies. In both books, at least one character could’ve prevented catastrophe by asking questions, listening to advice, or not going with the flow like a dead fish.

So yes, on balance, I’d say the OceanGate Titan catastrophe is a tragedy. It meets Aristotle’s standards not only of narrative structure, but of audience reaction. We are, indeed, suitably horrified and pitying. Only one question remains: will we learn anything? Will we respect safety standards, shake the illusion that money deflects consequences, and accept that the dead aren’t for gawking at? Only time will tell. At least we can start reclaiming the definition of a “tragedy.”

Wednesday, March 8, 2023

Free Will and Determinism in Two Recent Horror Novels

This essay addresses the novels Brother by Ania Ahlborn (reviewed here) and The Last House on Needless Street by Catriona Ward (reviewed here). Be warned, this essay contains spoilers.
Ania Ahlborn, author of Brother

“For you, nothin’ matters,” Rebel Morrow bellows at his brother Michael, as we approach the culmination of Ania Ahlborn’s sixth novel, Brother. “You gotta have free will or some guts for shit to matter, and you don’t got neither.”

This is the closest Ahlborn comes to an out-and-out thesis statement for her novel. Michael tacitly acknowledges that, though he possesses free will, he’s never had gumption enough to use it. He’s always kept his head down to avoid Rebel’s unpredictable violent outbursts. This tactic isn’t unreasonable: around the book’s midpoint, Rebel drags Michael into a trackless forest and makes Michael dig his own grave, before suddenly, causelessly relenting.

Ahlborn’s novel shares thematic overlaps with Catriona Ward’s The Last House on Needless Street. Both novels address themes of inherited culpability, and the ways adults process the traumas of their childhood families. Both novels feature a character identified as “Momma” or “Mommy” who is, at once, terribly violent and remarkably absent from most of the narrative. Both show you can’t muscle through such traumas alone.

Michael Morrow survives tremendous abuse at his family’s hands. They not only whip him mercilessly for any transgression, real or imagined, they also force him to watch the similar punishments inflicted on his sister, Misty Dawn. The latter is arguably more traumatic. After all, brave persons can stoically endure abuses poured onto their own bodies, but watching abuse poured onto others leaves people feeling helpless and despairing. Ask any child abuse survivor.

Constant abuse has left Michael conditioned to appease his abusers to survive. Again, Ahlborn addresses this, but doesn’t unpack it. For our purposes it matters that, as the novel commences, Michael is now nineteen years old: old enough, that is, to be legally culpable for his actions, and his inactions. This requires us to wonder: what calendar date should’ve magically granted Michael maturity enough to resist his mistreatment?

Similarly, Ward’s protagonist, Ted Bannerman, survived something at his mother’s hands, though the novel mostly tiptoes around what. Unlike Ahlborn, who depicts Michael’s ongoing trauma in direct, brutal terms, Ward files Ted’s abuses in the sheltered past, where Ted can carefully avoid addressing them. Ted compartmentalizes his entire life to protect himself from understanding what Mommy did, and continues doing so, although Mommy’s been absent for years.

Catriona Ward, author of
The Last House on Needless Street

Ted creates an elaborate alternative narrative whereby he retreats into a world uncluttered by violence, pressure, or work. This alternative becomes so elaborate that it adopts its own personality, and ultimately displaces Ted altogether, at least periodically. We learn in the final chapters that Ted has invented a repertoire of alternate personalities, each equipped to handle different aspects of trauma. Having created them, though, he can no longer control them.

Michael’s constant mindfulness of every possible transgression and Ted’s retreat into alternative reality represent opposite responses to family abuse. Michael must remain permanently conscious of the traumas he endures, watchful to ensure he doesn’t do something that invites punishment. In the novel’s climax, however, we discover that Michael’s consciousness is finite, and Rebel has been hand-waving Michael’s (and our) attention from what really matters.

Therefore, Ted has arguably accepted something Michael cannot: that there’s nothing either can do to avoid this violence. Trauma is inevitable; our only reasonable response is how we handle it. Okay, so Ted’s approach isn’t necessarily productive, and possibly results in others getting hurt or killed. It certainly results in Ted living a diminished life. But it also results in Ted surviving, which is the important part for the child receiving the abuse.

The debate between free will and determinism, once a theological argument about humankind’s relationship with God, has become a secular psychological issue. Prominent public atheists like Sam Harris and John Gray contend that humans are driven entirely by deterministic systems which we can understand as essentially mechanical. They contrast this with a poorly defined “free will,” understood in terms of religious beliefs in the human soul.

Ania Ahlborn and Catriona Ward, neither of whom show any particular religious inclinations in their novels, temper these two extremes. In different ways, their novels indicate humans have sufficient liberty to make choices, but can’t un-make them. Equally important, Michael and Ted make their choices as children, too young to be informed or responsible, then live with them as adults, when the choices are maladaptive but ingrained.

Free will, these novelists imply, definitely exists. But we give it up, usually because we have to. Then we live with the mechanical consequences of that surrender.

Wednesday, November 2, 2022

Why Doesn’t Tarzan Have a Beard?

Johnny Weissmuller as Tarzan

Somebody presented this to me as a head-scratcher recently: why is Tarzan, who lives in the jungle and has never encountered a razor, clean-shaven? In saying “Tarzan,” of course, the asker meant Johnny Weissmuller, the gold medal-winning Olympic swimmer who played Tarzan in twelve feature films from 1932 to 1948. But seriously, the same applies to Buster Crabbe, Gordon Scott, and Alexander Skarsgård: Tarzan is portrayed without facial or body hair.

Weissmuller’s Tarzan remains the character’s iconic depiction, with the curved muscles and sleek skin of somebody who trained his body to resist water drag. But checking photos, I realize Weissmuller wasn’t just clean-shaven. His hair is also neatly barbered, slicked back in the “RKO Pictures Means Business” style that might, maybe, have reflected jungle sweat, but is clearly Brylcreem. Sure, apes groom one another, but it doesn’t look like that!

My immediate response was: same reason Elizabeth Taylor as Cleopatra has a fashionable bob. Because these movies are never really about what they’re about; they’re about the people who make and watch them. This stock answer could apply to countless historical or mythological epics. Cinematic depictions of Hercules, Abraham Lincoln, Gandalf, or King Richard III always say more about us than about the characters.

Thinking about that answer, however, I’ve become increasingly dissatisfied with it. Weissmuller’s moderately muscled, glossy Tarzan isn’t a statement about the people who make or consume those movies, any more than the more absurdly muscled depictions of Hugh Jackman as Wolverine or Chris Hemsworth as Thor really reflect us. These characters aren’t who we, the audience, are; they’re lectures about who we, the audience, should be.

Abandoned from infancy, Tarzan grows to adulthood in an Edenic jungle politely untainted by ordinary old Black Africans. He innately understands Euro-American standards of personal grooming, fitness, and hygiene, which travel hand-in-glove with his instinctive ability to fashion tools and shelter. His ability to command animals is interesting, but incidental. His real accomplishment is bending the “wilderness” to suit his distinctly industrial-era demands.

Elizabeth Taylor as Cleopatra

Tarzan is the perfect colonial agent. He shapes nature to his expectations, but he also, on first encountering White people, recognizes their superiority, and longs for assimilation. Sure, in the movies he always returns to Africa, because if he ever permanently leaves, the franchise ends and RKO loses money. But once there, he consistently aids White imperialists and never once sullies himself with boring old Africans.

(I know, the movie with Alexander Skarsgård attempted to subvert this and make Tarzan more inclusive. That movie also tanked. All the perfumes of Arabia can’t wash the stink of colonialism off the franchise.)

Taylor’s Cleopatra tells a very different story. Released as the post-WWII generation hit adulthood, with the industrial excesses and pop-culture liberation that 1963 entailed, Cleopatra was no less a moralistic lecture. Surrounded by riches, adoration, and power, Cleopatra represented postwar American splendor. But she also represented deep distrust of powerful women. The movie repeatedly moralizes about how destructive imperial power becomes in feminine hands.

In 1963, women like Wanda Jackson, Lesley Gore, and even Elizabeth Taylor herself stopped accepting men’s shit. They demanded autonomy, which they weren’t always willing to state as explicitly sexual, though Taylor, at 31, was already on her fourth marriage. Meanwhile, Cleopatra hit cinemas the same year Betty Friedan’s The Feminine Mystique dropped, pushing second-wave feminism into America’s mainstream. This wasn’t coincidental.

Cleopatra is presented as commanding, imperial, regal, but also doomed. The movie depicts her openly consuming male adoration, setting her own sexual terms, and demanding recognition. But we, the audience, know she’s already doomed. She’s going to embrace the wrong war, backed by the wrong allies, and will eventually choose suicide to avoid the ignominy of capture. We already know this, and implicitly, so does she.

Both movies arise from cultural contexts. Tarzan appeared, first in Burroughs’ short novels, then onscreen, as European empires in Africa and India were disintegrating, but America was establishing colonies in the Asian Pacific. Tarzan’s African jungle was transferable to American soldiers in the Philippine rainforests. Cleopatra subsequently emerged as women began challenging a male-dominated social order.

So no, I realize, these characters don’t really reflect us. Rather, they establish moralistic models for how we should or shouldn’t behave. Tarzan bespeaks the values of White empire, while Cleopatra warns about the perils of female ambition. Both characters serve a White male power hierarchy. One buys in, and is rewarded; the other rebels, and is punished. They aren’t us; they’re who Hollywood’s elite wants us to be.

Monday, May 16, 2022

The Bible as Literature vs. The Gospel as History

John Dominic Crossan, Render unto Caesar: The Struggle over Christ and Culture in the New Testament

How should Christians interact with politics and power? This isn’t a new question; Jesus’ own followers and opponents asked it during his lifetime. But it’s especially pressing today, as Christian faith and Biblical language get used both to support and to oppose American political power, the so-called Nova Roma. New Testament scholar John Dominic Crossan has some ideas, apparently designed to make everyone, secular and religious alike, as uncomfortable as possible.

Crossan, a Biblical historian and laicized Catholic priest, is just one among several scholars currently active who have dedicated their careers to pursuit of the phantom dubbed “the historical Jesus.” I find Crossan preferable to most because he avoids the extremes of, on one hand, academic obscurantism, or on the other, populist luridness. He uses difficult scholarly language, but always explains terminology, and propounds ideas at length, but never windily.

That doesn’t mean he’s a middlebrow popularizer. Crossan left the clergy because he became persuaded that Scripture wasn’t inerrant, that extrabiblical sources were more trustworthy than the Gospels, and that Jesus’ divinity wasn’t literal. This puts him in the awkward position of being no longer Christian, in the conventional sense, but not secular or irreligious either. He has, by all accounts, reveled in this liminal duality, guaranteed to discomfort everybody.

This volume is, I’ll admit, one of Crossan’s less accessible books. His title is somewhat misleading; he addresses the “Render Unto Caesar” account, common to the synoptic Gospels, early on, then largely abandons it, more interested in a global rather than particular inquiry into First-Century political theology. Instead, he addresses, in sweeping terms, the question of how Christians could, and did, accommodate the pressures of assimilation with Roman authority.

Crossan’s first part addresses the Revelation of John. Though Crossan’s career has largely avoided apocalyptic themes, he presumably couldn’t avoid this, because the Revelation’s explicitly political, anti-Roman sentiments have been used recently by end-times theologians and political tub-thumpers. The Revelation’s violent anti-statist sentiments have often given more moderate Christians the willies, for reasons Crossan explains succinctly. Its triumphalism often contrasts with Gospel themes of nonviolence.

John Dominic Crossan

Then Crossan transitions into a comprehensive look into political themes in the Luke-Acts duology. Here, I admit, I nearly stopped reading. Crossan does something I haven’t previously seen in his mass-market books: he chases a rabbit trail. Crossan spends entire chapters, plural, determinedly proving that Luke the Evangelist (who, he believes, isn’t Luke) wrote Luke-Acts as a single integrated work in two volumes, not two separate narratives, as sometimes postulated.

This intricate, granular analysis proves ultimately relevant to Crossan’s point, but only at some length. He presents the duology as literature, not history (or pseudo-history), and seeks the “main character” rather than defining themes. Though the analysis eventually pays off, it does so in ways most readers won’t find relevant to spiritual inquiry or personal growth. This passage is scholarly and abstruse, in ways Crossan’s mass-market work usually avoids.

Finally, Crossan breaks down Jesus as he appears in the Jewish-Roman historian Josephus. This part most accurately reflects themes found throughout Crossan’s corpus, that external sources are more trustworthy than canonical Scriptures, because they aren’t colored by Christological purpose. This makes sense from Crossan’s “historical Jesus” premise, which seeks philosophical rectitude without recourse to divine revelation. Yet I find this passage least satisfying, because it seems entirely self-directed.

By Crossan’s own admission, Josephus mentions Jesus barely ten times, John the Baptist only once, and early Christian missions never. Crossan supplements this paucity with references from other contemporary sources, like Philo of Alexandria. The ultimate outcome, though, is a Jesus grounded in history, but devoid of message, a complete historical cypher. This Jesus is arguably better because he makes no demands, and receives, rather than distributes, a moral core.

Don’t misunderstand me: Crossan isn’t wrong, in the factual sense. When he spotlights the contradiction between Revelation’s violent, standoffish politics, and Luke-Acts’ more accommodationist stance, he acknowledges a friction many devout Christians have wrestled with. Crossan is palpably uncomfortable with the Gospels’ laissez-faire attitude toward historicity. He wants a factually ironclad Gospel, free from contradiction. Let’s be honest, so do seven-year-olds in Sunday School.

I think Christians probably should read this book. I believe in balanced libraries, not balanced books; though Crossan’s analysis will disappoint believers and skeptics alike, some spiritual frustration is often beneficial. Just realize going in that Crossan’s analysis reflects John Dominic Crossan, not his elusive “historical Jesus.” Crossan looks for answers, but astute readers will emerge with the real goal of learning, that is, better questions.

Tuesday, December 21, 2021

Rudolph the Dog-Earred Stereotype (Part Two)

Santa and Rudolph in Videocraft International's 1964 stop-motion Christmas special

Three years ago today, I wrote an essay entitled Rudolph the Dog-Earred Stereotype, claiming that the Christmas classic “Rudolph the Red-Nosed Reindeer” was a call to action against Hitler. It went largely unnoticed at the time. Last week, I linked that essay on a popular meme-sharing group on Facebook, and it exploded. In under eighteen hours, it received three times as many hits as it received in the previous three years.

It also received several comments, many of them hostile. Nothing energizes Netizens more than having their opinions challenged, and apparently that’s what I did. The objections fall into three basic categories, which I will address in no particular order. Here goes.

• “Santa isn't enabling ablism, he's enabling antisemitism isn't the defense I think it was meant to be”

I never said Santa was enabling antisemitism, I said America in 1939 was enabling antisemitism, and Robert L. May, Rudolph’s creator, was calling on America to take a stand. Moreover, America definitely enabled antisemitism; the same year May wrote the first Rudolph coloring book on contract for Montgomery Ward, America turned away a ship full of German Jewish refugees, most of whom eventually wound up in Germany’s Holocaust camps.

It bears emphasizing that bigotry seldom begins at ground level. Scholars of race, history, and law, like Ibram X. Kendi and Michelle Alexander, have demonstrated that bigotry doesn’t cause discriminatory policies, but rather, discriminatory policies cause bigotry. If “all of the other reindeer” were mocking and excluding Rudolph, it was because Santa, atop the North Pole power pyramid, created that environment. Thus it was on Santa to change it.

If we audiences limit ourselves to reading art on the surface, we always miss the meaning. Yes, superficially, Santa encouraged bigotry against Rudolph. But Santa isn’t just Santa, he stands for powerful people everywhere: heads of state, religious leaders, media figures deciding which stories deserve reporting. Storyteller Robert L. May wanted these powerful people to change their harmful policies, hopefully before that catastrophic “foggy Christmas Eve.”

• “This is fuckin [sic] dumb. Why not write a book for adults who might actually get the point (if that was actually the case)”

I dunno, man, why do any authors ever write about important topics for children? Why did Madeleine L’Engle’s A Wrinkle In Time address spiritual malaise in a technological age? Why did Lloyd Alexander’s The Chronicles of Prydain deal with World War II issues in a medieval Welsh setting? Why did Suzanne Collins deal with economic inequality, climate change, and resistance to unjust authority in The Hunger Games?

Perhaps because children, from an early age, demonstrate a strong sense of morality and fairness. If we’ve learned nothing from the last two years, we surely agree that adults perform elaborate moral contortions in order to defend selfish interests while still thinking of themselves as good people. We’ve watched grown-ups use religion, pseudoscience, and a truncated form of history to justify racism, closed-door nationalism, and spreading a plague unchecked.

Children, historically, have used their innate sense of justice to goad adults into right actions. As Elizabeth Hinton writes, many acts of resistance to institutional racism during the peak Civil Rights Movement began in high schools and colleges. Media pundits and defenders of the status quo still use the stereotypical college liberal as their catch-all demon, because youth have strong moral codes, undistorted by economic pressures and adult cynicism.

This doesn’t mean children always know, or do, the right thing. On the small scale, grade-school students already show hostility to people groups whom adults around them shun or belittle. On the large scale, historian Jill Lepore records how some late hippie-era college activism crossed the line into light-beer Stalinism. That’s why children should read literature with a strong moral backbone, to help steer and modulate their instinct toward fairness.

• “tl;dr”

That’s on you, dude. Most of my blog essays run 750 words, which anyone with a healthy attention span should be able to read in about three minutes. I keep things brief because screen reading attenuates people’s willingness to stick with long or detailed content. Even so, as the above arguments indicate, I apparently didn’t explain important points in enough detail; believe me, I could go much, much longer than I usually do.

In conclusion, I don’t expect my essays to change people’s deeply entrenched beliefs. Readers will often respond to challenged beliefs much like they’d respond to someone attempting to chop off their arm. But I do expect mature audiences to read what I said, not what they wish I said.

Saturday, July 10, 2021

Some Thoughts on the “Cat Person” Controversy


In September of 1989, an episode of sitcom Family Matters featured Rachel, a young author, selling a manuscript based on her brother’s family. Her brother’s family recognize themselves in the story, and are outraged by the negative depictions. They perceive Rachel as highlighting their worst characteristics for a hasty sale. Rachel spends the episode explaining how fictioneering works, and gradually making peace with her brother’s extended family.

For context, this was Season 1, Episode 3, so something the writers’ room presented early. Presumably they wanted to remind their own families that, though they used personal experiences as story foundations, they didn’t mean anything literally. Something relevant must’ve happened in Hollywood around that time, because six months later, a Golden Girls episode featured Blanche spitting outrage when she saw herself depicted in her sister’s Jacqueline Susann-like sexploitation novel.

Kristen Roupenian’s “Cat Person,” a 2017 New Yorker story about a dysfunctional May-late August relationship, regained currency this week. Alexis Nowicki, a Brooklyn book publicist, published a Slate article claiming the story appropriated significant portions of her life, including her college relationship with a much older graduate student. This despite Nowicki, by her own admission, never having met Roupenian. The Twitter commentariat started circling like buzzards, as it does.

The occasional bomb-thrower claims Roupenian transgressed somehow by using somebody else’s story. I even saw the word “plagiarism” misapplied. But these were outliers, the people Mick Hume calls “full-time professional offense takers.” The largest fraction of audience responses involved wisecracks about the appropriateness of Nowicki’s response. Since when, these respondents (many of them writers) asked, is it wrong or inflammatory for writers to fictionalize other people’s stories?

My favorite responses to Nowicki’s article ask: what responsibility do writers have in fictionalizing stories that aren’t theirs? We all, as writers and readers alike, understand humanity through other people, and therefore process difficult or morally ambiguous situations through stories. When our personal experiences don’t provide the necessary narratives, we turn to others’ stories. We all do it, but what moral onus does it place upon us?

Ernest Hemingway

In high school, my teachers frequently treated fiction and poetry like crossword puzzle clues. American Literature classes were particularly guilty of this. Authors, especially Hemingway and Fitzgerald, were treated like cyphers from the past. Like Ralphie in A Christmas Story, we needed to apply our decoder rings to unlock the concealed meaning, which was always one-to-one. Literature wasn’t presented as art to mull over and live with, but a message to understand.

Perhaps you’ve heard that wheezy advice frequently given to young authors: “write what you know.” We assume that authors only create work by fictionalizing their personal experience, then we reverse-engineer such experience onto others’ work. I’ve always distrusted this advice, since it implies you can’t understand anybody’s experiences except your own. After all, Stephen Crane didn’t need to fight in the Civil War to write The Red Badge of Courage.

According to Nowicki, Roupenian admitted writing her story in an MFA workshop, and sending it into the New Yorker slush pile, where literary fiction usually goes to die. For whatever reason, it struck a chord, and the editors bought it, propelling a young graduate student into the ranks of luminaries like David Sedaris and Jamaica Kincaid. Then, because Roupenian addressed emotionally loaded topics, her story quickly became a lightning rod for cheap online outrage.

I’m reminded of 2016, when online critics, including friends of mine, willfully misread a Calvin Trillin ditty. Or January, 2020, when firebrands responded to an Isabel Fall story, without apparently reading past the title, with such ferocity that the author needed psychiatric treatment. In each case, audiences “decoded” literal, singular messages from these works, like reading Hemingway in 11th-grade AmLit, giving the works one, singular, invariable meaning.

Worse, like Rachel’s family or Blanche, these audiences seek the most negative, adversarial meaning. They project their insecurities onto, not the work, but the author, attributing malicious intent rather than the desire to write compelling fiction. The Family Matters and Golden Girls writers’ rooms shared this premise, presumably because their own families thought they’d recycled family drama. Which perhaps they did, but not malevolently.

Alexis Nowicki, like those TV families, wanted everyone to know her story wasn’t as toxic as that depicted in fiction. She recalls her relationship with “Charles” fondly, but distantly, like many college relationships. But like those families, Nowicki’s self-defense imputes authorial intent onto Kristen Roupenian. She sees Roupenian’s writing not as art, but as journalism. In doing so, she diminishes art’s psychological impact, and makes other minds less real.

Wednesday, October 14, 2020

What Does “Law” Mean to the Powerless?

Richard Delgado and Jean Stefancic, Critical Race Theory: an Introduction, Third Edition

You probably discovered Critical Race Theory recently, like I did, through the news. For some reason, in the late summer of 2020, right-wing media pundits rapidly coalesced around Critical Race Theory as a major force undermining stable American government, threatening to throw society into hasty disarray. As often happens with media-driven moral panics, the great spokespeople never defined the monster they inveighed against. They just screamed bloody murder until their complaints seemed mainstream and commonplace.

University of Alabama law professors Richard Delgado and Jean Stefancic wrote the first edition of their introductory textbook during the Clinton Administration, when CRT (as they abbreviate it) was emerging from the scholarly circles which created it. The third edition dropped as the Obama years transitioned into Donald Trump, which presumably isn’t coincidental. But calling it a “textbook” makes it seem falsely imposing and threatening: it’s little more than a pocket-sized pamphlet with discussion questions.

“Critical theory” carries the whiff of philosophy and important literature. But CRT originated in law schools in the 1970s, and by the 1980s it had spread rapidly throughout the social sciences. It dealt specifically with the interaction between law, as an ideal of social order, and the ways ordinary Americans lived their lives. (Its origin was specifically American, though it eventually spread internationally.) That interaction, CRT’s proponents insist, is often harsh, lopsided, and laden with friction.

Delgado and Stefancic trace CRT’s origins to Derrick Bell, the Harvard Law School professor who influenced both Barack Obama and Ian Haney López. Bell asked pointed and timely questions about how the law, an abstract and theoretical entity, interacted with non-White citizens at ground level. Then, having asked such questions, he began proposing answers. His solutions often remain controversial, partly because they involved forms of direct action that had fallen into disfavor by the 1980s.

However, anybody who’s ever studied critical theory knows that no theory is ever unitary and self-contained. Just as Professor Bell proposed important interpretations of law, others, including Haney López, and even Delgado himself, identified other interpretations, valuable inconsistencies, and questions beneath questions. Therefore, this text doesn’t present a list of closed debates and clearly defined right answers. It describes the controversies which define CRT, and the interaction between race issues and the law in general.

Our authors give very brief summaries of what they call CRT’s “hallmark themes,” which largely orbit two questions: how ingrained is racism in American law, and what policy-based remedies should America undertake? In later chapters, they delve into issues like how personal empathy, however well-meaning, doesn’t address underlying structural issues, and how the narratives which minority citizens provide can steer policy debate. They also ask how to address the partisan backlash CRT has received.

Richard Delgado, left, and Jean Stefancic: official University of Alabama photos

Right-wing opposition to CRT probably derives from two points which Delgado and Stefancic explore. First, CRT assumes that racism is fundamentally constructed into American social structure, and therefore cannot be expunged by persuading individuals to not be bigoted. In other words, racism isn’t an individual behavior or character failing; it’s written instrumentally into American law. Fixing American racism will mean major structural alteration to American law, and exactly which structures need altering isn’t necessarily clear.

Second, CRT is fundamentally deterministic. Yes, our authors use the word “determinism” frequently, a term pinched from philosophy, though they cite legal scholarship to justify it. That is, our authors believe, as CRT does, that human will is circumscribed, and our choices are determined by economics, social pressures, and other external forces. Therefore, people cannot simply choose to obey the law, especially when the law opposes the external forces which determine their scarce available choices.

Again, the authors intend this book for classroom application, though it’s written in non-specialist English and reasonably priced. Therefore, they include discussion topics and classroom exercises. Unlike the main text, the classroom exercises seem more pointedly partisan, written to steer participants toward a predetermined outcome. As a sometime teacher, I feel squeamish about these exercises, and would probably write my own. However, they nevertheless offer a useful jumping-off point for self-scrutiny and personal study.

CRT isn’t only one subject, our authors remind us. It’s a rubric for historical questions from the latter Civil Rights era, and a guidepost into important burgeoning questions about the differing needs of different races, for instance, or where LGBTQIA+ concerns overlap race. As the title implies, this brief volume only introduces a more complicated topic. However, for anyone interested in the friction between power and people, it provides a compelling synopsis for future studies.

Friday, October 9, 2020

Will the Real Arthur Dent Please Stand Up?

Simon Jones as Arthur Dent in 1981

I recently re-watched the 1981 television adaptation of The Hitchhiker’s Guide to the Galaxy for the first time since VHS days, and something struck me about Arthur Dent. Through the years, I’ve read him described as the story’s protagonist, antihero, viewpoint character, or voice, but importantly, never as the hero. Watching it again, it seemed glaring what a wet rag Arthur Dent is. One wonders how such a gormless character can have such lasting appeal.

Simon Jones played Arthur Dent on TV, and also radio, audiobooks, and occasionally onstage. Writer Douglas Adams reputedly wrote Arthur specifically for Jones, with whom he performed in the Cambridge Footlights; and Jones slipped into the role, off and on, for twenty-five years, from 1978 to 2003. Only thirty-one when he played Arthur on TV, Jones is tall and good-looking, with broad shoulders and lustrous hair; yet he makes Arthur look childlike, a black hole of charisma.

Contrast Martin Freeman, who played Arthur in Garth Jennings’ 2005 HHGttG movie. Freeman is almost studiously average: in height, looks, body type. Freeman’s Arthur is a mildly altered version of Tim Canterbury, the role from The Office that made Freeman famous: amiable and kind, but largely forgettable. Yet, like Freeman’s other major big-screen role, Bilbo Baggins, this Arthur grows into his newfound role. He starts relishing confrontations with aliens, and eventually embraces the hitchhiking life.

Perhaps these different Arthurs reflect their creator, Douglas Adams, a man who struggled with identity throughout his career. In the introduction printed in most omnibus editions of his HHGttG novels, Adams describes conceiving the story’s first kernel while hitchhiking aimlessly around Europe after college, a nominal adult nevertheless lacking direction, both literally and figuratively. The story only took form, though, during a period of prolonged pessimism, when he wrote it specifically to destroy planet Earth.

Adams’ original Arthur reflects both Adams’ frequently purposeless life, and Britain a generation after World War II. Witnessing its global empire enduring its death throes, Britain produced new literary heroes: venal monsters fighting international tyranny, like John le Carré’s George Smiley, or pathetic, nebulous, frequently comedic entities like Kingsley Amis’ Lucky Jim. Clearly, Arthur Dent falls into the latter category: his individual wandering and lack of direction reflects the United Kingdom’s national sense of futility.

Martin Freeman as Arthur Dent in 2005
For Arthur, in the original radio and TV series, this directionlessness manifests as frequent whining. “I seem to be having tremendous difficulty with my life-style,” Jones’ Arthur mewls, almost directly into the camera. Presumably, Adams’ first-generation British audience would’ve understood why that complaint mattered—and also why, when that complaint passed into a wormhole, it would’ve offended two warlike races preparing for battle. Because such a whimper certainly would have offended Brits of Churchill’s generation.

Notably, that whine doesn’t appear in the 2005 movie. Though the movie dropped four years after Adams’ passing, Adams was involved in the movie’s production for twenty years, and the final production used a lightly doctored version of Adams’ own script. (The movie’s development hell caused Adams’ lack of productivity through the 1990s.) Presumably Adams himself excised that moment from the story. It reflected a moment long past, both for Britain generally, and Adams personally.

My theory is: Adams himself changed. Adams cranked out two radio series of HHGttG and four novels in quick succession, from 1978 to 1984, then went suspiciously quiet. Notoriously bad at deadlines, Adams only produced the fifth novel eight years later, in 1992. That novel, Mostly Harmless, is markedly different from the previous four. Largely dry, often angry, and lacking Adams’ trademark wordplay, it features Arthur living mostly in one place, his hitchhiking days over.

It also features Arthur raising the teenage daughter he never knew he had. Adams married in 1991, and though his only daughter wasn’t born until 1994, surely he understood fatherhood was a possibility. Forced to take stock, Adams probably decided that wandering through life, lacking purpose and goals, wasn’t acceptable anymore. A lifelong atheist, Adams couldn’t derive purpose from transcendence; therefore he had to manufacture it internally. Aimless wandering became a journey towards a destination.

That, I believe, conditions these two different Arthurs. Simon Jones’ Arthur reflects a new Britain, born of wartime conditions, trapped in protracted adolescence. Martin Freeman’s Arthur reflects Douglas Adams specifically, a man who got away with living like a teenager well into his forties, suddenly accepting adulthood. Had Adams lived, one wonders how the character would’ve continued evolving: Adams would be pushing seventy now. We, his audience, have the opportunity to continue evolving with him.

Friday, March 27, 2020

The Female Monster In Your Closet

Sady Doyle, Dead Blondes and Bad Mothers: Monstrosity, Patriarchy, and the Fear of Female Power

The monster under your bed exists specifically because you expect it to. Let’s start with that strange, paradoxical belief. But what fuels your expectations? Are your fears uniquely your own, or are they conditioned by social forces you haven’t even consciously processed yet? The answer to that last question is painfully obvious, especially if you’re already reading literary criticism. But what forces condition our private monsters?

Journalist and pop-culture critic Sady Doyle has written extensively about the ways society treats women who step outside their social roles. Her second full-length book deals specifically with how women shape, and are shaped by, images of horror in pop culture. This begins with literal horror, like the feminine monstrosity in The Exorcist, but extends beyond, into how we discuss crime and violence, the popularity of “true crime,” and how we talk to women.

To speak of feminine monstrosities, Doyle divides femininity into three overarching roles: the Daughter, Wife, and Mother. Each role represents a female archetype that women (or girls) should honor, and when they don’t, we perceive them as monstrous. The little girl transitioning to womanhood, the rebellious and willful wife, or the negligent mother, all permeate our culture’s nightmares. But why, and what does that say about us?

Common criticisms of sexualized violence in slasher films, to give one example, notice that teenage girls are doomed the moment they acknowledge their sexual nature. To kiss or, worse, to actually have sex, becomes a death sentence. But Doyle also acknowledges a contradiction: before the mid-1990s, the major slasher audience was teenage boys. Now, it’s mostly teenage girls. What changed, in our culture’s fears, to popularize slashers this way?

Doyle uses the word “patriarchy” generously. Veteran lit-crit readers know this word has loaded implications, depending on who uses it and how. Doyle means a society organized according to male, hetero-normative standards, which slots people into social roles according to their assigned-at-birth genders. Our society reproduces roles which people have to either accept or resist, and for women, that means literally reproducing roles.

That titular “Dead Blonde” can, depending on how stories handle her, represent male fears of women exceeding their roles, as when young girls transition into women. Or it may represent women’s fears of patriarchal violence: consider Drew Barrymore in the first Scream movie, literally trapped inside her house, knowing that stepping outside the kitchen means death. Thus the same image can reinforce, or ironically resist, patriarchal culture. What matters is violence.

Sady Doyle
But the “Bad Mother” has proven more obstinately resistant to ironic subversion. From Norma Bates and Pamela Voorhees, to Jordan Peterson and the language of online discourse, patriarchal culture persists in blaming women, particularly mothers, for men’s violent behaviors. Doyle keeps circling back to Augusta Gein, whose bad mothering, always somehow considered in isolation, supposedly motivated her murderous son Ed.

In fairness, though Doyle’s arguments create an eye-opening context in which to read patriarchy’s fear of women, she sometimes seems unaware of her own lens. Reading slasher films as a totalized phenomenon, for instance, overlooks particular examples. Freddy Krueger is often as hostile to men as women, especially closeted men. The Exorcist subliminally condemned puberty, but The Exorcist III (the only good sequel) condemned male religiosity.

Careful readers, going through Doyle’s exegesis, can find examples where she overlooks the obverse of her position. If patriarchy forces women into artificial, constraining roles, then aren’t the roles reserved for men equally constrictive? What about intersectional roles? White stay-at-home mothers are lionized, while paid childcare workers, Doyle admits, are despised, and often given sub-poverty wages. How, then, does class influence gender roles?

I say this, knowing that all writers make choices about what to include. At nearly 250 pages plus extensive back matter, this book is about as long as most literary criticism written for general audiences these days. Anything longer would’ve made imposing reading, especially on a topic as difficult and morally taxing as horror and violence. Doyle must have known how much she necessarily omitted to create a book audiences would actually read.

Therefore, accepting that it looks at one side of a multifaceted puzzle, this book makes a worthwhile prolegomenon to further examination of how our society uses tacit violence to reproduce social roles. When the monster forces us back into the kitchen, or back into the closet, it doesn’t do so in morally neutral ways. Public art can display our public aspirations, but it also showcases our public fears. Far too often, our most visible monsters are female.