Saturday, December 30, 2023

Cowboys and the Power of Storytelling

From left: Jaimz Woolvett, Morgan Freeman, and Clint Eastwood in Unforgiven

I saw Clint Eastwood’s Unforgiven in the cinema when it was first released in 1992, and loved it. Then I didn’t watch it again for thirty years, until this week. Given my family’s conservatism, I grew up surrounded by Westerns, especially John Wayne and James Arness, but my parents specifically exempted Clint Eastwood. My mother disparaged Unforgiven as, in her view, a return to his youthful form of violence for its own sake.

Then as now, I felt Mom misunderstood what happened. When it comes to dispensing actual violence, Morgan Freeman’s character, Ned Logan, can’t stomach it; he’s grown a conscience in his old age. Eastwood’s William Munny feels every human emotion while sober, and can only become the killer he once was after numbing himself with alcohol. Their ally, the self-proclaimed Schofield Kid, likewise feels every inch of pain.

Rewatching Unforgiven in adulthood, however, I noticed another theme which teenaged Kevin missed. The characters spend remarkable swathes of time telling one another stories. From the moment the Schofield Kid enters Munny’s homestead, he demands to know which among the many legends of Munny’s violent exploits are true—legends which the Kid repeats giddily. The Kid enjoys stories he’s heard, and wants to become a story himself.

Almost simultaneously, the contract enforcer “English Bob” enters Big Whiskey, Wyoming, accompanied by his official biographer. This author, Beauchamp, almost matters more than English Bob himself. Beauchamp has dedicated himself to setting English Bob’s exploits in print, preserving Western storytelling for an Eastern audience thirsty for lurid adventures. When Beauchamp discovers English Bob’s stories are fabricated, he drops his ally and attaches himself to another storyteller.

Beauchamp is himself a fabulist, who uncritically repeats stories of White men dispensing karmic justice, protecting innocent (White) women, and taming a land wrested from savage Indians. He describes a world where judicious applications of White male violence bring order to a putatively disorganized land. His character’s entire point, of course, is that he abandons his preferred story when Sheriff Daggett humiliates English Bob and offers an alternative story.

Much of what we believe we know about the American West comes from fabulists like Beauchamp. People who survived the West told their stories to an uncritical penny press, which devoured their frequently ridiculous memoirs. We remember Wyatt Earp, and forget his arguably more accomplished brothers, because Wyatt survived and had a press agent. Buffalo Bill’s Wild West Show sold an almost entirely fake account of western settlement.

These characters and their hyperbolic stories made good fodder for the nascent film industry. Early stars like Tom Mix, who pioneered the white-hat cowboy mythology, presented a world of moral absolutes, swift civilian justice, and libertarian freedom. Mix bequeathed the reins to similar morally unambiguous performers like the “singing cowboys,” Gene Autry and Roy Rogers, then to the cowboys my parents loved, Marshal Dillon and Wayne’s Rooster Cogburn.

We mustn’t forget, however, that these performances served a social role. Tom Mix’s stardom corresponded with the rising social tensions which preceded World War I. Rogers and Autry sang their sermons during the Great Depression, while John Wayne and James Arness flourished during the Cold War. John Wayne arguably kept trying to re-fight the early Cold War well into his seventies, limping along, mortally wounded by exposure to nuclear test fallout.

Owen Wister’s genre-defining Western, The Virginian, begins not with character or action, but with a preface lamenting the disappearance of cowboys. The cowboy, to Wister, represents an absent ethic in American life, a moral purity unadulterated by civilization’s decadence. Just as Homer believed true Greek greatness ended with the Mycenaeans, and Arthurian romance locates chivalry among knights of yore, Westerns imply American greatness happened “back then.”

The spaghetti Westerns which made Eastwood’s career, with their moral ambiguity and their casual brutality, arose as the Cold War dragged on interminably. The Italians who made these movies, including Eastwood’s mentor Sergio Leone, witnessed firsthand how flag-waving stories of bygone national glory looked pale against events actually occurring in Europe. They presented a counter-narrative of the cowboy West as brutal, amoral, and already dead.

By 1992, however, even that counter-narrative had become disappointing. Eastwood presents the differing stories of America’s West—the Kid’s romantic savagery, Beauchamp’s redemptive violence, Daggett’s tales of law and order—as equally disappointing. All characters, in this West, wind up equally lonely, aiming for the same cold clay. “The Wild West,” this movie acknowledges, never existed; it was a story we told ourselves. Like all stories, it has to end.

Friday, December 29, 2023

Confess Your Crimes in Sand and Blood

Eric Jager, The Last Duel: A True Story of Crime, Scandal, and Trial By Combat

One rainy afternoon in January 1386, a crime transpired in the undefended castle overlooking the sleepy French hamlet of Capomesnil. Exactly what happened, and why, would soon test the French legal system. Jean de Carrouges, a knight with a reputation for stroppy behavior and no aptitude for court intrigue, claimed his rival invaded his mother’s nearly abandoned château and savaged his wife. The rival, Jacques le Gris, a talented courtier and squire, denied everything.

UCLA medievalist Eric Jager stumbled upon the Carrouges case while researching another project. It struck his imagination because Carrouges’ accusations against le Gris escalated into violence: not the Red Wedding-ish violence that pulls audiences into mass media, but violence channeled through France’s sluggish late-medieval justice system. Carrouges, a minor aristocrat himself, believed his feudal liege ignored the crime for reasons of court politics, and appealed for a rare option: a “judicial duel.”

Jager reconstructs the events preceding this exceptional outcome—which would, though nobody knew this then, be the last trial by combat authorized by the French monarch. He also provides a guided tour through a distant nation that, if any ever has, deserves the name “foreign.” Though Jager name-checks places you could visit today, like Paris, Bordeaux, and England, the standards and traditions circumscribing everyday life differ wildly from ours.

Jean de Carrouges was an accomplished warrior, descended from accomplished warriors, amid the interminable slog of the Hundred Years’ War; he was knighted for distinguished combat services. But he proved a lousy courtier, ill-suited for house politics and prestation. Carrouges ascended quickly through the court of Count Pierre of Alençon, then fell equally quickly. His easily bruised honor required frequent satisfaction, and he burned bridges faster than he built them.

Jacques le Gris lacked Carrouges’ pedigree, but proved more adept at court politics. He and Carrouges began as allies, but as le Gris out-earned Count Pierre’s favor, Carrouges felt himself slighted. The two squires (before Carrouges’ knighthood) intermittently fought and reconciled. But by early 1386, Carrouges discovered his newly-minted knighthood meant nothing at court, and the courtiers found themselves irreconcilable. Le Gris swore revenge on Carrouges’ household, and targeted his wife.

Eric Jager, Ph.D.

Reading this book, it’s impossible to miss how Carrouges’ world differs from ours. The government received its power from inheritances, not merit, and lesser courtiers received advancement based on personal connections, not competence or hard work. (Okay, maybe not so different from ours.) Because King Charles VI supposedly received his crown directly from God, “justice” meant whatever dribbled from the king’s lips, which immediately became holy writ.

Carrouges’ world turns on two liabilities: war and distance. Kings fight other kings, not to achieve any advantage, but because it’s what they do. Lesser nobles gain whatever limited fraction of power they possess by furthering that goal. Meanwhile, before motor vehicles and mass transportation, distances were truly huge. Twenty miles is an arduous overland slog which, depending on weather, requires a commitment of days; even a limited war requires months.

Therefore, Carrouges’ decision to appeal his lawsuit to Paris is no small obligation. But rape, it seems, was no small accusation, either. Despite the recent trend in “grimdark” medieval fantasy pitching rape as banal in feudal society, Jager notes that it remained a capital offense, at least among the nobility. Violating a titled lady’s virtue jeopardized the surety of legitimate offspring, which threatened the entire system of agnatic primogeniture.

Feudal hierarchies, in Jager’s telling, appear remarkably brittle. The state remains stable only while lords and vassals accept their place in the hierarchy and serve the aristocratic state. Therefore, whatever happened in Capomesnil in 1386 threatened the entire social order, since someone transgressed their roles. This is the archetypal “he-said, she-said” case, as nobody but le Gris and Lady Carrouges (and le Gris’ man) saw what actually happened.

When logical arguments fail, two trained warriors resolve their differences with weapons. But again, in defiance of paperback fiction, this is hardly an outburst of premodern savagery. A judicial duel required ceremony and strictures that make today’s courtrooms seem loosey-goosey. When we discuss modern courtrooms as “level playing fields,” we copy the rules of the dueling ground, tightly controlled to ensure nobody but God granted either combatant an advantage.

History records who, exactly, won this duel (though Jager plays coy). What matters more, in Jager’s telling, is reconstructing the world in which this event occurred, a world where invested noblemen believed their battles bespoke God’s favor, and might literally made right. A world where justice is both necessary, and costly. A world violently different from, yet surprisingly similar to, our own.

Thursday, December 28, 2023

Life In These “United” States

The Texas State Capitol, in Austin

Texas is threatening to secede from the Union, because that worked so well the first time; and countless progressive Americans are laughing. Interesting how they didn’t laugh so loudly during the Trump Administration, when Left Coast progressives threatened to enact “Cal-exit,” or California seceding from the union. It’s almost like, whatever party controls the White House, states controlled by the other party want to leave the country altogether.

California is considered so staunchly Democratic, and Texas such a Republican bastion, that journalists regularly call both states’ Presidential outcomes before any voting precincts report in. Yet neither state is monolithic: Texas consistently splits by less than ten points, and millions of Californians vote Republican. If Texas seceded to mollify conservatives, millions of progressives would find themselves foreigners in their own nation. The reverse applies in California. Therefore, even if secession were possible, it would be wildly impractical.

I’ve written before that America’s state lines are dangerous and make little sense. Drawn entirely in the 18th and 19th Centuries, these divisions have become liabilities in the 21st Century. Growing populations, changing demographics, and advanced technology have packed dense numbers into absurdly small spaces, while massive acreages go unused. It’s become de rigueur to moan that tiny, sparsely populated Wyoming has the same Senate representation as massive California.

Except, I’ve recently realized there’s an additional wrinkle. Wyoming, the least populous state in the 2020 Census, is more populous than the second-most populous state in the 1790 Census, Pennsylvania. Wyoming, sometimes derided as tiny, might’ve seemed crowded and buzzing to the Constitutional Convention in 1787. The Founding Fathers, mostly farmers (or more accurately, plantation owners), couldn’t have imagined our dense urbanization.

We call our nation “The United States of America” because the Founders envisioned a loose affiliation of independent political units. Americans often say “states” the way other countries say “provinces,” but in poli-sci parlance, a “state” is a top-level, independent polity with a central government, and the ability to write and enforce laws. In casual conversation, Americans describe such polities as “nations” or “countries,” also words with different formal definitions.

The "Sower" statue atop the Nebraska
capitol building reflects the state's
agricultural heritage

The Founders invested principal power in the states, treating the federal government as an afterthought, there to enforce standardized trade and foreign policy. Even Thomas Jefferson, the third President, held the federal government in such low esteem that he didn’t include his Presidency on his epitaph, which he wrote himself. This level of local autonomy turned sour, however, resulting in the Civil War. Afterward, the federal government began coordinating law and justice nationwide.

This prompts the question: do states with fixed borders and lawmaking authority even serve any purpose today? Even after the Civil War, states continued serving some legal function, since government acted at the speed of paper. The early telegraph and overland railroad expedited some government functions, sure. But in our digital age, where information blasts across the country and into our homes instantaneously, do we still need states?

Municipal and county governments remain useful. Local law enforcement can identify individual malefactors (set aside, briefly, the question of whether we like the police), and local officials can make on-the-ground decisions about, say, road maintenance and urban development. But states, which merge multiple regions under umbrellas often drawn 150 years ago, have become battlegrounds for what forms of injustice we’ll willingly accept. That includes staunchly partisan states like mine.

Somebody might respond by stating that state governments coordinate regional and municipal governments. I answer: do they? Nebraska, where I live, is notorious for its chronically neglectful state government. The state capital, Lincoln, frequently ignores rural areas, or anything happening more than a two-hour drive away. The state government regularly disregards over half the state, focusing on the prestige-heavy Interstate 80 corridor in the eastern half.

If my state government disbanded tomorrow, it might take months before half a million Nebraskans cared, or even noticed. I’ve heard similar complaints, voiced informally, from residents of upstate New York, inland California, or the Tennessee mountains. States regularly abandon their poorest, least represented residents for the prestigious urban, industrialized regions. This abandonment often goes unreported, since media also ignores poor and rural people, but it definitely happens.

Disestablishing or reinventing state governments won’t magically fix ills, don’t misunderstand me. We’ll face massive conundrums, like how to apportion the Senate (or abandon it), and we’ll probably also have to revamp the Executive Branch. In the near term, abandoning state government will create as many problems as it solves. Yet we must reconsider, sooner rather than later, our 18th Century government structure in our 21st Century society.

Wednesday, December 27, 2023

And That’s No Moon Either

Charlie Hunnam (left), Michiel Huisman, and Sofia Boutella in Rebel Moon

The Netflix movie Rebel Moon Part One isn’t as terrible as online buzz might suggest. Let’s start with that controversial thesis. Please don’t mistake me, it isn’t timeless art: a cadre of reviewers, professional and amateur, have skewered the movie’s numerous weaknesses. Its blatant ripoff of Star Wars, for one, and its reliance on director Zack Snyder’s trademark fight choreography, intercut with abrupt breaks into silly slo-mo cartoonishness.

However, I’d like to avoid the obvious and manifold shortcomings, and spotlight one overlooked strength. Pre-release press coverage emphasized how Snyder initially pitched the screen treatment to Lucasfilm as a “more mature” take on Star Wars. He reworked his treatment as a standalone feature only after Lucasfilm’s parent company, Disney, passed. This made me cringe, because filmmakers frequently think “mature” is a synonym for “violent, hypersexual, and visually murky.”

American culture often treats children as twee and precious, incapable of handling life’s harder edges. Disney, of course, notoriously sanded all the sex, and most of the violence, off Grimms’ Fairy Tales in their feature-length animation. Children’s books, movies, and TV shows model chaste heterosexual romance, and only stylized violence, reducing war to ballet. European media does something similar, but not to the same degree.

This includes the original Star Wars. According to historian Garry Jenkins, Lucas modeled the original movie on 1930s Flash Gordon serials he watched on his parents’ black-and-white TV. His parents considered Flash Gordon acceptable viewing, because of its strong moral backbone, its clear division between heroes and villains, and no sex. (The episodes also broadcast out of sequence, which is why Lucas dubbed the first movie “Episode IV.”)

We’ve watched former child stars polish their adult bona fides by embracing sex, violence, and moral flimsiness. Countless performers, like Anne Hathaway or Lindsay Lohan, attempted to cleanly divide themselves from their childhood roles by appearing topless onscreen. Miley Cyrus’ live national meltdown continues to haunt her career even after she’s tried to atone. Achieving adulthood in mass-media culture means rejecting the preciousness of childhood.

Director Zack Snyder (promo photo)

Snyder attempts something similar in Rebel Moon. The movie’s protagonist, Kora (Sofia Boutella), is a battle-scarred veteran fleeing her past. She aspires to live in rural, agrarian simplicity, hiding from her former commanders, but she also rejects overtures of romance. She describes herself as too hurt to love; her words form a lament, but her tone is boastful. Her story dribbles out gradually, but basically, she enjoys being damaged.

Despite Kora’s best efforts, the war finds her. When the Imperium murders her village’s headman and leaves a garrison in the barn, Kora decides to run. But before completing her escape, she interrupts the hard-bitten local garrison attempting to sexually assault a young village maiden. (Rape, here mercifully averted, has become the go-to form of low-friction motivation for movie protagonists. It’s sloppy and low-hanging fruit, but audiences react strongly.)

Having tied her fortunes to the village, Kora accepts the responsibility for organizing the resistance. Here’s where the movie’s one redeeming quality emerges: Kora accepts help from villager Gunnar (Michiel Huisman). Gunnar teases out the backstory Kora has concealed during her self-imposed exile, and in doing so, recognizes the injured orphan girl beneath her warrior-woman façade. The script treads lightly in admitting this, but Gunnar falls in love with Kora.

Gunnar is everything Kora wants to avoid being: generous, nurturing, and committed to his people and community. The more he uncovers Kora’s deep internal scars, the more he wants to relieve them. He’s impressed by her fighting skills, but they don’t define her. Instead, he sees her with levels of nuance and complexity which she has tried to reject, and in stray quiet moments, tries to steer her toward healing.

There we find this movie’s moral heart: one character accepts the most cynical possible interpretation of events, and even revels in them, while the other wants to nurture the whole heart, scars and all. Only fleetingly does Snyder admit this openly, but it lingers tacitly beneath the entire narrative. Though the surface-level story addresses the villagers’ resistance to Empire, the deeper story describes the tension between nurturance and violence.

Please understand, this movie isn’t good. Snyder borrows liberally from fifty years of blockbusters and B-movies to create a smorgasbord of reheated tropes. Even that wouldn’t be so bad, but the movie doesn’t appear to be having much fun. If we pause these glaring objections, however, and look at the less-obvious moral themes, this movie has something going on. Hopefully Part Two will give it flesh.

Friday, December 22, 2023

Manufacturing Armageddon

This essay is a follow-up to two previous essays: The American Armageddon Factory and Another Product of the Armageddon Factory

Historian Betsy Hartmann’s book The America Syndrome identifies shared belief in imminent catastrophe as the underlying American public morality. From Puritan Christianity in the 17th Century, to Utopian social engineering in the 19th Century, to Global Warming in the 21st Century, Americans have always believed the world will end tomorrow. Corollary to this belief, Americans—or anyway a subset of us—have always believed America will survive Armageddon.

Most important for Hartmann, this impending apocalypse always has a moral implication. This is obvious in Puritan Christianity, which believes a Triune God is preparing to distribute justice, in the form of payback to unbelievers. But even in the less aggressively religious 20th and 21st Centuries, this moralistic judgement never abates. The defining apocalypses of those eras (nuclear war, Malthusian overpopulation, and global warming) always reek somehow of karmic consequences.

Viewed thus, the movie Leave the World Behind both does, and doesn’t, continue the “America Syndrome.” It similarly presents a secular present stretched to its limits, and a population that clothes its awareness of imminent collapse in a crazy quilt of misanthropy, denialism, and on-demand entertainment. This world requires only light pressure to snap. As Mahershala Ali explains in his culminating monologue, America has enemies willing to apply that pressure.

However, the movie lacks the moral component Hartmann identifies in prior apocalyptic predictions. Some characters attempt to retroactively construct an explanation which makes the events a payback for Americanism, but this is ramshackle and unconvincing. Ultimately, as Julia Roberts and Myha’la watch New York burn from across Long Island Sound, we’re left to conclude that sometimes, things happen because they happen; justifications are flimsy, selfish, and meaningless.

Thus far, I’ve attempted to avoid spoiling the movie’s irresolute resolution, like a faithful reviewer. But the movie’s closing three minutes color how we perceive everything that’s happened before. The story’s youngest character, 13-year-old Rosie, has abandoned the main house, where adults or near-adults squabble for control and explanation. The grown-ups want meaning; Rosie has spoken Delphically about wanting something else, which she now pursues.

We find Rosie in a neighboring mansion, gorging herself on starchy processed snack foods and fizzy water. Hearing her mother’s panicked cries outside, Rosie instead flees deeper into the house, where she discovers a fully equipped luxury fallout shelter, including a massive home entertainment system. She activates the TV and scours the DVD racks to find the Holy Grail she’s pursued throughout the movie: the final episode of the sitcom Friends.

Rosie, the movie’s youngest character and therefore the one most clearly possessed of a future, instead flees into a low-friction sitcom that ended nearly twenty years ago. In case the symbolism seems too subtle for streaming audiences, Myha’la’s character Ruth previously derided Friends as “nostalgia for a time that never really existed.” Facing the world-altering consequences of… well, something, Rosie flees from meaning and buries herself in mass-media anesthesia.

This movie’s moral backbone, to the extent it possesses one, certainly deserves criticism. It suggests that, deprived of our technology and entertainment, Americans will descend into base impulses, racism, and paranoia. Rather than moral payback, as Hartmann postulates, this movie suggests Armageddon will expose our near-complete moral vacuity. There’s no karmic retribution, this movie implies, when Americans don’t hold anything holy anymore anyway.

Yet as bad as this moral lesson is, the final takeaway, delivered by Rosie, feels worse. Given a blank slate to look forward and reinvent society on firmer moral footing, Rosie instead seeks resolution in a sitcom’s concise narrative arc. Not just any sitcom, either, but Friends. Sure, the show ended in 2004, during America’s wars in Iraq and Afghanistan; but it began in 1994, in a decade colored by America’s brightly-hued Cold War hangover.

Clear back in 2009, Mark Fisher wrote that “it’s easier to imagine the end of the world than the end of capitalism.” The pre-catastrophe world this movie depicts resembles ours: fashionably pessimistic but convinced we need one long family holiday to restore our morality. Yet we, like this movie, face an historical inflection point: some socioeconomic change must happen soon, or everything will break without a safety net.

But instead of moralistic challenge and opportunity, the defining traits of past apocalypses, this movie shrugs and retreats into nihilism. With no vision of the New Jerusalem, or anything comparable, the creative team simply can’t imagine another future. Therefore, the narrative threads they introduce don’t deserve resolution. Everything is ultimately meaningless, and ordinary humans are too vacuous to deserve rescue.

Fuck it, let’s go watch TV.

Tuesday, December 19, 2023

Another Product of the Armageddon Factory

From left: Mahershala Ali, Myha’la, Julia Roberts, and Ethan Hawke

Netflix’s apocalyptic thriller Leave the World Behind is subdivided into six roughly equal parts by interstitial title cards, like TV episode titles. While many creators behind streaming “television” have tried to position their series as multi-episode movies, this film feels like a compressed treatment for a later TV serial. The comparison becomes especially pointed at the indecisive cliffhanger ending, which resolves nothing, as though punting to the next season.

Overworked preppies Amanda and Clay Sandford (Julia Roberts and Ethan Hawke), burned out in Manhattan, spontaneously book a Long Island holiday cottage. They’re somewhat distressed to find unreliable internet and cell service, but hey, it’s an adventure. Except, on the first night, G.H. Scott and his daughter Ruth (Mahershala Ali and Myha’la) arrive, claiming to own the cottage. New York’s under blackout, they explain; can they borrow their house back?

This movie has few speaking characters. Besides the Sandfords and Scotts, we see the Sandfords’ teenagers, Archie and Rosie (Charlie Evans and Farrah Mackenzie). Archie’s defining characteristic is he checks out girls; Rosie obsesses over 1990s pop culture, especially the sitcom Friends. Very late, Kevin Bacon appears as a survivalist neighbor; besides a brief appearance by a Spanish-speaking hitchhiker, Bacon is the only evidence that working-class Long Islanders exist.

Writer-director Sam Esmail bases this movie on Rumaan Alam’s novel. Therefore I wonder who, exactly, dropped the ball so badly: Esmail or Alam. This movie reads like a masterclass in how to alienate your audience. Though different plotlines frustrate in different ways, we could summarize the magnitude of disappointment thus: the creative team introduces interesting questions, then ignores them. They expect us, the audience, to do the heavy lifting.

First, we never know exactly what’s happening. Apart from isolated flashes of information dropped without context, all we know is that we know nothing. The Sandfords and Scotts are trapped inside the house, reliant on outdated information and a noncommunicative government. Every attempt to leave the house ends in one catastrophe or another. It’s impossible to read this separate from the COVID-19 lockdowns that ended shortly before principal photography began.

Both the Sandfords and the Scotts are relatively well-off. Throughout the movie, their cottage never loses electricity or running water. Their supplies of coffee and alcohol remain limitless. Yet living in proximity brings out prominent tensions, fueled substantially because the Sandfords are White, and the somewhat richer Scotts are Black. Amanda even engages in frustrated, self-pitying monologues that reveal her poorly sublimated racism.

Oh, and the monologues! This movie consists primarily of conversations, but they aren’t really conversations. Every character has a thesis statement, and apart from the occasional bread-and-butter dialogue to move the story along, the characters mainly discourse at one another. This is a Very Important Message Movie, and the characters remind us of that constantly. They don’t even interrupt the action to discourse; they interrupt discourse with occasional action.

As we approach the movie’s culmination, both Amanda and G.H. offer up monologues that, in another movie, might’ve come before the climactic confrontation. Except there’s no climactic confrontation. We reach the moment where experienced genre authors would’ve brought the families’ braided narratives together to unlock the secrets, and… the movie stops. Rather than resolving the manifold threads it’s introduced, the movie halts. Sad trombone noises.

I already anticipate counterarguments. Life is frequently disappointing, and lacking in resolution. In the technocratic apocalypse depicted herein, most people would never receive meaningful explanations. But this isn’t real life. Novelists and screenwriters make decisions—or, in this case, don’t make decisions. I’m reminded of “deep literature” I read in college, about which I wrote chin-pulling considerations of moral themes, when I really wanted to ask: “Why did the author stop mid-story?”

One suspects that Esmail and/or Alam raised interesting questions, then thought their responsibility done. This often happens in self-consciously “literary” writing, which often treats resolute answers as facile. The author ends with a discordant note, sometimes mid-action (as here), and expects the audience to contemplate the unresolved questions. I suspect the creative team wants us to ask ourselves: “What’s left when our machines, entertainments, and busywork disappear?”

Instead, I ask: “What screenwriting workshop did these guys drop out of?” Esmail made his reputation on similar morally ambiguous TV series, like Mr. Robot and Homecoming. TV audiences accept unresolved themes, expecting they’ll resume next episode or next season. In feature films, it simply feels like the creative team expects the audience to finish what the writers started. That’s a disappointing, rage-inducing conclusion to a good start abandoned.

Saturday, December 16, 2023

Who, As a People, Do We Want to Be?

A Wisconsin school board removed 444 books from its middle and high school libraries this week, pending investigation of claims from an irate parent. Don’t let that detail go unremarked: claims from an irate parent, singular. That’s the point America’s public discourse has reached, where outspoken individuals with bad attitudes arbitrate books for entire communities of learners. Remember, libraries are often the only book access disadvantaged students even have.

Earlier this year, a story got traction on social media: the recent spate of mass book bannings is orchestrated by a nucleus of serial accusation-mongers. The story broke in the Washington Post (behind a paywall, sadly), before getting reproduced, with germane editorial insertions, in mainly left-leaning aggregator sites like RawStory and Vox. As few as eleven serial bellyachers in a representative sample filed sixty percent of actionable complaints with school boards.

Blue Facebook and Blue Xitter highlighted which parents—or frequently, “parents”—filed these complaints, and which books they rejected. The books overwhelmingly foregrounded queer characters; most of the remainder dealt with race and bigotry in America. They’re trying to squelch free ideas, the refrain goes; and, in close harmony, history never looks well upon book banners. Comparisons inevitably arise to the Nazi regime’s wanton destruction of the Institut für Sexualwissenschaft.

Fair dues, perhaps. But the leftists making this argument are hardly free-speech absolutists themselves. They wouldn’t tolerate these same libraries stocking, say, The Turner Diaries or 120 Days of Sodom. These examples are extreme to the point of satire, obviously, but the point stands: everybody will agree that some books (and other artifacts, like Confederate statues) don’t belong in public spaces. We only dispute which books and artifacts must go.

I’ve recently had leftist colleagues derail entire discussions over somebody’s use of outdated terms. I’m not talking about when somebody centers their discussion on offensive content, or uses actual slurs like the N-word. I mean specifically examples like the fact that “handicapped” has fallen into disfavor, and advocates prefer “disabled”—a word I was taught to abjure in the 1980s, because it spotlighted what somebody lacked over what they were.

Already I anticipate conservative friends chortling over “political correctness gone amok.” But the right is hardly innocent of wanting to amend language to expunge offense. Recent attempts to turn “Boomer” and “cisgender” into cusswords have turned unintentionally hilarious. The attempt to forbid certain words or turn language into a minefield of hurt feelings only differs on one point: exactly whose feelings we believe deserve protection from offense.

In the past, I’ve described myself as a free-speech absolutist. But like most absolutists, I’m far from absolute. Certain language doesn’t deserve protection. The Supreme Court has declared that, First Amendment notwithstanding, obscenity and incitement to violence aren’t protected speech. We can agree these forms of speech don’t deserve protection because they cause material, calculable harm. And in causing harm, they cross the line from “speech” into “action.”

Beyond the harm standard, though, what other yardsticks permit us to declare certain ideas off-limits? Some will argue moral disgust. These hyper-activist parents filing hundreds of book-ban requests want certain ideas removed from protected discourse because the content gives them moral heebie-jeebies. This, again, questions whose tender sensibilities need protection from the mean, terrible world. Why these White, conservative parents and not, say, me?

The very existence of accused human trafficker Andrew Tate and confessed queer-baiter Jordan Peterson offends my morality. To say nothing of Alex Jones, who had his Xitter account restored this week after a blink-and-you-missed-it poll. To me, and notably also to Elon Musk immediately after he purchased and eviscerated Xitter, silencing Jones in whatever hog-wallow he sleeps in is the obvious moral choice. That homunculus doesn’t deserve a platform.

Please don’t mistake me: the left has plenty of squishy, reactive, and morally vacuous history. We’ve done a good job of opposing everything Republicans and their allies like, even when it means abjuring our principles: Democrats demanded the Trump Administration “follow the science” regarding COVID-19, for instance, then turned a deaf ear when Trump-aligned bureaucrats presented robust (but not ironclad) evidence that the virus escaped a lab.

Admittedly, that example is slightly tangential. But it makes my point: both sides make decisions based on moral disgust, in a society that no longer even pretends to have shared morals. That means both sides are currently arguing about what public morals our society should have. Sadly, both sides have nothing to fall back on, besides their own morals. The arguments become circular, and our society becomes dizzy.

Wednesday, December 13, 2023

Dracula and the Problem With Modern Morality

Claes Bang as Dracula (left) and John Heffernan as
Jonathan Harker, in Moffat and Gatiss’ Dracula

Audiences who have read Bram Stoker’s 1897 novel Dracula will notice something about Steven Moffat and Mark Gatiss’ 2020 adaptation: the Count is on screen a lot. Count Dracula is absent from over three-quarters of Stoker’s original novel, a looming presence whose terror grows more ominous because he could be literally anywhere. By contrast, Moffat and Gatiss foreground the character, who remains present and amorally aggressive even when characters (and viewers) need their rest.

Moffat and Gatiss are the creative team behind Sherlock. Yes, *that* Sherlock, the one that kickstarted Benedict Cumberbatch’s career and revitalized TV mysteries. The show was a rollicking success until it overstayed its welcome by one season, and the meme-driven zeitgeist turned against it. Steven Moffat previously paid his dues in contemporizing Victorian literature with Jekyll, which postulated a high-tech corporation wanting to harvest Mr. Hyde for profit. These guys know their modernized Victoriana.

All three original properties withheld information from readers, information which today’s audiences already have. Pretending that, for instance, Henry Jekyll and Edward Hyde aren’t manifestations of the same person would be naïve and coy nowadays. While some Massively Online Critics still complain that Sherlock suffered because the viewpoint characters withhold information from the audience, this overlooks that all three original properties did this regularly. Victorian audiences apparently loved last-minute reveals: “The killer was here the whole time squeeeee!!!”

Dracula isn’t mysterious anymore, as he was in 1897—though admittedly, more audiences probably know Tod Browning’s version than Stoker’s. Therefore withholding Dracula from audience view makes little sense. Instead, Moffat and Gatiss reveal him early, showcasing his rapacity, his sexual appetite, and his lack of common morals. Instead of making the well-known vampire appear falsely mysterious, our creative team must instead convince us why everything we believe about the famous story is wrong.

Therein lies the problem. As Susannah Clements writes, Dracula represents a specific Victorian Christian morality. Van Helsing presents himself as a “man of science,” a much less precise term than we’d accept nowadays, whose scientific acumen hoovers up any stray evidence it encounters; yet in fighting Dracula, Van Helsing reverts to the language and doctrines of Christianity. The monster fears churches, crosses, and vicars. Fundamentally, he affirms that modernity cannot survive without ancient religious truths.

Dolly Wells as Agatha Van Helsing

Except, the longer I live with Clements’ thesis, the less airtight it becomes, because that Christian morality was window dressing. Victorian England required a strong state to maintain the appearance of public virtue. William Blake’s “dark Satanic mills” had become Britain’s background noise. Sherlock Holmes shot intravenous cocaine to control his moods. At this point, it’d be disingenuous to deny that Bram Stoker was probably a closeted homosexual; his friend Oscar Wilde was jailed in 1895.

Abraham Van Helsing (rendered by Moffat and Gatiss as Agatha Van Helsing, a Carmelite nun) didn’t restore foundering Victorian Christianity; he enforced a specific kind of moral vestment on a largely secularized, industrialized nation. He encouraged the cadre of men driving the story to punish Lucy Westenra, who had, in coded language, been sexually liberated enough to choose her own lovers. As punishment, the men took turns, ahem, driving their wooden stake into her.

Agatha Van Helsing, opposite her literary ancestor, has no patience for public morality. Though a nun, she’s substantially secularized; she describes her holy orders as “a loveless marriage.” Her Dracula reacts with the same vehemence as Stoker’s to crosses and other religious appurtenances, but Agatha rejects the religious explanation. She accuses Dracula of retroactively constructing a moral explanation for his abilities and weaknesses—then she does the same. For her, morality is as morality does.

The lack of underlying morality—even one the characters only observe for ceremonial purposes—is the defining difference between this Dracula and Stoker’s. Unfortunately, without such an underlying morality, the story has nothing to be about. Van Helsing speculates aimlessly about why medieval European morality still has power over Dracula, and reaches a resolution that satisfies her only in the closing minutes. For us peons watching, however, the explanation raises more questions than it answers.

Nobody could rewrite Dracula for modern audiences and retain Victorian morality. In Britain, Christianity has retreated to the second most common religious identity, after “none”; and even in America, which (unlike Europe) emerged from two world wars more religious rather than less, Christianity has fallen below two-thirds of the population. Our vampires today are Lestat and Edward Cullen, not Dracula. Yet as with Van Helsing’s religion, the premodern story keeps intruding on our modern world.

Friday, December 8, 2023

The Courage to Change a Broken World, Part 2

Hispaniola, in a map from the Encyclopedia Britannica

I first recall Haiti with clarity following the 1991 coup d’etat against Jean-Bertrand Aristide, the country’s first democratically elected president. I have vague, muddled prior recollections of Haiti in the news, particularly surrounding “Baby Doc” Duvalier’s expulsion; but I was too young to understand events in context. I was only eleven when Baby Doc fell, and remember wondering how bad somebody with such a cutesy-poo nickname could really be.

Finally old enough to understand the country’s historical context in 1991, I read Haitian history and culture avidly—to the extent I could. My local public library had exactly one book on Haiti, Wade Davis’ The Serpent and the Rainbow. (I’d watch Wes Craven’s feature film adaptation years later; the movie reduces Davis’ immersive anthropology to lurid exoticism.) I later discovered Paul Farmer when I started college. Sources were scarce.

With no scope to understand its internal forces, I struggled to encompass much about Haiti. Despite being the Western Hemisphere’s second-oldest nation, it hadn’t enjoyed the United States’ economic prosperity or internal stability. Though the nation was functionally independent by 1804, it couldn’t organize a nationwide election until 1990. Both countries had fertile soil, luxurious coastlines, abundant natural resources, and robust populations. But only one got rich.

Wade Davis

Back then, I was more conservative, and believe me, I could’ve easily explained Haitian poverty dismissively. America’s founding (White) leaders were moneyed, aristocratic, and essentially bastions of old-world privilege. Haiti’s founders, former slaves all, were substantially illiterate and uniformly Black. Though it would’ve been impolitic to say so aloud in 1991, one underlying thread of American political discourse happily blamed the Caribbean Blacks for their own plight.

I couldn’t stomach such racism, then or now, despite my Republican leanings. Accepting an entire nation’s inherent moral deficiency felt slovenly unless I’d first exhausted other likely explanations. I quickly learned that French colonial masters made themselves obscenely wealthy through slave plantations and coerced labor. Revolutionary leaders chased France out in an overthrow that descended into a pogrom… and then, having no other model, cracked down on the citizenry.

America’s revolutionary leaders claimed to commence a new nation, a new democratic experiment, then simply repeated English social structure. Aristocracy, peonage, and slavery were the hallmarks of Early America, and in some places, still are. Haitians likewise repeated the colonial administration, because it’s what they knew. This meant bloody reprisals for minor acts of willfulness and independence: public maiming, a legacy of French slavery, is still a Haitian political tool.

This became an important seed in my political awakening. Poverty, I realized, appears causeless and simply natural, because the social forces which cause it are so baked into our environment that we can’t see them anymore. The bloody reprisals enacted by the Tonton Macoute or by Raoul Cédras’ army always first hit those Haitians who showed leadership, entrepreneurship, or self-reliance. Haiti’s ruling elite paid handsomely to keep the poor impoverished.

Dr. Paul Farmer

It took years before I applied this heuristic to American politics. Eventually I realized that some of America’s most lush, resource-rich states were the most economically impoverished, for exactly this reason. Chronically poor states, such as those in the Old Confederacy, have a long history of squandering their fertile lands, clear waters, and strong people, to ensure that Blacks, Native Americans, immigrants, and other “outsiders” don’t get any advantage.

We don’t call it that anymore, certainly. Since the mid-1960s, politicians can’t simply admit they’re engaged in Bull Connor-style naked racism designed to keep the designated underclass from rising. Yet that’s what happens with “law-n-order” crackdowns, or complaining about “handouts,” or making sure nobody receives a reward they didn’t “earn.” Entire American regions will squander abundant natural wealth to ensure the poor remain poor.

Read that way, Haiti’s struggles become instantly comprehensible. Sure, America had certain early advantages, particularly an existing moneyed class, and diplomatic recognition from the European powers, that Haiti lacked. (France didn’t recognize Haitian independence until 1825, the U.S. until 1862.) But both countries shared one important characteristic: the wealth of the nation wasn’t distributed equally, and autocrats paid heavily to ensure that inequality would survive.

The poorest citizens always paid the highest price, but their allies came a close second. Paul Farmer avoided political activities, simply providing peasants with medical care; yet treating the needy was so abhorrent that the Cédras junta expelled him for three years. The ruling elites would rather let their own citizens die than see them overcome their poverty. Viewed that way, I seriously doubt America can claim one penny’s worth of moral superiority over Haiti.

Wednesday, December 6, 2023

The Courage to Change a Broken World

1001 Books To Read Before Your Kindle Battery Dies, Part 115
Tracy Kidder, Mountains Beyond Mountains: The Quest of Dr. Paul Farmer, a Man Who Would Cure the World

Paul Farmer, Harvard-trained MD, had Haitian friends in his youth, so traveling to Haiti seemed the natural choice. When he arrived, he discovered a kind and magnitude of poverty that he couldn’t believe still existed within the United States’ sphere of influence. Moved by the plight of the suffering, he made Haiti his life’s work, teaching and treating patients part of the year at Harvard, so he could dedicate most of the year to Haiti.

Journalist Tracy Kidder discovered Farmer’s medical mission while reporting on the American invasion of Haiti in 1994, an invasion undertaken for supposedly benevolent purposes, to restore Haiti’s elected government. Kidder encountered Farmer because Farmer didn’t hesitate to name the discrepancies between the American mission, and what Americans actually did. Farmer’s confrontational style forced American diplomats to examine their choices. It also forced Kidder to question his own motivations as a journalist.

The Paul Farmer whom Kidder describes grew up relatively poor and rootless in the American Southeast. Trailer park denizens were “his people.” But he was also a hard worker, a speedy student, and an amiable, gregarious personality. He made friends and connections with an ease that might make others jealous. He maintained friendships with wealthy patrons and university peers, but also learned to speak Haitian Kreyol fluently, and won trust among the country’s chronically exploited peasantry.

This isn’t so much a biography, as a Boswell-like immersion in the subject’s life. Kidder follows Farmer to his free clinic in Haiti’s Central Plateau, among the poorest places in the Western Hemisphere. Farmer and Kidder walk along unpaved switchback roads through steep valleys, delivering medications to agrarian peasants who consider Farmer a literal wizard. Kidder, more than a decade older than Farmer, often struggles to maintain Farmer’s breakneck pace and athleticism.

Although Haiti was the Western Hemisphere’s second nation to overthrow European colonialism, it never achieved the prosperity or stability of its older cousin, the United States. American slaveholders couldn’t abide a nearby neighbor populated by rebellious slaves, and the United States never recognized Haiti until its own Civil War was underway. Throughout the Twentieth Century, Haiti served as the proxy battlefield between America and France for hegemony over the nominally democratic world, a battle that propped up anti-democratic strongmen like the Duvaliers.

Dr. Paul Farmer

Farmer, moved by the peasants’ plight and the social mores of his Roman Catholic upbringing, saw bringing medicine to Haiti as his personal mission. But by “personal,” he didn’t mean “individual.” Kidder herein profiles several prominent allies who participated in his mission, including Ophelia Dahl (daughter of Roald) and Jim Kim. Farmer’s organization, Partners in Health, managed to parlay several bouts of good luck into a longstanding project. These elements included wealthy benefactors, and Farmer’s own medical celebrity.

Two widespread diseases in Haiti, which dominated Farmer’s early career, were drug-resistant tuberculosis and AIDS. Though neither disease originated in Haiti, mean-spirited Northern PR spun those diseases together with Haitian identity in the world’s imagination. This meant that Partners in Health, backed by a comfortable but not large endowment, became a global leader in TB and AIDS treatment. Farmer’s mission suddenly went from regional and personal, to global. Farmer got dragged along with it.

Kidder was present to watch much of this sudden growth. He set out to write about Farmer’s work in Haiti, but during the writing, he accompanies Farmer on trips to Peru, France, Russia, and elsewhere. Kidder chronicles Farmer’s transition from a simple Haitian country doctor—something Kidder quotes Farmer saying he only wanted to be—to a transnational medical diplomat, visiting medical conferences in Europe and North America, and treating TB patients in Latin American slums and Russian prisons.

Throughout, Farmer is driven by his personal Christianity. Though Farmer eschews doctrine, and shows impatience with formal theology, he heeds the Gospel’s message of feeding and comforting “the least of these.” Kidder repeatedly shows Farmer citing a “preferential option for the poor,” a term from liberation theologian Gustavo Gutiérrez. Among Farmer’s allies is a former Salesian priest, Jean-Bertrand Aristide. Farmer embodies the self-effacing service described in the Gospels, and too often ignored by many First-World Christians.

This book shipped in 2003, as Farmer’s celebrity reached new heights, and describes a man with benevolent goals still before him. Nearly two decades later, in 2022, Farmer died unexpectedly, on an outreach visit to Rwanda, his desire to retire to his inland clinic unfulfilled. This biography testifies to the good which First-World citizens can achieve, if we channel our privilege toward those in need. When we use our advantages to serve others, the world changes around us.

Monday, December 4, 2023

Jane Somewheyre

Sharon Lynn Fisher, Salt & Broom

A grim and gruesome spectre haunts the corridors of Northern England’s isolated Thornfield Hall. Edward Rochester, lord of the manor, doesn’t believe in ghosts, but his people do. So he requests the Lowood School, famous for training witches and other spellcasters, send someone to exorcise his home. With some reluctance, the school sends Jane Aire [sic], who is an expert herbologist but has little experience with the outside world.

Veteran classic literature readers will recognize the broad outline of events described. Sharon Lynn Fisher has published several books combining science fiction or fantasy with romance; for this volume, she’s chosen to put a fantastic spin on one of the foundational texts from which romance writers frequently draw. Charlotte Brontë’s Jane Eyre has captivated audiences for over 175 years with its combination of weight and whimsy, and its deep dive into the human psyche.

Fisher’s adaptation omits the portions of Jane’s “autobiography” centered on the Reed and Rivers families, and reduces Lowood School to background. She cares more about Jane’s interactions with Rochester, with the assorted denizens of Thornfield Hall, and with the first Mrs. Rochester—whose story here is aggressively altered. The emphasis lies heaviest on the romance aspects, as the fantastic components are very loosely drawn.

Thornfield Hall is the epitome of Gothic intricacy: miles of sprawling corridors and countless disused rooms. Jane finds a household staff plagued by phantoms they can’t easily explain, but which they also haven’t really seen. Unlike in Brontë’s original novel, everyone here is completely forthcoming about the first Mrs. Rochester, and her tragic end. They’re more circumspect about what came before her demise, or why they can’t release her memory.

Rochester himself resembles the original, a private man, cautious in displaying his feelings. His staff of hundreds depend on his wealth, and high-class bachelorettes pursue him, but he has no real friends. Jane struggles to understand: is he still mourning his wife? Did he ever truly love her? Is he even capable of love? Because the longer Jane interacts with Rochester, the more she experiences feelings she’s never known before.

Sharon Lynn Fisher

Before continuing, let me acknowledge: I’m not this book’s target audience. Fisher writes for audiences who consume classic semi-romance literature, like the Brontë sisters or Jane Austen, as casually as paperback novels. Her low-friction storytelling elides the coded language common in 19th Century literature, and the almost Monty Python-esque humor beneath the sometimes starchy surface. This book feels like Charlotte Brontë retold as beach reading.

Jane practices a lite-beer form of Wicca, one broadly compatible with staid post-Georgian British Christianity (the story includes references to religion, but only in passing). Her witchcraft consists mostly of making talismans, in the broadest sense, and speaking rhymes. Though the magical system matters greatly to Jane, and drives parts of the story, Fisher doesn’t much explicate it. One suspects she cares little for mechanical details.

The romance feels similarly hasty. Jane says she’s approaching thirty, and has spent nearly her entire life at Lowood School. She’s had few interactions with men who aren’t either stern patrician figures or earthy tradesmen. Her sojourn at Thornfield, an unspecified span of months in Brontë’s telling, is herein reduced to days. Yet she feels so passionately for Rochester that she disobeys instructions, exceeds her training, and trusts Rochester completely.

Fisher implies a complex, heady, multisensory world behind her story. Jane, our narrator, expounds on the complexity of Thornfield’s grounds, its multiple gardens for food and herbology, its charming Gothic ruins. She implies the existence of bacchanalian Bonfire Night celebrations, and a world of passions kept secret under decorous Regency-era public morality. Yet all this never quite goes anywhere. Jane introduces evocative themes, but carries few of them forward.

Please don’t misunderstand: this isn’t a bad book. Fisher retells Brontë’s story without the stiff outdated language, the lengthy digressions, or the sometimes moralistic tone. Her addition of witchcraft makes explicit several themes beneath the surface of Brontë’s original. And not everyone shares my belief that characters should face difficulties in achieving their intended ends. Again, this book isn’t bad; it just wasn’t written for me.

Perhaps I set my expectations too high. In combining classic literature with fantasy, I expected Fisher to create something closer to the Lord of the Rings, something dense with adventure and the complexity of human experience. Fisher instead offers something classic literature fans can snuggle under, comforted by the familiarity, and read without having to parse the historical context. It isn’t for me, but some people like that.

Monday, November 27, 2023

King Charles and What It Means to Own Anything

Charles III and his heirs
(official portrait)

Late last week, news broke that King Charles III seizes unclaimed property in common citizens’ estates and steers that property to his own portfolio. This doesn’t happen to every British citizen, only those resident in the historical Duchy of Lancaster, which remains part of the British monarch’s purview. Nor is the process inevitable, as one can avoid it by leaving a will or having a clear chain of inheritance.

Nevertheless, the story shocked people, who thought the idea of royal privilege over citizen property died with the passing of medieval feudalism. For me, this raises questions about the very concept of ownership, and what states and governments are for. Modern capitalist economics requires citizens to own their property clearly, particularly “capital” property with floating value, like land. But without laws, and therefore without government, ownership is completely unenforceable.

It’s easy to claim ownership of property we can carry. The currency in my wallet is clearly mine because it’s in my wallet. But we need laws enforcing when that ownership ends—when I trade that currency for goods—and when that ownership remains uninterrupted—when a mugger steals my wallet. Just because I don’t currently possess property on my person doesn’t mean I’ve relinquished ownership.

Therefore we feel safe leaving valuables, like jewelry, at home. We have portable ownership, legally termed “title,” over property we can’t carry. But that principle of title only makes sense when laws enforce it. Despite the libertarian myth that everyone would flourish if we simply rescinded most laws and regulations, we acutely depend on laws to own anything. Especially for noncorporeal property, like stock portfolios or NFTs, law makes ownership possible.

When governments rely upon a personal monarch, whose inheritance descends from Alfred the Great swinging his pig-iron sword in the late 800s, these questions become more pointed. Yet these questions aren’t uniquely British. Even in America, where political authority doesn’t descend from any individual, we still pay property taxes and licensing fees to preserve our claim on private property. We still pay the state to own property we already bought.

The Duchy of Lancaster story reveals one dark implication of title ownership. If law makes ownership possible, then ultimately the law owns things, and simply licenses ownership to citizens. This reflects something I’ve mentioned before: you ultimately don’t own real estate. The “real” in real estate doesn’t mean literal or existing; by one popular (if disputed) etymology, it means royal. Real estate is the king’s permission to control or occupy property. Such permission can be rescinded.

The last photo of Elizabeth, taken
at Balmoral, days before her passing

This should go without saying, but this dual strand of ownership creates moral contradictions. Capitalist economics relies upon concepts of ownership as absolute. When somebody like Elon Musk owns corporations, stock portfolios, and land, that ownership must be inviolable. When Musk leverages his private investments as capital against, say, buying up massive social media networks, that capital must be his, in order to own the risk.

Yet clearly the property isn’t entirely his, if he and others rely upon state authority to enforce his property rights. Libertarian philosophers like Robert Nozick or P.J. O’Rourke note that state authority is always on some level coercive, and contains the implicit threat of billy clubs and riot police. All state power, including the power to enforce ownership claims on anything citizens can’t carry (like land), is tacitly violent.

King Charles III finds himself in an awkward position regarding property ownership. As monarch of the United Kingdom, all state power derives from his person, or anyway his rank. It bears emphasis that, in its origins, “king” isn’t a political rank, it’s a military rank. Early monarchs used violence, or threats of violence, to protect citizens against invaders and bandits; in return, they claimed sweeping rights within the kingdom.

But Charles is also a private citizen. If private property exists, then Charles Mountbatten-Windsor (legally distinct from Charles III) has as much right to utilize and profit from it as anybody. The inherent conflict of interest seems so obvious, it shouldn’t require comment. Yet evidently comment is required, since even in non-monarchical America, plutocrats like Donald Trump and Vivek Ramaswamy see no contradiction between private wealth and public authority.

We’re witnessing an unusually naked display of the failure of human authority. Property ownership requires law, but law frequently kneels to property and wealth. This incestuous cycle ensures that, no matter how superficially democratic our government is, power ultimately excludes ordinary people. The state may protect houses and jewelry for us pedestrians, but ultimately, state power and wealth conspire to protect themselves.

Friday, November 24, 2023

Lights, Camera, Inaction

1001 Movies To Watch Before Your Netflix Subscription Dies, Part 50
Andrew Niccol (writer-director), S1m0ne

Veteran movie director Victor Taransky has grown disillusioned with Hollywood: with demanding actors, interfering producers, and insatiable audiences. He got into movies to create art, but he’s become beholden to the money. Then one day, a computer programmer approaches Taransky with a priceless invention: a completely digital actress. Taransky thinks he’s found his artistic salvation. But controlling the perfect actress simply creates new problems he never anticipated.

This movie garnered lukewarm reviews and barely broke even upon release in 2002; it lacked studio support, and never found an audience until its home media release. Yet it’s received a new lease on life with recent developments, real and proposed, in computer learning heuristics. Promises which this movie made in 2002, Hollywood wants to fulfill today. It’s almost like the studios didn’t understand this movie’s parable of artistic control.

Simulation One, whom Taransky rechristens Simone, is the filmmaker’s ideal: a beautiful, graceful, and infinitely adaptable actress who makes no demands. She exists entirely as she is and follows Taransky’s directions without question. Her human costars, who have frequently grown indolent in their fame, find themselves inspired to resume improving themselves. Studio executives count their receipts. Nobody ever questions why they’ve never met Simone, who gets inserted in postproduction.

Al Pacino plays Victor Taransky much like he played Sonny Wortzik in Dog Day Afternoon, as a frazzled wreck whose failures push him to take extreme measures. Like Wortzik, Taransky doesn’t know how to control the monster he’s created. (For purely plot reasons, the programmer who wrote Simone’s code is excluded from the story.) He simply wants to finish his latest big-screen extravaganza after his designated star abandons the set.

Canadian model Rachel Roberts plays Simone as an Anglo-American icon of fair-skinned beauty. Roberts had done some advertising campaigns, but had no prior acting credits, making her, like Simone, a complete cypher. To enhance the illusion, the theatrical release didn’t include Roberts’ name; she wasn’t added to the credits until the home media release. Perhaps learning from this movie’s message, Roberts chose to avoid stardom, pursuing only occasional guest roles.

Rachel Roberts in her only starring role, as the title character in Andrew Niccol's S1m0ne

Simone salvages not only Taransky’s picture, but his foundering career. Audiences, costars, and studio execs love her. Taransky struggles to handle the sudden demand for his newest discovery, whom he cannot admit is phony. Managing Simone’s career quickly becomes his full-time job, one that keeps him away from the family whom he already barely knows. Taransky invented Simone to control her, but before long, she controls him.

Everyone seemingly loves Simone. But the longer we watch, the clearer it becomes that nobody really loves Simone; they imbue her with their favorite virtues, and idolize the myth they’ve created. The movie includes a post-credits scene, a relative rarity pre-MCU, encapsulating this perfectly: a moon-eyed fan watches rigged footage of Simone and locks onto one insignificant detail. From that, he deduces they’re star-crossed, if only he could meet her.

Again, Taransky initially loves Simone because she makes no demands whatsoever. Contrast this with his snippy studio-chosen star, played by Winona Ryder, whose ever-shifting demands become costlier than his actual shooting budget. But the fewer demands Simone makes, the more demands Taransky starts receiving from other stakeholders. Everyone wants something from her: money, art, public morals. Taransky, the only one who knows how to operate her program, has to deliver.

These aren’t fiddling issues. The exact reasons Victor Taransky initially loves Simone are the exact reasons the AMPTP recently threatened to replace background extras with scanned images. Hollywood wants compliant actors who don’t expect to be paid, respected, or kept safe. Lucasfilm, a Disney subsidiary, owns James Earl Jones’ voice, ensuring he’ll continue performing Darth Vader, for free, long after he’s laid in clay.

The whole point of Simone is that the Hollywood mogul thinks he’ll control her; the whole lesson is that he’s wrong. The very traits of compliance and adaptability which Taransky loves increase the demands laid upon him. His attempts to disavow Simone only create new problems: not only do studio execs resent the lost revenue, but audiences resent losing the icon in whom they saw their own supposed virtues.

Writer-director Andrew Niccol’s previous filmography includes Gattaca and The Truman Show, movies about the futility of chasing perfection and control. This is Niccol’s first attempt at comedy, which perhaps threw reviewers, who didn’t always grasp his dry, understated style. Though Niccol offers only occasional laugh-out-loud moments, his deft irony underscores the absurdity of the situation. And it presciently foreshadows the path Hollywood has taken since.

Tuesday, November 21, 2023

Black Afterlives Matter, Part II

Cadwell Turnbull, We Are the Crisis: a Novel

This review follows the book reviewed in Black Afterlives Matter

Two years after werewolves, vampires, and shapeshifters revealed themselves on the streets of Boston, some “monsters” are settling into healthy lives. Others, not so much. A faction of monsters have profited handsomely from their adversarial relationship with humans, and aren’t willing to relinquish their advantage. And some humans resent the changes they didn’t ask for, forming anti-monster vigilante groups in response. Something must give; the only question is what.

Volume Two of Cadwell Turnbull’s Convergence Saga drops two years after the first, which is somewhat awkward, since Turnbull provides few refreshers for veteran readers. I remember liking the first volume, with its blend of literary and genre conventions, its character-driven story structure, and its experimental use of a narrative voice that has come unstuck from the story. But I don’t remember his cast of thousands or their intricate relationships.

Ridley, Laina, and Rebecca have lost their werewolf pack, and someone doesn’t want them to find it. They try investigating the disappearances, and realize they can’t do it alone. So, against the advice of fellow “monsters,” they attempt to organize the monster movement and create a sense of solidarity. Unfortunately, as disfranchised peoples have always discovered, you can’t organize without drawing attention to yourself; the Black Hand starts hunting them.

Teenage Dragon enjoys the freedom he’s encountered since escaping a private collector’s perverted zoo. But the trade-off to freedom is remaining incognito, concealing the fire-breathing force of nature he truly is. The slightest slip means his human allies pay the price—as he learns when he comes home to find his adoptive human parents murdered. His friends scramble to compensate, but Dragon still lives with a target on his back.

Sondra has left public service to protect her secret shapeshifter identity. She attempts to live as a soft-spoken community organizer in the U.S. Virgin Islands, a remote American outpost that offers the opportunity to experiment with revisionist economic models. (Models which Sondra explains volubly.) But she can’t outrun her family’s history as embodiments of the islands’ primordial elements, and someone seems eager to expose her secrets in public.

Cadwell Turnbull

As these sprawling synopses imply, Turnbull doesn’t really write one novel. Basically, he’s written four intersecting novellas around the same theme. As the Convergence Saga title indicates, the stories converge toward a unified climax, but for most of the book, Turnbull’s characters occupy their own worlds, with their own conflicts; sometimes, their stories seem to contradict one another. The resolution of that apparent contradiction is part of the payoff.

Consistent with the previous volume, Turnbull doesn’t blush to spotlight his story’s parallels with real-world issues. The previous novel dealt with the collisions between majority-led police power and minority populations. This novel carries those same stories forward, but without the same torch-wielding vigor. Turnbull still deals with racial issues, but not necessarily directly; he in fact takes great pains to avoid mentioning his characters’ race, unless they mention it themselves.

Instead, Turnbull mainly inveighs against economic injustice. He repeats the words “cooperative” and “solidarity” heavily, alongside other revolutionary economic buzzwords. One of Turnbull’s protagonists, Sondra, has left public service to organize Mondragon-style worker cooperatives. His other protagonists organize against hatred under the cover of economic solidarity, while his antagonists disguise their bigotry behind claims of economic grievance.

This does require some level of patience. Much as I enjoy Turnbull’s story overall, it nevertheless sometimes feels like he’s lecturing his readers, in passages that expound his themes but don’t advance his story. This volume is of fairly average length for a mass-market genre novel in the current market, but probably could’ve been fifty pages shorter without the economic theorizing. Even though it’s a theory I personally find admirable.

That said, Turnbull writes about the forces that turn ordinary people into “monsters” and chronic outsiders, and economics is one of those forces. It’s unlikely he could entirely excise the theorizing without short-changing his themes. Turnbull wants you to think, not only about what happens to these characters, but about why it happens, what forces outside individual control hastened this conflict, even before these characters fell backward into it.

Hovering over everything is the narrator, an enigmatic figure whose relationship to Cadwell Turnbull is, let’s say, vexed. Like the characters, the narrator only wants answers. Unlike the characters, the narrator has become unhitched from the story, and understands himself as a narrator. This forces him to reckon with why, if he’s telling the story, he can’t see where it’s headed. That question remains unresolved, postponed until Volume Three.

Friday, November 17, 2023

Meg Myers Speaks a Cold and Distant Truth

Meg Myers, TZIA

I needed longer than usual to embrace Meg Myers’ third LP-length album, not because of the music, but because of her amended image. Her previous albums foregrounded her beauty, but in ways that subverted White Euro-American standards. Her redesign into a strange, Star Trek-like dominatrix seemed too abrupt. Then somebody reminded me of David Bowie’s Diamond Dogs album, with its body horror-influenced art, and I finally glimpsed Myers’ intent.

Like Bowie, Myers has apparently decided to periodically reinvent herself to ensure that she, and her audience, never become complacent. This new image accompanies Myers’ rejection of the “Big Sad” character she’s previously played. This album contains several songs explicitly declaring how she’s no longer beholden to the demons from her past. Which is personally empowering, sure; but as art, this album feels more like a TED Talk than music.

Several tracks have lyrics so declarative, I can only call them thesis statements. Lines like “I know the truth is inside of me, I hold the key” (from “A New Society”) or “A call for all the people, Who stand for what is right, From different places, We all unite” (from “Sophia <144>”) bespeak the energy Myers wants to convey. She’s no longer content describing her pains from a personal, introspective angle. She’d rather unify listeners in rebellion against the conditions that made those pains possible.

This puts me, the listener, in an awkward position. I respect the hippie-esque protest anthem motivation. Pop music has a long history of demanding the world do better, that it show more respect to those most abused by our culture and economy. Many of these songs, written in a very square 4/4 time, are perfect for marching on public squares and national monuments. Myers clearly wants to create a pop-art manifesto for a post-Me-Too world.

Yet something feels missing. Most tracks have a synth-driven background with a programmed percussion track—the personnel list names a human drummer on only two songs. This results in hypnotic, looping rhythms on most songs, like a heavier “Hearts of Space” trance. Looking back on classic protest songs, like “Peace Train” or “Fortunate Son,” these songs shared an important quality: audiences could sing along. That’s far harder here.

Meg Myers

Myers’ thesis statements are well-grounded, mostly. She decries the ways culture moralistically controls women’s sexuality, while ironically foregrounding sex, with lines like “Victimized, I’ve been tied to bedposts” (from “Me”). She excoriates the ways women, including herself, manage men’s emotions for so long that they become deaf to their own needs, in “My Mirror.” The song “Searching For the Truth” begins with the self-explanatory lines:

Everybody’s hiding from their fears
Spinning in their cycles all alone
With a hand over one eye
Disconnected pieces of a whole

I appreciate these messages, which would arguably make good stump speeches. But since Myers tells us how to receive her songs directly in the lyrics, and we’d struggle to sing along with her trance-inducing rhythms, I struggle to understand why she wrote them as songs. She isn’t inviting us listeners on a journey, she’s lecturing to us based on her hard-won experience. Basically she’s channeling her inner indie-pop Rebecca Solnit.

As a result, this album’s most intensely felt song is probably the only one she didn’t co-write. When I saw the title “Numb” on the track listing, I assumed she’d re-recorded her own song of the same title. Nope, she’s covered Linkin Park’s icky 2003 hate-lust anthem, possibly on a dare. Her understated arrangement here serves her message, as a synth drone and Myers herself on harp create a disconnected, ethereal soundscape. The collision with the original version is palpable.

In the decade since her first EP, Myers has reinvented herself constantly. Among other things, she’s shaved her head after each album tour. She’s given conflicting reviews of her earliest recordings, sometimes claiming she was constrained and controlled, other times claiming her collaborations with Andrew Rosen and Atlantic Records brought her to technical musical maturity. Maybe that explains this album’s line: “It’s time to give yourself all of the love you’ve been missing.”

Despite what I’ve said, this album does have admirable songs. Tracks like “Bluebird” and “Waste of Confetti” stop the lecturing tone and instead invite listeners on Myers’ unique journey. But they don’t come together to create an album the way her previous two LPs did. Perhaps this is a transitional album. I’ve previously felt drawn to Meg Myers’ personal, confessional lyric style. Sadly, it feels she’s now holding us at arm’s length.