Monday, December 31, 2018

The Bird Box Paradox

Netflix’s sudden hit Bird Box opens with Sandra Bullock, an actress who has attempted (with mixed success) to stretch away from romantic comedies and action potboilers, bringing two children together and immediately scolding them. She demands absolute compliance, saying that if they don’t obey, they’ll die. Then she blindfolds them, ensuring they cannot see anything contrary to her description of the world.

I often mock sudden, overnight sensations. Media attractions that flare up quickly tend to burn out just as quickly, like disco. Being human, though, I accepted the hype and watched anyway. Despite my learned resistance to such overnight successes, I found the product thought-provoking and disappointing in equal measure.

Having established its moment of post-apocalyptic, authoritarian dread, the movie flashes back to the moment society ended. The morning starts with Bullock’s Malorie going through the motions of life. She’s pregnant but unattached, an artist but massively lonely: we’ve long accepted that artists, in media, are either drunken sybarites or unwilling loners. Middle ground is impossible.

As quickly as normality is established, it’s overthrown. Seemingly ordinary people begin smashing their faces against plate glass. Pedestrians step in front of moving cars. The shift from suburban blandness to massive self-destruction happens so suddenly that we almost can’t explain it… except, importantly, that it started in Russia. This matters to our analysis later.

Like Blair Witch or Cloverfield, this film generates horror through American fears of losing control. It mixes the worst of these predecessors, which, the moment audiences think about them, make no sense and aren’t scary. Whether it’s Cloverfield’s chaotic, destructive streets, or Blair Witch’s man-eating woods, this movie never passes up an unfrightening moment.

And, I’m sorry, nobody but Shirley Jackson and Lord Dunsany ever withheld their monster and sustained horror. This story’s monster is ancillary. It matters only because it gets the ball rolling; real fear emerges from isolation and paranoia. The terrible creature that shows us something so horrifying that it creates an irresistible urge to self-destruction may have made sense in the abstract. But it never convinces me. Not here, or in Cloverfield or Blair Witch.

Sandra Bullock in a widely circulated promo still from Bird Box.
The heart of the movie takes place inside a California McMansion. The characters can’t venture outside, can’t discover anything about the outside world, can’t see beyond their walls. If they venture outdoors, they must remain blindfolded, completely cut off from eyesight, the sense through which sighted humans receive the largest fraction of their information. They must learn, know, and discover nothing. It’s the reductio ad absurdum of suburban isolation.

One friend described Bird Box as resembling the movie A Quiet Place, except with blindness rather than silence. I disagree. In A Quiet Place, people must make no sound. They must remain unnoticed, and they must achieve that by creating no information. In Bird Box, however, they cannot remain unnoticed. Instead, they must seek to know nothing themselves. They must willfully, consciously, seek to receive no information. They must permanently know nothing more than they did at the beginning of the movie.

As a result, the characters gain, and preserve, the ignorance they had the moment they locked the front door. Sandra Bullock and John Malkovich clash because they have different reads on what’s happening, and how to respond. But they’re forbidden to investigate any further and gain new knowledge. Their perceptions, once fixed, are unchangeable, because no new information is allowed in the house.

So, follow me here: a plague of ignorance begins in Russia and overtakes America. It makes people self-destructive, and fixes our prejudices immovably. It flourishes by selling our own fears back to us. It makes people willing to kill one another over whether to let anybody else into the compound. Judging from Bullock’s character arc, people become increasingly intolerant of attempts to fix the problem. And the plague of ignorance is incurable, because the information that could combat it is, by nature, tainted, and therefore by definition untrustworthy.

Have I missed anything?

Whatever horror this movie generates doesn’t come from the monster. It comes from understanding that this story presents a microcosm of American life right now. I like the idea. I believe the best literature strikes a chord because, on some fundamental level, it’s about us, the audience.

Yet, as M. Night Shyamalan keeps proving, keeping the audience ignorant of the monster seldom heightens horror; it generally leaves viewers restive. This movie has the potential to be about so much more. But that’s a potential it never quite achieves.

Wednesday, December 26, 2018

Deep Space and the African Frontier

1001 Books To Read Before Your Kindle Battery Dies, Part 95
Nnedi Okorafor, Binti


On the cusp of young adulthood, Binti, a maiden of the Himba people of Namibia, receives word she’s been accepted to Oomza University, the galaxy’s most prestigious institution of higher learning. But attending means leaving her homeland, something no Himba has ever attempted. Still, Binti accepts her family’s scorn, because her gift for mathematics offers so much potential. Only when she’s off Earth, however, does she discover how violent and fraught the galaxy really is.

Attempting to summarize this novella feels slightly disingenuous, because author Okorafor deliberately subverts our expectations. Early pages (this book is so short, it doesn’t even have chapter divisions) channel familiar science fiction stereotypes, from Isaac Asimov and Orson Scott Card to Starfleet Academy. The “young person venturing to discover herself among the stars” boilerplate is so familiar, even veteran genre authors struggle to prevent it from sounding overused and tired.

Then, somewhere around page 20, Okorafor completely reverses everything. Exactly what happens is too important to reveal. And though page 20 sounds early, in a book running under 100 pages, that’s a pretty significant investment. Okorafor deliberately leads audiences into thinking they’re reading comfy sci-fi boilerplate, then upends what we’ve been coached to expect. Suddenly Binti is alone, scared, and certain she’s going to die in the silence between the stars.

This encapsulates the theme, not only of this novella, but of much of Okorafor’s work. Caught in the push-pull between tradition and modernity, Okorafor’s heroines often struggle to mark a path that embraces the one, without excluding the other. Can a gifted but provincial girl from a settled people really join an integrated, multicultural galaxy? Must she choose between her people’s old-fashioned ways, and the larger society’s demands for assimilation? I’ll answer beforehand: not easily.

Nnedi Okorafor
Binti feels her people’s call. She wants what her family wants, including useful work, marriage, and a family among her kinfolk and ancestors. But she believes she can’t achieve her true potential living in her father’s shadow, making pocket technology for the village. (Oops, the people’s purity is already compromised, but in a way they can conveniently rationalize.) With the bravado only teenagers can muster, she flees her homeland after nightfall, running away to join the stars.

If only everything were as morally distinct as it appears from ground level. Binti soon discovers Oomza University’s scientists have a history of taking things that don’t belong to them. Their justification is purely humane and reasonable: we need to know and understand the various species of the galaxy. But the Meduse, a poor but dauntless species, don’t forget when something sacred has been stolen from them. Soon Binti is the only one who can broker a peaceful solution.

The Himba are a real people, resident in northern Namibia. (Daughter of Nigerian immigrants, Okorafor often uses African peoples in her fiction.) Okorafor uses their famous rituals, adapted appropriately to reflect her technologically altered future, to express a people who have retained their identity amid the galaxy’s push for mechanical sameness. Binti’s willingness to retain some of her people’s rituals, while judiciously discarding others, becomes the trait that saves her from someone else’s war.

Okorafor calls her style “magic futurism.” Many of her books incorporate aspects of folk magic and traditional beliefs. However, she concedes, the constant march of technology, and the cultural and environmental changes it forces, mean peoples aren’t eternal. Societies change under pressure; even Binti’s very traditional father makes his living building “astrolabes,” similar to tablet computers, but much more powerful and complex. His people’s rituals make him a technological adept, a sort of futurist wizard.

Okorafor and her publisher originally intended this novella to be freestanding, separate from her other books. She mostly writes for the lucrative Young Adult market, though she’s written a handful of novels for adults, too. But this novella’s success (Nebula and Hugo awards, plus multiple further nominations) led her to create a trilogy. The second and third volumes are longer works, expanding themes introduced in the first novella. They’re intended to be read in sequence.

Dedicated readers could polish off this novella in a single afternoon. Despite the density of Okorafor’s internationalist themes, it doesn’t demand slow, ponderous reading. It has enough science fantasy gee-whiz to propel audiences along briskly, never bogging down in minutiae. But it also has sufficient threads driving it, of modernity versus tradition, of internationalism versus identity, of technology versus family, to satisfy intellectually demanding readers. It’s complex, yet quick. And it lingers long after you set it down.

Monday, December 24, 2018

Let's End the Unending War

Donald Trump
Here’s something I bet you never thought you’d hear me say:

I support Donald Trump’s decision to draw down American troop commitments in Syria and Afghanistan.

The President’s decision to reduce, and potentially even eliminate, America’s troop commitment has come under intense scrutiny. Policy wonks and Pentagon specialists have criticized it. This decision possibly precipitated James Mattis’s resignation as Secretary of Defense. The punditocracy has made bank publishing solemn “think pieces” about this decision, the reasoning that went into it, and why it’s an ostensibly bad idea.

Admittedly, this action probably reflects Trump’s well-known strategy of impulsive decisions driven by cable news commentary. I doubt altogether whether Trump consulted his top generals, defense specialists, or diplomats before making this choice. It will now be incumbent on somebody deep within the bowels of the West Wing to craft the actual withdrawal strategy that Trump probably selected while sitting up in bed.

That being said, it’s the right choice.

America has maintained troop commitments in the Persian Gulf and the Levant for decades. After the massive troop commitment of Operation Desert Storm, America built hardened bases in Saudi Arabia, a decision so abhorrent to many Muslims that Osama bin Laden cited it, in English, on the front page of the al-Qaeda website. We’ve been shooting at, or actively preparing to shoot at, somebody in that region for nearly thirty years now.

An entire generation of young Americans will soon be old enough to enlist for the first time, and they’ve never known a world where America wasn’t engaged in a shooting conflict with somebody in that region. Active but undeclared war in the Middle East has become, for them, an American birthright. Whether it’s al-Qaeda, Iraq, ISIS, Hamas, or the Assad regime, they simply enter the military secure in the knowledge that they’ll have a deployment over there, shooting at somebody conveniently non-English-speaking and brown.

Barack Obama
And what have we gained for this infinite commitment? We haven’t secured peace between Israel and Palestine. We haven’t prevented Iraqi pogroms against Jews, Christians, and Sunnis; we may even have hastened them. We haven’t even secured that sweet, sweet Iraqi petroleum that had Halliburton drooling on the evening news. I struggle to imagine we’ve gained anything.

In short, we’ve committed American troops, and their supporting resources, to a war with little purpose, no reward, and no visible exit strategy. Our commitment in Afghanistan has exceeded two of our longest military commitments ever: Vietnam and the Second Seminole War. In terms of duration, strategy, and sloppiness, this war is now comparable to only one prior conflict: our long guerrilla war against the Apache Nation.

Donald Trump campaigned, partly, on a pledge to draw down America’s overseas military commitments. He made strident arguments against President George W. Bush’s actions in Iraq and Afghanistan. But his statements also attracted support, sometimes grudging, from Democrats weary of President Obama’s protracted drone war campaigns. Obama didn’t end America’s overseas wars; he just moved many of them off the DoD’s books and onto the intelligence community’s, with disastrous international PR consequences.

Thus, the President’s decision to draw down Syrian and Afghan commitments may have been, as some pundits suggest, a cynical ploy to fulfill a delayed campaign promise, at a time when his political capital is running low. But y’know what? I can live with that. Because we’ve spent nearly half my life putting American soldiers on the firing line; converting American wealth into war materiel to destroy, rather than roads, schools, and research facilities to create; and squandering America’s moral standing in the larger world.

Please don’t mistake me. This doesn’t make me a Trumpista. He’s anchored his entire political career on naked racist appeals, squandered our opportunities to address global warming, and made us into an international laughingstock. It may take years, even generations, to undo the damage Trump has inflicted on America’s global standing. I’m not his ally now.

George W. Bush
But I’d rather encourage him when he does something right, than merely snipe when he does something wrong. That’s how the marketplace of ideas works, despite what Internet discourse has implied. Maybe I don’t agree with the President’s reasoning on this decision. But I agree with his action.

This war has continued too damn long, with vague purpose, no end, and massive cost. Yes, things are bad over there, but we’ve accomplished little to improve them. And maybe Russia and Iran will gain dominion in that region, but so what? Let them have that morass. Let them pay the high international costs. It shouldn’t be America’s responsibility anymore.

Friday, December 21, 2018

Rudolph the Dog-Eared Stereotype


It’s Rudolph Meme season again! You know the one I mean (above), a cheap pseudo-Marxist reading of the song that suggests your value, if you’re different, derives from your instrumental utility. I remember figuring that reading out fifteen years ago, as a sophomore-level English major excessively proud of having discovered inequality in classic literature. And like undergrads everywhere, I had to share my discovery with everyone.

I was wrong. And dumb.

I've suggested recently that it does songs a disservice to hear them without their context. Words that sound pretty awful at surface level these days come from a time when they meant something very different. Songs have unique problems, since we listen passively, often while doing other things. But that makes it more important, not less, that we understand the historical milieu from which music originates.

In Rudolph’s case, that milieu is the Great Depression, and its kissing cousin, World War II. “Rudolph the Red-Nosed Reindeer” began life as a coloring booklet published as a supplement to the 1939 Montgomery Ward catalog. For readers younger than me, the catalogs published by the Sears & Roebuck or Montgomery Ward companies were the Amazon.com of their day, making luxury shopping and top-of-the-range goods available to Americans living outside major cities.

“Monkey Ward,” as people called it, published the first Rudolph story, written and illustrated by poorly-paid copywriter Robert L. May. Here’s where we get the first glimmerings of Rudolph’s backstory: according to one source, Robert May grew up in a relatively wealthy family that lost everything in the 1929 stock market crash. May took the Monkey Ward job because he needed the money, a problem compounded by needing to cover his wife’s very expensive cancer treatments.

So May was Jewish, at a time when anti-Semitism was still commonplace and socially acceptable in America. He was poor, and not only poor, but newly poor. And he had cancer in his family, at a time when Americans said “cancer” the way we now say “HIV.” Safe to say, this guy had firsthand experience with outsidership in a closed society.

Yes, the other reindeer mock Rudolph for being different. But if we recontextualize Rudolph’s difference away from mere physical freakishness, reading his red nose as symbolic of Jewishness, poverty, or both, we see something completely different. Rudolph isn’t a nonconformist, as the popular meme above implies. He’s a member of an oppressed minority, whether oppressed by White people or by capitalism.

Take it further. We don’t know much about Robert May’s personal life and knowledge, but it’s conceivable he saw the connection between the red nose, a highly visible symbol of difference, and the yellow stars forced on his fellow Jews in Europe. The song doesn’t specify what “reindeer games” Rudolph couldn’t join. But European Jews couldn’t hold certain trades, own their own houses, or save money past fixed limits.

The parallels start getting chilling.

Like you, I grew up watching the 1964 stop-motion Rudolph special, one of the oldest programs that still regularly runs on network TV. That depicts Rudolph as the victim of routine schoolyard bullying, a rebel against postwar America’s White suburban mediocrity. In some ways this makes him almost a proto-hippie. It’s no coincidence that this special hit American airwaves just as the first Baby Boomers began reaching adulthood.

But both this adaptation, and the song we sang (written by May’s brother-in-law Johnny Marks), came along later. The song didn’t appear until 1949, ten years after May’s coloring book. The TV special didn’t debut for a full quarter-century after May’s story. They’re both distorted by America’s involvement in World War II, and the national realignment that happened afterward. Our understanding of Rudolph comes from second-generation stories with their own agenda.

Robert May’s original story, read in light of the impoverished Jewish experience in 1939, feels completely different. Rudolph becomes somebody not just mocked and belittled, but actively oppressed, because a tyrannical majority believes the power structure they’ve created is foreordained. Rudolph needs rescue by the North Pole’s national leader. But that leader won’t buck the system unless imminent emergency forces his hand.

So yeah, Rudolph isn’t a Marxist finger exercise. He’s a plea for leaders to stand against systemic injustice—coupled with an acknowledgement that they won’t unless they’re forced to. In that way, when we look at increases in racism, poverty, and war-mongering in Earth’s wealthiest nations today, Rudolph the Red-Nosed Reindeer is arguably as relevant now as he’s ever been.

Follow-up: Rudolph the Dog-Eared Stereotype Part Two

Wednesday, December 19, 2018

Theatre of the Unremembered

Jonathan Gillman, Looking In

Many of us who participate in theatre have dreams of transforming the world around us. But it’s hard to change adults’ fixed minds, especially when faced with the constant demand for “entertainment.” If only we knew how to reach audiences before their expectations calcified, before they became deaf to art’s social potential. If only we could reach a broader, more diverse audience...

Hartford, Connecticut’s Looking In Theatre creates topical content for teenagers, by teenagers. Director Jonathan Gillman auditions interested teens from around the state, each of whom wants in for their own reasons. He doesn’t favor art stars, or teenage proto-celebrities who do well in high school theatre; he favors teens willing to infuse some aspect of themselves into their scenes. This means he often gets teens whose lives reflect the struggles they depict.

And what struggles those are. Gillman and his performers, who act without props or costumes or sets, create scenes where teenagers play teenagers, facing the problems which plague teens everywhere. From topics so familiar they’ve become banal, like drugs and peer pressure, to topics adults often fear to address directly, like relationship violence and rape, Gillman’s performers delve unflinchingly into issues teens face daily.

Gillman uses Second City-like techniques to guide his performers in creating their own content. By his own admission, if he wrote scenes for these actors, the performances would probably lack immediacy. But the performers choose topics close to their hearts, and work intimately to turn raw experience into something intense onstage. Gillman coaches them to reach the next level, but doesn’t force their content into staid directorial visions.

This book is Gillman’s account of one year directing Looking In. He admits, from the beginning, his account is lightly fictionalized, to protect his performers’ privacy. And what privacy they need to protect! The intimacy of dealing with other teens on topics of such searing urgency often results in personal disclosures. The boy concerned with bullying? He was locked in his own locker last year. The girl concerned with parental roles has two alcoholic parents.

Jonathan Gillman
Early on, Gillman emphasizes he doesn’t choose his performers for their personal problems. But troubled teens generally find themselves attracted to his program. One kid he describes was able to accept his own homosexuality after seeing gay characters depicted in his program as perfectly normal. Tellingly, this performer’s ability to accept his homosexuality gives him power to play some of the program’s most repellently heterosexual characters. Yay for art.

Gillman doesn’t burden readers with lengthy philosophical justifications or ruminations about teen experience. Like his theatre itself, he doesn’t preach; he merely shines a light. However, for the more pointy-headed among us, his philosophical familiarity with thinkers like Paulo Freire, Bertolt Brecht, or Augusto Boal is visible. He creates art specifically to expose audiences to ideas and places they’ve never encountered before… importantly, without being boring.

The Second City comparison above isn’t flippant. Having guided performers through creating their individual scenes, Gillman also creates set lists so his actors’ separate creations come together into a single, unified play. Actors will carry one character across several scenes, allowing for development from situation to situation. This results, in Gillman’s telling, in a convergence of ideas, where audiences realize individual struggles reflect larger problems.

And the cumulative experience isn’t merely preachy. Gillman emphasizes the importance of humor in reaching audiences. Topics like, say, the pressure for sex and the liberty to refuse, can be funny, when stripped of ninth-grade health class pedantry. Gillman doesn’t want to heap topical performances on audiences’ heads simply because they’re important. He wants viewers to have a complete, immersive experience in teenage anomie.

Gillman doesn’t provide a handbook for creating programs similar to his. There’s no manifesto here, no list of meaningful improv exercises for creating characters or heightening situations. But he describes the rehearsal process in such exacting detail that others willing to recreate this experience at home could follow his guide. Thus Gillman comes across less as a prescriptive lecturer than as a mentor walking us patiently through the steps of program creation.

Don’t mistake this for “important” nonfiction. Gillman crafts a memoir of leading youths through the creative process, and their own struggles, with open heart and novel-like pacing. And he’s unstinting about his own difficulties as a married man far older than his performers, continuing to lead these teenagers. Like his actors, Gillman crafts, through experience, something more profound than fiction could dump in our laps. In telling his story forthrightly, he encourages us to do better.

Monday, December 17, 2018

Star Wars and the Modern Mythology Battles, Part Two

This essay is a follow-up to Star Wars and the Modern Mythology Battles, Part One

All that being said, my ultimate problem isn’t whether Disney Star Wars has changed too much or too little. My problem is that it presents an essentially solid-state universe, in which advancements from prior episodes don’t actually change much subsequently. This claim wouldn’t make sense with the pre-Disney movies, where the galaxy moved from democracy to tyranny, then back. But Walt’s people irrevocably changed the concept of justice as it applies in Lucas’ mythic galaxy.

Shortly after The Force Awakens debuted, I noted that the movie established war as the galaxy’s essential state. Our protagonists can’t stop fighting, because the galaxy has ordained that role for them. Instead of accepting a well-earned appointment to the galaxy’s equivalent of the executive cabinet, Princess Leia returns to fighting the same war that’s dominated her life for at least forty years. Hell, even a military legend like Chesty Puller got to retire after thirty-seven years.

About three months ago, an internet critic calling himself Thor Skywalker noted that the Disney Star Wars works primarily by walking back everything the prior six movies accomplished, returning the characters to square one. I normally disregard self-made critics, because the internet gives every egotistical whiner a megaphone. (Don’t look at me that way.) But this person raises an important point I hadn’t considered: Disney Star Wars fundamentally changes what the prior movies were about.

George Lucas’ original movies contradicted themselves. His original trilogy pitted intrepid, individualist characters against an autocratic government, seeking to re-establish democracy. The prequels seemingly questioned democracy, and focused more on restoring balance to the Force. Either explanation makes sense, and either explanation is supported by the resolution of Episode VI. Sure, the prequels changed character motivations for Obi-Wan and Yoda, and often struggled with continuity. But ultimately, wobbles notwithstanding, they all reached the same destination.



Episode VII undermines everything. It changes the names of the war’s protagonists, and replaces Emperor Palpatine with Supreme Leader Snookie (or whatever), but it structurally declares that the first six movies accomplished nothing. By the end of Episode VIII, the Resistance, which is the Rebellion rebranded for 21st-Century America, is so small, its entire membership can fit inside the Millennium Falcon. Yet we’re promised that, in one more movie, this sub-arc will somehow resolve itself.

George Lucas famously claimed he wrote the original Star Wars script with two books on his desk: Webster’s Dictionary and Joseph Campbell’s The Hero With a Thousand Faces. The latter describes the mythological journey heroes in multiple religions and folk traditions undergo. The seemingly ordinary hero leaves home, journeys through the magical realm, and is transformed. But to Campbell, the final stage matters as much as the journey: the mythic hero must ultimately return home.

Most great mythologies spend as much time on the hero’s return as on his [sic] leaving and journey. Jesus must venture into the desert and be tempted, but must also return to carry the message. Sinbad must sail into lands of monsters and terror, but must also return to tell his story. And if Hollywood’s three-act structure gives short shrift to denouements, the idea that Luke Skywalker must return from war remains at least implicit.


These two films have essentially stolen every accomplishment Lucas created. They destroyed two Death Stars? Ta-da, Starkiller Base! They ended the Sith? Eff you, say the Knights of Ren. Everything that we’d consider an accomplishment has been reversed. The characters cannot return from war. They either keep fighting until mortality catches them, like Princess Leia, or they find ways to retreat from reality, like Han Solo returning to smuggling, or Luke in his self-imposed exile.

In the first part of this essay, I complained that the Disney Star Wars “feels bound by cultural impediments it doesn’t actually have.” By that I meant the values defining “normal” in 1977, specifically Whiteness, don’t apply forty years later. Here I expand that meaning to say: the cultural myth that mattered in 1977 isn’t the myth that defines us in 2018. Lucas, writing in Vietnam’s immediate aftermath, needed to reckon with war and identity.

Though military embarrassments in Iraq and Afghanistan haunt America today, the cultural question is: how to govern a shattered, self-destructive coalition of millions? The Disney Star Wars could’ve made a powerful, profitable myth from Han, Leia, and Luke rebuilding a galaxy left polarized and distrustful by the Empire’s retreat. But Disney, the cultural juggernaut that, with its Fox acquisition, could soon control American culture, flees the present. The result isn’t mythology, it’s a time capsule.

Monday, December 10, 2018

Star Wars and the Modern Mythology Battles, Part One

Passing the baton of conflict onto the next generation

To call Star Wars fans’ reaction to recent movies, and in particular The Last Jedi, “mixed,” would be an understatement. The movies have indeed breathed new life into a franchise over forty years old, and garnered new generations of fans. However, the death of founding heroes earned well-deserved pushback from the first generation of fans, and the way the stories have thus far basically recycled tropes from the original movies shows a catastrophic lack of imagination.

This first-generation fan has mixed feelings. Certain complaints against the third trilogy are completely misplaced. As I’ve written elsewhere, complaints against Han Solo’s death miss the point: Solo lived a life standing against injustice, whether through economic resistance as a smuggler, or military resistance as a Rebel. He dedicated his life to opposing injustice, and earned a heroic death, standing up, speaking the truths of his heart. If only the franchise actually let him die.

Pettier opposition has had more destructive consequences. The flippant online harassment campaign against Kelly Marie Tran, co-star of The Last Jedi, revealed latent racism and sexism at the heart of Star Wars fandom. A handful of people were plain-spoken about hating a WOC in their beloved franchise, but far more used dog-whistle language. One by one, they could plausibly deny their racism. Lumped together, the bigoted pattern becomes impossible to disregard… though some insist on doing so.

This hatred has long roots in Star Wars. Given his incorporation of Buddhist and Islamic mythology into the original trilogy, I’d have difficulty justifying calling George Lucas racist. However, the first movie, latterly retitled A New Hope, originally had almost no non-white characters. While Lucas was perhaps not personally racist, he shared the institutional racism of 1977, an attitude that presumes White people are “normal,” and POC are “racial.” This attitude is tough to shake.

When several critics, including “Miss Manners” Judith Martin, pointed out his galaxy’s chillingly Caucasian complexion, Lucas had dignity and honesty enough to correct himself. But his corrections still existed within the context of post-Vietnam Hollywood, and therefore didn’t change much. And improved less. He started by incorporating more Black actors into the backgrounds of action scenes: there are multiple Black Rebels on Hoth, and of course, they die like flies. As they do in Hollywood.

But Lucas also invented Lando Calrissian, who isn’t a particular advancement. Lando’s goals, besides collaboration (read: assimilation) with the Empire, include an attempt to seduce a White woman. Maybe this isn’t as problematic in a galaxy far, far away, but Lucas basically retreats here into common racial stereotypes. Lando resembles an escapee from a Schlitz Malt Liquor ad, a depiction not improved by Donald Glover’s sexually omnivorous portrayal in the Han Solo movie.

Rey and Finn fleeing the Luftwaffe—erm, I mean TIE fighters—strafing their squatter camp

So yeah, questionable racial depictions run deep in Star Wars. This attitude carries into the newest trilogy, where the Asian woman adheres to a mystical woo-woo religion, and a Black woman, Lupita Nyong’o, appears only as voice to a computer-generated character. This pattern doesn’t mindlessly reiterate the attitudes of the original trilogy, as one central character is a Black man. But it accepts the idea that White, even in a distant galaxy, is somehow normal.

When Doctor Who cast a woman, for the first time in over fifty years, in the lead, I took exception when activists complained that the character was still White. The “C” in “BBC,” I noted, stood for “Corporation,” and the BBC needed to acknowledge its corporate customer base, which is still preponderantly White. I granted Doctor Who latitude in reinventing its lead—especially after Peter Capaldi’s final episode, which addressed the character’s historic shortcomings directly.

I’m less forgiving of Star Wars for one reason: Rey, Finn, and Poe are different characters than Luke, Leia, and Han. Mark Hamill didn’t regenerate into Daisy Ridley; new actors were cast to play entirely new characters. They aren’t bound to the same kind of historic baggage the Doctor has. The Disney Star Wars unfortunately feels bound by cultural impediments it doesn’t actually have. And I can’t easily overlook the problems this causes.

George Lucas’ original Star Wars trilogy remains a landmark of American mythology… when viewed from the context of 1977. I’ll overlook certain limitations that transgress my ethics, because 1977 was a different planet. (Rimshot.) I’m less forgiving of the new trilogy because both its creators, and “fans” resisting even minor changes, espouse a social code that doesn’t exist anymore. Mythology, in the Joseph Campbell sense, is timeless, but arises from its time. And it isn’t 1977 anymore.



TO BE CONTINUED

Friday, December 7, 2018

Wilderness Planets and the Human Frontier

Kathy Tyers, Shivering World

Dr. Graysha Brady-Phillips never intended to become a frontier scientist, but she needed the money. She needed something else the colonists on planet Goddard, orbiting Epsilon Eridani, might offer: their renegade geneticists might have a cure for her incurable chronic illness. But to achieve that cure, she must earn the colonists’ trust, no mean feat when your mother actively prosecutes geneticists. And when your planet may be terminally cooling.

Novelist Kathy Tyers, like her protagonist, trained as a microbiologist, then rediscovered herself as a schoolteacher. This novel brings these threads together: Graysha works as a soils specialist, but gets her greatest joy from teaching colonist technicians to become scientists themselves. And when her frontier job assignment becomes a murder mystery, she investigates the crime using the best tools available to her: science and hard facts.

Planet Goddard is an ice ball on the fringes of human exploration. There’s no hyperspace in Graysha’s world; travel between planets requires a long, slow, Oregon Trail-like slog. This means everything is literally life-or-death, and some attempts at terraforming worlds have resulted in tragedy. This reality is compounded, in this setting, because the terraforming agency is a for-profit corporation, and if frontier worlds can’t make bank, they aren’t worth working.

Brady-Phillips, however, believes human life is worth living. That includes her own. Raised in her mother’s imposing shadow, Graysha wants a quiet life, teaching children and eventually having her own. Her genetic death sentence only compounds her problems. But a handsome colonial administrator with high ambitions reawakens her big-picture passions, especially when she discovers he possesses a spiritual confidence long missing from her life.

Kathy Tyers
Graysha’s mother runs the powerful Eugenics Board. Unlike past eugenicists, this board exists to prevent tampering with human genetics: a rogue sect of genetically engineered humans, in this world’s past, attempted to hijack human evolution, and the Eugenics Board, backed by a powerful state church, wants to prevent that ever happening again. But that means they cannot countenance any human genetic research, even when a humble schoolteacher faces a terminal condition.

If this seems like many, confounding threads, I won’t disagree. Tyers offers a massively complicated plot, so intricate I haven’t even introduced every thread. She introduces a world where advancing technology has altered everything it touches. Though her story foregrounds a scientist working in her laboratory, Tyers displays character conflicts and a complex community, battles of religion and faith, and even pop culture’s corrupting influences. The complexity is stunning.

Frontier mythology often looms large in science fiction: sometimes blatantly, as with Han Solo or Malcolm Reynolds, but more often covertly. Yet despite the promise of wild worlds untamed by human hands, the frontier frequently turns out to be worlds already occupied by native species or prior colonists. (Indeed, just like westerns.) Tyers, however, focuses on a previously unoccupied planet, and the tedious labor necessary to breathe life into a lifeless rock.

Besides this, I must acknowledge Tyers’ attention to a realistically crafted chronic illness. Her heroine has a condition she knows will eventually kill her, and which she’ll pass to her children, should she ever have any. She’s devised elaborate and subtle ways to keep moving when her illness burdens her beyond coping, and has found work-arounds to avoid the moral judgments outsiders often heap on the suffering.

All that being said, I particularly admire Tyers’ commitment to anchoring her storytelling on science. Graysha Brady-Phillips is a serious working researcher, specializing in life far removed from terrestrial origins. What goes into taking a lifeless world and making it settleable? Besides her own background, Tyers’ acknowledgements page lists several researchers she consulted, giving this story a firm scientific milieu, avoiding the rococo science favored by many paperback authors.

Tyers first published this novel in 1991, to moderate acclaim. Following her first husband’s passing, Tyers went into semi-retirement for several years, and re-emerged by revising several of her novels, emphasizing spiritual themes previously only latent in the stories. This particular novel she re-released in 2004, and again in 2018. I never read the original, but appreciate how this revision investigates spiritual themes without overwhelming real-world science.

I like being able to say a novel invites a large, diverse audience. Tyers writes the intrigue, action, and romance that novel readers desire, but grounds it in real science and plausible speculation. She’s created what Tolkien called a “secondary world,” a setting so complete and consistent that audiences feel they’ve traveled there. I grew up on science fiction, but have grown jaded recently. This feels like the books I grew up reading.

Wednesday, December 5, 2018

Baby, It's Pretty Cold In Here, Too

Frank Loesser and Lynn Garland

Seems like every December anymore, we have the debate: is “Baby It’s Cold Outside” offensive? The winter jazz standard, written by Guys and Dolls composer Frank Loesser in 1944 to sing as a novelty duet with his wife, Lynn Garland, has become infamous for a man plying a woman with alcohol and persuading her to stay with him overnight. Several radio stations have banned the song, but a few have reversed themselves under public pressure.

The controversy, which has simmered for years but really only opened up to widespread public awareness in the last five years or so, turns on how we should interpret Loesser’s narrative. On the one hand, the male voice’s disdain for the female voice’s protests, while she sings things like “What’s in this drink,” suggests he’s manipulating her. Some critics have called him a sexual predator. I’ve heard some amateur pundits call him a date rapist.

By now, we’ve all heard the counterargument, that the song is laden with post-Depression stock jokes and sexual politics. Women back then couldn’t outright consent, because of social pressures; the female voice must perforce couch her desire to stay with her lover in acceptable protests. The song advances through a sequence of arguments that Loesser’s first-generation audience would’ve found predictable and funny, and finishes with the pair singing in beautiful, consenting unison.

This year, I encountered a new counter-counter-argument, that art, like language, changes. Our definition of acceptable sexual behaviors has evolved, we now believe men should take women’s word at face value, and clinging onto a Howard Hawks-era screwball comedy interpretation yokes audiences to the past. Let things go, we’re told. The world changes, and our artistic interpretation criteria need to change with it.

I can’t completely accept this argument. All art reflects the context in which it’s created. If we cannot require audiences to understand the social milieu, we cannot really appreciate any art from a prior generation. From Ovid and Shakespeare, to sacred texts like the New Testament and the Koran, works that were downright progressive in their time look reactionary and knuckle-dragging when pulled from that historical context.

(This is especially problematic with holy texts. Whenever fundamentalists say “I’m just saying what the Bible says,” they miss an important point: everything in the Bible was written in response to something else, and that something else might not be explicit in the text.)



However, that being said, people don’t perceive all art equally. When people read books or watch plays, they tend to focus on that effort exclusively; with the exception of audiobooks, people don’t, indeed can’t, read while doing something else. By contrast, audiences frequently perceive music passively, putting the radio or iTunes on for background noise while cleaning, doing homework, or hosting a party.

So maybe, because “Baby It’s Cold Outside” doesn’t jump the same mental hurdles that “Much Ado About Nothing” does, we should accept that people cannot comprehend its context, at least not without significant coaching. It’s a top-40 song, not High Art. Yet I can’t accept that either. I’ve argued with loved ones about how to interpret popular songs before, especially when they express outdated sexual mores.

As long as “classic rock radio” exists, new generations can discover songs like Bob Dylan’s “Don’t Think Twice, It’s All Right” or Marshall Tucker’s “Heard It In a Love Song,” both of which, by today’s standards, are aggressively disrespectful to women. I’ve argued that we should hear both songs in their historical context. Forcing our understanding onto these, or any, songs sucks the artistry out, while depriving us of opportunities to step outside ourselves.

That, though, may be the most important factor in this debate: while we’ve become more attuned to people in our own context, more conscious of (for instance) women as independent beings with their own sexual motivations, we’ve also become willfully blind to anyone different. That’s why kids flip their shit over Halloween costumes, and we insist cute postwar novelty songs mimic current social mores no matter what. Because everyone has to think, act, and be like us.

These debates matter, because how we interpret the outside world becomes, ultimately, how we interpret ourselves. We have an obligation to understand other cultures, peoples, and times in their own terms, not according to standards we impose on them. Because the minute we begin insisting everyone conform to us, we become yoked to how we think in some particular moment. We become rigid and hidebound. And inevitably, we get overtaken by changing social mores, too.

Monday, December 3, 2018

The Transforming Journey of Tom Waits

1001 Albums To Hear Before Your iPod Battery Dies, Part 13
Tom Waits, Mule Variations

Despite his outsized influence in America’s singer-songwriter community, Tom Waits hasn’t been particularly prolific throughout his career. He regularly goes five or more years between albums (at this writing, he hasn’t released any new content since 2011). At times he’s gone over a decade between tours. His singles generally don’t chart. And when he does record, his works are so eclectic, they’re impossible to market and seldom find an audience.

This, Waits’ best-selling album, won him his second Grammy award, for Best Contemporary Folk Album. On first consideration, this seems an unlikely category. The album begins with “Big In Japan,” a prime example of the screaming-and-hollering style Waits perfected in the late 1970s. Most lyrics on this album have a sidelong, beatnik jazz style, and the instruments have a lo-fi sound more common to indie rock than anything folk-oriented.

Yet I’d contend this album captures the folk ethos perfectly. Waits picks sounds that have entered the American cultural consciousness from various genres, and have become part of our shared experience as a people. From these elements he creates a collage of distorted sound and off-kilter lyrics that reflects us, the audience, back at ourselves, just mangled enough to remind us how crooked we are, below the surface.

Designed to fill an entire CD, a feat most singer-songwriters still avoid, this album sprawls across the influences that Waits has used throughout his career. “Georgia Lee” and “House Where Nobody Lives” have the slow-moving gait of classic country blues, while “Get Behind the Mule” and “Cold Water” have a harsher edge, a desire, seemingly, to reprimand the listener. The shift of influences seems designed to keep us back-footed.

Other tracks don’t fit genre molds as easily. “What’s He Building?” is a strange prose poem with a background reminiscent of a horror film scored by Mike Oldfield. “Black Market Baby” and “Filipino Box Spring Hog” don’t sound like any established market niche, more like wall-to-wall sonic chaos captured near closing time at a bar where a band has stopped caring what the audience thinks. Waits never stops being musical, but frenzy often overcomes harmony.

Tom Waits
Like most Waits albums, this one produced only one retail single, “Hold On.” This song more resembles the romantic baritone Waits tried to emulate in his earliest albums, a persona he eventually dropped. Is he deliberately trying to create something radio-friendly, to trick listeners into buying something they don’t anticipate? Perhaps. Or he’s channeling his inner San Diego middle-class youth, and the studio thought the single might sell.

Fans argue about how to interpret “Chocolate Jesus,” the track Waits famously performed on Letterman. Is Waits being deliberately sacrilegious? Is he disparaging Christianity? Waits has been notoriously elusive and contradictory about his spiritual roots. However, in light of this album’s closing track, the rip-roaring barrelhouse gospel sing-along “Come On Up To the House,” I’m inclined to suspect “Chocolate Jesus” reflects ambiguity about commercialized Christian trappings, not Christianity itself.

Waits didn’t assemble a studio band for this album. Instead, he employs an all-star ensemble of rotating guest artists, including Charlie Musselwhite, Les Claypool, John Hammond, and Marc Ribot. Most songs involve fewer musicians than you might expect; the dynamic sound arises from the energy of the playing, not the number of instruments being played. Waits uses multitracking far less than most contemporary artists. The result is austere, but frenetic.

Taken together, the album’s various tracks create a sonic landscape as uneven as any national park. We careen through the categories of American music, never allowed to settle on one genre long enough to get comfortable. We reach this album’s final, percussive chords, confused about where we’ve been, but confident we’ve taken a journey. The experience has certainly changed Waits, and probably us too. And we’re ready to go again.

Various record labels keep releasing Waits’ work, but always struggle to market his genre-resistant compositions. Upon release, this album did moderately well, despite lacking promotional support. Rolling Stone gave it three stars initially, but apparently changed their minds later, granting four-and-a-half stars in their subsequent album guide and deeming it one of the 500 most important albums ever released. Which reflects Waits’ usual reception: initial confusion, followed by acclaim.

Not everyone likes Tom Waits. His coarse voice and eclectic style often discourage new listeners. But audiences willing to persevere will find Waits an experience they revisit time and again. This album rewards repeated listening, and provides unexpected insight beneath its layers. And it is folk music, because it’s about its audience’s experience.

Thursday, November 29, 2018

The American Message Machine



Hank Williams, Jr., released a song in the late 1980s entitled “USA Today.” He never released it as a single, and it never charted, but it nevertheless got some limited radio airplay at a time when his Lynyrd Skynyrd-lite sound dominated country music airwaves. I remember hearing it distinctly: the verses laid out a laundry list of supposed American cultural failings, not unlike his oft-repeated hit “A Country Boy Can Survive.” Then he launched into the chorus:
It's true we've got our problems, Lord knows we make mistakes
And every time we solve one, ten others take its place
But you won't see those refugees, headin' the other way
Welcome, to the U.S.A. today
I couldn’t help remembering these words last week, when news trickled in that American forces massed along our southern border fired tear gas canisters across the border, into Mexico, to disperse refugees, mostly from Honduras. This struck me as wrong on multiple levels. Since when does America keep standing troops along our land borders? Since when do we fire CBRN ordnance across the border into non-combatant nations?

And since when do we want to discourage refugees?

I’ve written before that I grew up conservative, surrounded by the right-wing values of family, church, and country music. We believed that government derived from moral authority, and therefore had moral authority itself. We believed that full-time work deserved a living wage. And we believed that the continued movement of refugees into America meant we were doing something right, that other nations were doing wrong.

This isn’t an exaggeration or metaphor. We believed that people coming into America to escape the hardships and terrors of life in their homelands meant America was uniquely blessed to handle complicated world situations like diplomacy and war. Yes, America had, for instance, gotten significantly drubbed in Vietnam. But the continued existence of Vietnamese boat people after the war proved we’d won the peace. Anyway, that’s what we told ourselves.

Equally important, it meant we knew the world was watching us, and we took pride in what they saw. We held ourselves up as paragons of moral virtue, economic justice, and opportunity. People came to America for various reasons, such as religious liberty, freedom from ethnic persecution, or simply to get a job. And we congratulated ourselves for being the one nation where all peoples and creeds believed they could find these things.

It didn’t hurt that the Cold War was happening. Professor Ibram Kendi, in his award-winning history book Stamped From the Beginning, notes that when John Kennedy and Lyndon Johnson threw their support behind Civil Rights in the 1960s, they didn’t do so because it was right. Johnson in particular had previously been aggressively anti-civil rights, until he did an abrupt 180. And he did so because he knew the Soviet Bloc was watching.


America’s victory in the Cold War was a PR maneuver as much as anything else. We showed our ability to juggle economic liberty, military supremacy, cultural diversity, and a willingness to change when we were wrong, which the Soviet Bloc just couldn’t. Peoples throughout the world witnessed America’s dexterity, decided they wanted to be like us, and fled to our shores, calling for asylum. And, for a while, we willingly gave it.

Our current government came to power through promises of restored greatness and national identity. Exactly when this greatness happened, or what it looked like, we couldn’t say, but we knew America loomed over the world somewhere in history, and we yearned to recapture that moment. Yet this government has forgotten that greatness stemmed not just from some in-group identity, but from the message we showed the world.

That message was that our actions followed our words. If we wanted European powers to divest their African and Asian colonies, and grant liberty to refugees fleeing to European shores, we had to do likewise. During the Reagan years we took pride (often grudgingly, but still) in accepting refugees from America’s involvement in countries like Vietnam, Guatemala, and Iran. We even lauded some refugees as heroes of bedrock American values.

It hurts to say that global peoples still witness America, and know exactly what messages we carry. We tout economic liberty, but cut our poorest loose. We drop the Mother of All Bombs in Afghanistan, then tell Syrians fleeing civil war to go back home and apply for refuge at the embassy. Our PR machine is broken. We need to start caring what message we’re selling the watching world—and ourselves.

Tuesday, November 27, 2018

The Force That Was Jimi Hendrix

1001 Movies To Watch Before Your Netflix Subscription Dies, Part 29
Joe Boyd, John Head, & Gary Weis (directors), Jimi Hendrix


It’s difficult to believe, given his outsized influence in multiple musical genres, that Jimi Hendrix’s mainstream success lasted only four years. He went from the blustering rebel smashing his guitar at Monterey Pop, to a solemn icon playing with quiet intensity at the Winterland Ballroom, to an early grave, faster than some artists today produce their second album. He left behind a lifetime’s output, and he changed lives along the way.

Filmmakers Boyd, Head, and Weis began creating this rocumentary shortly after Hendrix’s abrupt passing. They bring together both original and vintage interviews with concert footage to eulogize an artist whose passing was still recent. The fresh hurt on many interviewees’ faces is palpable. But the music Hendrix created remains fresh and powerful; the licks from tracks like “Machine Gun” and “Purple Haze” still influence musicians today.

This documentary begins with the assumption you know who Jimi Hendrix is. It launches with concert footage of Hendrix deliberately overwhelming his speaker stacks, getting the well-modulated pop and whine that made his guitar work groundbreaking. (Hendrix didn’t invent feedback, Ike Turner did, but Hendrix exploited feedback for everything it was worth.) Then, when you’re good and rocked out, it shifts into serious journalism mode.

Interview subjects include people who knew Hendrix personally, before his early fame, and also musicians, mostly British, who felt his impact on their careers. Little Richard, in whose band Hendrix paid dues; Pete Townshend, who viewed Hendrix as a rival at Monterey; Eric Clapton, who started off seeing Hendrix as somebody who stole his licks, then became an ardent admirer. Hendrix clearly changed every life he touched.

But though it’d be easy and cost-effective to interview well-known musicians, these filmmakers throw the net wider. They interview people Hendrix knew and worked with, people who loved him before he became famous, people who guided his musical sound as he guided theirs. Childhood friends, Army platoon mates, influential DJs, old lovers, bandmates. Diverse voices combine to tell Hendrix’s story in ways he, being absent, cannot.

Jimi Hendrix sets fire to his already-smashed guitar at Monterey Pop
Al Hendrix, Jimi’s father, provides important context for Jimi’s difficult childhood. He discusses how Jimi, a desperately shy child, found identity through playing guitar; if ordered to clean his room, young Jimi would strum his broom, Al says. But Al, a notorious alcoholic with a temper, also struggles through his narration, his words audibly slurring. It’s difficult not to wonder how reliable a narrator poor Al actually is.

Our filmmakers match their interviews with well-chosen concert footage. When Pete Townshend talks about his battle with Hendrix over playing order at Monterey Pop, the camera jumps directly to Hendrix’s legendary performance, where he played “Wild Thing,” culminating in smashing his guitar. Early girlfriend Fayne Pridgeon recalls nearly getting evicted over Jimi playing Bob Dylan at full volume, leading into “Like a Rolling Stone” at the Isle of Wight.

And what concert footage! Hendrix remains famous for early, high-energy recordings like “Wild Thing,” but those represent only a fraction of his output. His performance of “Machine Gun” at Winterland, in which he mostly stands still and plays with understated gravity, is downright entrancing. And a vintage jam where he plays “Hear My Train a’Coming,” a concert staple he never recorded to his satisfaction, on an enormous acoustic twelve-string, brings chills.

Ironically, Hendrix himself probably wouldn’t have approved this eulogy. Archival interviews show his disdain for Q&A repartee. Appearing on Dick Cavett’s talk show, he subverts every question Cavett asks, leaving the interviewer stumped. An anonymous off-camera questioner asks whether smashing his guitar is a “gimmick”; Hendrix disparages the idea of gimmicks altogether, saying destroying a guitar is neither better nor worse than destroying a Vietnamese village with napalm.

This film substantially reflects its time. Released in late 1973, just months after Operation Homecoming effectively ended the cultural moment we call the 1960s, the film carried an ethic already outdated on debut. Interviews with Buddy Miles, Noel Redding, and multimedia pioneers Arthur and Albert Allen bespeak a street-fighting attitude that probably made sense one year earlier. Various attempts at hippie-era pop philosophy reflect how the Woodstock era was already dying.

Yet it also reflects how much Hendrix himself breathed life into that period. Some of his live performances and interviews included here were recorded mere weeks before his sudden passing in 1970. Though Hendrix’s survival wouldn’t have prevented the 1960s ending, it’s tempting to wonder exactly how culture might have changed. But, like a supernova, the light of Hendrix’s burning continues shining long after the source itself has burned out.

Tuesday, November 20, 2018

Building an Economy From the Soil Up

1001 Books To Read Before Your Kindle Battery Dies, Part 94
Wendell Berry, What Matters? Economics for a Renewed Commonwealth

What would a fair and just economy look like? This isn’t a new question. It isn’t even new since the Great Recession, when reckless speculation proved much of American economics was founded on air. People of wisdom and learning have asked that question since at least Adam Smith and Karl Marx, and come no closer to an answer that satisfies everyone. Poet and farmer Wendell Berry suggests we’ve been looking in the wrong direction.

Berry, who has worked the same stretch of Kentucky highland his entire life, grounds his economy in judicious management of resources; and for him, the foremost resource is land. His use of “land” broadly encompasses water and air, forests and pastures, which humans must manage, not merely use. Humans arise from land, and humans create money; any economy that places money first inverts, and thus destroys, the natural order.

America, and the world generally, has fallen under the sway of “autistic industrialism,” in Berry’s words, a laser-focused belief that man-made technologies will solve everything. This finds its apotheosis in a financial services industry that sees its dollar-sign output as superior to whatever it places a price on. And it works exclusively through creating ever-increasing demands: Berry writes, “Finance, as opposed to economy, is always ready and eager to confuse wants and needs.”

Likewise, this economic model has concentrated land-use decisions in the hands of putative experts who don’t work the land. Chemical companies, government bureaucrats, and absentee landlords uniformly overrule the hard-won experience of people who once owned and husbanded the land they worked. This creates a vast gulf between those who receive the short-term benefits of land exploitation and those who pay the long-term price.

This book comprises two sections. The first, shorter section collects five essays and speeches Berry delivered in the immediate wake of the 2008 financial services disaster. His primary target, however, isn’t bankers; it’s a culture-wide malaise that systemically mistakes money for value. This produces a breakdown that isn’t merely industrial, but moral: “We tolerate fabulous capitalists who think a bet on a debt is an asset.”

Wendell Berry
Berry’s second section gathers older essays, dated from 1985 to 2000, about the intimate connection he sees between American economic values and our respect for the land. He describes how agricultural mismanagement has resulted in massive topsoil erosion; how mineral mismanagement has strip-mined coal, leaving behind hollow mountains and toxic rivers; how community mismanagement has separated individuals from the neighbors they live among.

These aren’t incidental losses, either. We cannot replace lost topsoil by dumping new dirt and saturating it with petroleum-based fertilizers, because topsoil is a complex relationship of earth, organisms, and organic matter that we don’t fully understand. But those who profit from mismanagement care little for understanding, Berry writes: “The advocates and suppliers of agri-industrial technologies have encouraged us to think of agriculture as an enterprise occurring on top of the ground.”

Berry admits that revising America’s economy to reflect the values it has already consumed won’t be easy. In some places he concedes he doesn’t know how economists could implement his vision. In others he offers some insight, though the needed changes remain daunting. They mostly involve changes in ownership and allotment: local businesses, local people, and smallholding farmers who own the land they manage.

This necessarily involves placing value on the invaluable. Currently, goods like family and community, or the preservation of heirloom skills, have no economic value, because they carry no GDP price. But we’re witnessing the long-term consequences of their loss: in the devaluation of land, business, and family; in rising health costs for people worked like machines; in food substantially lacking nutritional value.

This book doesn’t involve much factual data-wrangling; Berry’s interest lies in philosophical foundations, not actuarial spreadsheets. However, I cannot ignore statistical evidence demonstrating his accuracy. Journalist George Pyle has written that postage-stamp-sized farms pay their own bills, avoid chemical runoff, and remain in families better than massive, 10,000-acre industrial farms. Small, distributed agricultural economies have measurable value, but only when we measure in the long term.

Berry’s essentially Distributist model would seem familiar to G.K. Chesterton, whose motto, “three acres and a cow,” makes sense here. (“Forty acres and a mule” wouldn’t be inappropriate.) To Berry, like Chesterton, economic justice requires humans to directly control, and make decisions about, their living, and both philosophers see such control beginning with how we manage land. The ripples, though, extend throughout the economy.

Thursday, November 15, 2018

When the Fires of Anger Burn Out



I had intended to write today about the notorious Baraboo, Wisconsin, high school photo, which shows nearly the entire male class of 2019 flashing a straight-armed Nazi salute. I wanted to discuss how teenagers think asshole behavior is funny, and lack the breadth of view to understand their actions in historical context. I wanted to say we could demonize these children, possibly for the rest of their lives, or have a classic teachable moment and maybe improve society overall.

Then something happened I didn’t expect: America forgot the story.

Seriously. Two days ago, you couldn’t move on Blue Facebook or Blue Twitter without tripping over a half-dozen reposts of the photo, usually with captions of undiluted rage. How could anybody, especially in a post-Charlottesville environment, think such behavior was funny? Then it stopped. As I write this, another story hasn’t erupted sufficiently to really replace Baraboo as headline fodder, yet somehow, this story just petered out.

What happened? The story didn’t go anywhere. Baraboo police still promise a thorough investigation, which I applaud, since it clearly means all real violent crime in Wisconsin has been resolved. (C’mon, is this really the best use of police resources?) A Google news search reveals new details dribbling out about the story, mostly in regional outlets. But in the collective memory emblazoned on social media feeds, this story has already found a quiet corner to die.

Has our communal attention span become so brief that stories burn this quickly and vanish? Yes. This isn’t news to me. I’ve written before about how our capacity for outrage has become not only short-lived, but weirdly selective. We burn white-hot with indignation, spew largely crinkum-crankum bile in public places, and exhaust ourselves within, apparently, minutes. We lack the capacity to keep stories burning low and constant long enough to actually do anything.

This causes further problems because it depletes our common capacity to keep real stories alive. While Baraboo sucked all the oxygen from the room, mainstream media apparently forgot, say, Jamal Khashoggi, whose murder, evidently ordered at the highest levels by one of America’s biggest trading partners, demonstrates just how dangerous the truth-telling industry remains in today’s world. Yet that developing story has become page-eight filler news.

(Yes, that’s a newspaper reference. I’m old.)

Yet perhaps that’s the problem. The American President has openly daydreamed about physically attacking journalists, called them “the enemy of the people,” and lavished praise on politicians who enact his violent fantasies. While the worst his administration has directly done is yank a journalist’s credentials over bullshit accusations, the implication nevertheless remains: we retain the option of capping your ass, Khashoggi-style, if you step out of line. We can make you wish you were dead.

This photo, by contrast, is safe. Everyone from MSNBC to Breitbart has covered this story, because they know moral outrage sells, but also because the story is low-risk. Disparaging American Nazis has been low-hanging fruit at least since The Blues Brothers used them as doofus villains in 1980. Media know they can gin up outrage by thrusting stock villains under our noses, which accomplishes what they really want: to sell advertising space to the highest bidder.

And Baraboo itself serves a bicoastal narrative. A small town in a small state that few outsiders ever visit, Baraboo, Wisconsin, has under one-tenth the population of Manhattan’s Washington Heights neighborhood, and is primarily famous as the former headquarters of the defunct Ringling Brothers organization. National corporate media waving around the youth of a small city in an outlying province of “flyover country” reminds coastal city-dwellers of their unique privilege and its accompanying noblesse oblige.

I get frustrated when people tell me certain stories are mere smokescreens. Two years ago, it chapped my ass when pundits told me not to worry my pretty little head about the President-Elect’s theatre tweets. Yet that exact response is appropriate now, because the widespread popularity of the Baraboo story right now, when journalists are dying for telling the truth and an accused rapist sits on the Supreme Court, is the epitome of a smokescreen.

If social media means anything, we politically engaged citizens must start being more discerning about which stories merit our scrutiny. Truth alone isn’t criterion enough anymore. Humans’ ability to pay attention is finite, and if we expend it on low-stakes stories that burn out overnight, taking real journalism with them, the people who dominate our culture will eventually be able to get away with murder. Which, if current events are any guide, isn’t a mere metaphor.

Monday, November 12, 2018

God's Not Dead, but the Church Might Be Sick

Promotional photo from the film God's Not Dead

I’ve never seen the movie God’s Not Dead or its sequels. I have no intention of ever doing so. I realize many people like these movies and draw hope from them, and I also recognize that they represent the leading edge of a Christian indie movie industry that’s massively lucrative: the first God’s Not Dead film grossed $62 million on a $2 million production budget. Yet, as a Christian myself, these films irk me endlessly.

And I’ll tell you why.

When my sister was seventeen and shopping for colleges, she settled on two. She ultimately wound up attending the state university a few hours’ drive from home, partly because that’s what the family could afford. But she really had her heart set on attending Northwestern College of Orange City, Iowa, a private liberal arts college institutionally affiliated with the Reformed Church in America, with Christianity woven throughout its curriculum.

My sister (whose permission I don’t actually have to retell this story) was massively excited by Northwestern’s explicitly Christian content. She loved that Orange City, itself a heavily Dutch Calvinist community, largely closed down on Sundays so everyone could attend church. The idea of surrounding herself with all Christianity, all the time, while pursuing her liberal arts credentials, got her downright giddy. She really wanted this entirely Christian education.

Her bank account, let’s say, didn’t.

When she discovered there was no chance whatsoever of affording private-college tuition, she dissolved into tears. I remember her standing in the family kitchen, weeping like her dog had died, half-screaming: “If I go to a state university they’ll tell me to go stand over in the corner and keep quiet and never say anything because I’m a Christian.” The mix of rage and grief in her outburst was palpable. So was the baloney.

Architectural drawing of the new Learning Commons at Northwestern College, Orange City, Iowa

I was more conservative then than I am today, certainly. But I’d also perused both schools’ catalogues, and knew the state university had religious options available. Certainly, as a public institution, it couldn’t offer theological or seminary training, and was too small to host a religious studies program. But it had courses in biblical literature, the social science of religion, religious psychology, and more. And it hosted, though it obviously couldn’t sponsor, several on-campus ministries.

Yet for years, in our small-town congregation, we’d been barraged with messages about the inherent depravity of secular education. Well, all worldly institutions, really, but state-sponsored schooling was the nexus through which everybody passed, and therefore the first test of godless, secularizing mind control the good Christian had to withstand. Getting through high school, in many people’s understanding, meant walking through a purifying fire and emerging holy. Going back for more public education? I never.

Please understand, we weren’t part of some hyper-partisan fundamentalist sect. We attended the United Methodist Church, a centrist denomination that, among other things, tried (unsuccessfully) to censure Jeff Sessions for his conduct in office. Yet, as often happens in small communities, a lack of diversity meant people became more extreme and intolerant in their opinions. From politics and religion to loyalty to the Denver Broncos and country music, everyone generally became more rigid, not less.

But this moment forced my first meaningful break with my childhood certainties. (Childhood, yeah. I was 23.) Seeing my sister paralyzed with grief because she had to attend public university, like three-quarters of Americans and most future pastors, struck me as beyond odd, especially as she’d had a fairly successful campus tour. She’d internalized the popular narrative of modern Christian persecution. And in her mind, months in advance, it had already begun happening to her.

Asia Bibi, a woman who actually, literally fears for her life because of her Christianity

Please don’t misunderstand me. I know Christians in this world are still persecuted. As I write, Asia Bibi, a Pakistani Christian, has narrowly escaped a death sentence for “insulting the Prophet,” an accusation the high court admits was probably fabricated; she still lives under constant death threats. There are places in the world where Christians must fear violence daily. America isn’t one of them. Having to attend public education isn’t a human rights violation.

Yet a constant propaganda assault has convinced some people, unable to comprehend their own blinders, that that’s exactly what’s happening today. Mass media like God’s Not Dead teach believers to see conflicting views not as challenge and learning, but as attack and oppression. Years later, my sister doesn’t regret her state university education. But could she convince another small-town kid, raised on persecution media, that they won’t be silenced for their views? I don’t know.