Sunday, November 30, 2025

Internet Censors and Real Speech

The cover art from Sharon Van Etten’s
Remind Me Tomorrow

I had no idea, until this week, that Sharon Van Etten’s folk-pop electronic album Remind Me Tomorrow might be off-color. Specifically, the cover art. I’ve linked to my review of the album several times on several platforms without incident. But this week, I had a link yanked from Instagram by the parent company, Meta, on the grounds of “child nudity.”

As you can see, the cover image is a childhood snapshot of Van Etten and her brother. That’s Van Etten half-folded into a laundry basket, partially unclothed. Small children often hate clothes, and have to be conditioned to wear them in time to start school. Because of this, most people recognize a categorical difference between innocent small-kid nakedness, and smut. I suspect any impartial judge would consider this the former.

This isn’t my first collision with Meta over nudity. I’ve repeatedly had to appeal blocked links because my essays included Michelangelo’s Creation of Adam, a panel from the Sistine Chapel ceiling. It depicts Adam, not yet alive, lolling naked in Eden, genitals visible. Nearly every blog essay I’ve written that includes this image has required an appeal against lewdness regulations.

Any reasonable person would agree that social media needs basic standards of appropriate behavior. Without a clear, defined threshold, one or a few bad-faith actors could deluge the algorithm with garbage and destroy the common space. Consider the decline of public spaces like Times Square in the 1970s: if nobody defends common spaces, they become dumping grounds for the collective id.

But those standards are necessarily arbitrary. What constitutes offensive behavior? We get different answers if we ask people of different ages, regions, and backgrounds. My grandmother and I have different expectations; likewise, Inglewood, California, and Doddsville, Mississippi, have wildly divergent community standards. But because Facetube and InstaTwit don’t have geographic boundaries, they flatten distinctions of place, race, age, and economic standing.

TikTok perhaps embodies this best. Cutesy-poo euphemisms like “unalived,” “pew-pew,” and “grape” gained currency on TikTok, and have made it nearly impossible to discuss tender topics directly. YouTube restricts and demonetizes videos for even mentioning crime, death, and the Holocaust. Words like “fascism” and “murder” are the kiss of death. In an American society filthy with violence, the requirement to speak with baby-talk circumspection means that we can’t communicate.

Michelangelo's The Creation of Adam, from the Sistine Chapel ceiling

Watching the contortions content creators must perform whenever they’re called upon to address the latest school shooting or overseas drone strike would be hilarious if it weren’t heartbreaking. Americans have to contend with legislative inertia, lobbyist cash, and morally absolute thinking when these catastrophes occur. But then the media behemoths that carry the message have the ability, reminiscent of William Randolph Hearst, to kill stories by burying them.

I’m not the first to complain about this. I’ve read other critics who recommend simply ignoring the restrictions and writing forthrightly. Which sounds great, in theory. If censorious corporations punish writers for mentioning death and crime too directly, the response is to refuse to comply. Like any mass labor action, large numbers and persistence should redress the injustice.

In theory.

Practically speaking, media can throttle the message. In the heyday of labor struggles like the Ludlow Massacre and the Battle of Blair Mountain, unions could circumvent media bottlenecks by printing their own newspapers and writing their own folk songs. But most internet content creators lack the skills to program their own social media platforms. Even if they could, they certainly can’t afford the necessary server space.

Thus, a few companies have immediate power to choke even slightly controversial messages, power that creators cannot resist. Which elicits the next question: if journalists, commentators, and bloggers cover a story, but Mark Zuckerberg and Elon Musk stifle the distribution, has the coverage actually happened? Who knows what crises currently fester unresolved because we can’t talk about them?

This isn’t a call to permit everything. Zuckerberg and Musk can’t permit smut on their platforms, or even links to it, because it coarsens and undercuts their business model. But current standards are so censoriously narrow that they let important stories die on the vine. If we can’t describe controversial issues using dictionary terms, our media renders us virtually mute.

Given how platforms screen even slightly dangerous topics and strangle stories in their beds, I’m curious whether anyone will even see this essay. I know I lack enough reach to start a movement. But if we can start speaking straightforwardly, without relying on juvenile euphemisms, that represents a step forward from where we stand right now.

Thursday, November 27, 2025

Andrew Tate, Master Poet

Back in the eldritch aeons of 1989, art photographer Andres Serrano gained notoriety for his picture “Piss Christ.” The image involved a crucifix with Jesus, shown through the glimmering distortion of an amber liquid, putatively Serrano’s own urine. The controversy came primarily through Senator Jesse Helms (R-NC), who aspired to become America’s national guilty conscience. This outrage was especially specious because Helms only noticed the photo after it had been on display for two years.

I remembered Serrano’s most infamous work this week when “masculinity influencer” Andrew Tate posted the above comment on X, the everything app. Tate is a lightning rod for controversy, and seems to revel in making critics lose their composure. Sienkiewicz and Marx would define Tate as a “troll,” a performance artist whose schtick involves provoking rational people to lose their cool and become angry. To the troll, the resulting meltdown counts as art.

Andres Serrano remains tight-lipped about his politics, and repeatedly insists that he has no manifesto. Following the “Piss Christ” controversy, he called himself a Christian, but this sounds about as plausible as Salman Rushdie calling himself Muslim after the Satanic Verses fatwa: that is, a flimsy rhetorical shield that convinces nobody and makes the artist look uncommitted. I think something else happened here, something Serrano didn’t want to explain; the image itself doesn't matter.

Specifically, I think Serrano created a cypher of art. Unlike, say, Leonardo’s “Last Supper,” Serrano’s picture doesn’t actually say anything. Instead, it stacks our loaded assumptions of religious imagery and bodily waste, and asks us what we see here. The image itself is purely ceremonial. Serrano cares more about why seeing the Christian image through urine is worse than seeing it through more spiritually anodyne fluids, like water or wine. Our answer is the art.

Critics like Helms, or let’s say “critics,” see art in representational terms. Art, to them, depicts something in the “real world.” This might mean a literal object, such as a fruit bowl in a still life, or an event or narrative, like the gospel story in “The Last Supper.” The representational mind seeks an artwork’s external, literal reference. This makes “Piss Christ” dangerous, because dousing the sacred image in something ritually unclean is necessarily blasphemous.

Progressive critics abandon such one-to-one representations. In viewing more contemporary art, from Serrano’s photos to Jackson Pollock’s frenetic, shapeless splatters, they don't ask themselves what object they ought to see. They ask themselves how the art changes the viewer. In the Renaissance, audiences assumed that art created a durable image of the transient, inconstant world. But artists today seek to amplify and hasten change. We viewers, not the world, are the purpose of contemporary art.

Ironically, as progressive critics embrace more open, non-representational standards in visual art, their expectations of language have become more exacting and literal. From religion to poetry to President Taco's id-driven rambles, they take words to mean only what they mean at surface level. Every online critic who considers it their job to identify “plot holes” in Disney’s Cars, or insist the Bible is disproven because we can’t find the Tower of Babel, makes this mistake.

At the surface level, Andrew Tate’s macho posturing seems like the opposite of art. His insistence on appearing constantly strong leaves no room for contemplative rumination on language’s beauty or nuance. He doesn’t signpost his metaphors like Emily Dickinson, so it’s easy to assume he has no metaphors. Yet the weird prose poem above, with its apparent insistence that it’s now “gay” to be straight, defies literal scientific reading. By that standard, it’s pure poetry.

Tate seemingly contends that, in a world that has discarded obsolete gender and sexual designations without putting anything better in their place, words become meaningless. If men feel sexually homeless nowadays, Tate lets us relax our burdens and shed our doubts. If words mean nothing, then words can’t control us. If it’s gay to be straight, then we can expunge archaic goals like love and stability. Yield to language’s poetic flow, let it transform you and be transformed by you.

This doesn't forgive Tate’s crass misogyny and weirdly self-destructive homoeroticism. He still treats women as ornaments and men as something to both desire and despise. As with any poet, it’s valid to say when something doesn’t land. (This one landed so badly that Tate eventually deleted it; only screenshots remain.) But we must critique it in its genre. Andrew Tate is a poet, not a journalist, and his words change us like art.

Thursday, November 20, 2025

Party Politics and the Art of Forgiveness

Rep. Marjorie Taylor Greene (R-GA)

This weekend, Congressman Jamie Raskin (D-MD) called the Democrats “a big tent… that’s got room for Marjorie Taylor Greene.” The statement deserved the immediate blowback it received, as Greene’s history of race-baiting, antisemitism, and harassing school shooting survivors doesn’t just go away. But it exemplifies two problems with American politics: first, that our parties have been reduced to the Trump and anti-Trump parties, without underlying principles; second, that we keep steadily eroding the relationship between forgiveness and repentance.

Raskin’s invitation is only the latest Democratic effort to dilute their brand. The Democrats continue providing a nurturing cocoon to aggressive nationalists like Liz Cheney and Adam Kinzinger, simply because they personally oppose President Taco. Former Representative Joe Walsh, who holds truly noxious views, has become a resistance leader. Yet the party leadership, including Hakeem Jeffries and Chuck Schumer, still haven’t endorsed Zohran Mamdani, even after he won the party primary and a majority of the vote.

Throughout the last decade, we’ve watched America’s mainstream parties reorganize themselves around one man. Republicans, who had legitimate policies in the 1990s when I was one of them, have become the party that endorses whatever dribbles out of President Taco’s mouth. Those who disagree, the party deems “traitors.” Meanwhile, Democrats, once the party of Civil Rights and the New Deal, have jettisoned all principles to pursue whatever and whoever opposes this President. This isn’t sustainable.

To accomplish their agenda, Democrats have ushered their onetime opponents up the leadership ladder. Although professional pundits claimed Kamala Harris lost last year’s election over issues like queer, trans, and racial rights, Harris actively avoided those issues. Instead, she spent the campaign’s final weeks touring as a double act with Liz Cheney, whom observers have described as “arch-conservative.” Democrats have pivoted away from their base, including labor, minorities, and queer voters, to chase the ever-shifting center.

Democrats have made conservatives like Cheney, Walsh, and now Greene their preferred leaders, despite their voting base’s opposition. This rush to promote former enemies makes sense if, as I suspect many Democrats did, you read Clausewitz in high school without context. Many military strategists contend that former enemies make the best allies. Which is probably true, if your only interest is winning. But because the Democratic base has certain principles, winning alone isn’t enough.

Rep. Jamie Raskin (D-MD)

“Forgiveness” has become the defining stain of contemporary American life. The news reeks of commentators who demand forgiveness, not as the culmination of a penitent journey, but as a precondition. From ordinary criminals who want forgiveness without facing consequences, to widespread abuse in religious congregations, to loyalists eager to excuse treason, we’ve witnessed a reversal of the forgiveness process. It’s become something powerful people demand from their perches, not something the wronged offer from God-given mercy.

I can’t unpack the full underpinnings of forgiveness in 750 words. In brief, “forgiveness” is half of a continuum, one face of the coin. The other half is “repentance,” the process of taking account and changing one’s life. This isn’t merely verbal contrition, as I learned in White Protestant Sunday school. Repentance, metanoia in Greek, means a change of heart so thorough that one literally walks a new path. We know somebody’s repented, not by their words, but by their changed life.

Cheney, Walsh, and especially Greene have shown no inclination toward changed lives. Though Greene has verbally apologized for past violent rhetoric, observant critics claim her manner shows no signs of authenticity. More important, this change in Greene’s loyalties has happened too suddenly to show in her actions. Perhaps Greene has genuinely reversed herself, and she’ll demonstrate a more cooperative, nonviolent, and restrained manner. But it’s too soon to know whether her words match her actions.

Please don’t misunderstand me: verbal apologies matter. Humans are language-driven creatures, and speaking with one another is a necessary part of bond-building. But who among us hasn’t known somebody who says they’re sorry, while showing no acts of repentance? This may be innocent—small children think “I’m sorry” is a blanket ticket to forgiveness—or malicious—abusive spouses love voicing their regrets. But only when words and actions come together do they make a difference.

Part of repentance includes asking whether one will handle power better in the future. Current or former elected officials, including Greene, Walsh, and Cheney, want the Democratic Party to offer them unconditional leadership, like they had before. But from my vantage point, they’ve shown no signs that they’ll use that leadership to uplift the downtrodden, heal the hurting, or support capital-D Democratic principles. They haven’t shown a new life, so they haven’t yet earned forgiveness.

Monday, November 17, 2025

In Search of Lost Adulthood


How do we define an adult? This question has surged in the last week, as Megyn Kelly defined fifteen-year-old girls as “barely legal” adults, mere days after online rumor-mongers redefined twenty-eight-year-old Rama Duwaji as a child. We saw this happen five years ago, when columnist Joseph Epstein called First Lady Dr. Jill Biden “kiddo,” just days after Donald Trump called his son Don Jr. “a good kid.” They were seventy and thirty-nine years old, respectively.

This problem recurs in America. Last month, J.D. Vance dismissed vile, racist comments from leading “Young Republicans,” ranging from their middle twenties to early forties, as “kids do stupid things.” But a decade ago, Connecticut schoolteacher David Olio lost his job for letting students read a sexually explicit poem in A.P. English, nominally a college-level course, because his students were “kids” and needed protection from adult sexual expression. Invoking childhood, it seems, excuses vast repression in its name.

Our society keeps moving the boundaries of adulthood further out. America has one of the world’s highest legal drinking ages, preventing youth from learning how to handle alcohol responsibly until they’re most of the way through college. An increasing number of upwardly mobile jobs require graduate degrees, keeping young adults from commencing their careers until they’re around thirty. The average first home purchase now happens around forty, keeping adults from building equity or developing rudimentary financial independence.

Every few years, Congress suggests raising the national retirement age. This probably makes sense to legislators, who are mostly lawyers and financiers, and can work as long as their brains remain active. But manual trades, like construction or manufacturing, erode your joints and tendons, so laborers get old faster than office workers. More to the point, keeping everyone working longer prevents managers from retiring, which keeps rank-and-file workers trapped in entry-level positions for literal decades.

Perhaps worst of all, Western society overall no longer has clear adulthood rites. The rituals we Americans witness in sagas like Roots are inspiring, and of course the mitzvah rituals of Judaism, and of similar minority religions, still exist. But in the mainstream, ceremonies like baptism or marriage, or benchmarks like high school graduation, carry little weight anymore. With neither ritual nor financial independence, we no longer have any objective standard for calling someone an adult.


Because of these convergent forces, we see people performing the rituals of childhood well into physical maturity. Sometimes this regression turns mainly inward. Incels and “masculinity influencers” like Andrew Tate perform peacocking displays of manhood that look like middle-grade boys flexing on the schoolyard. But we’re seeing more outward-facing, harmful displays, too: men like Bill Clinton and President Taco collecting sexual conquests like overgrown fraternity boys, leaving trails of scarred women in their wake.

Philosopher Alain Badiou writes that, in market-driven societies, men achieve adulthood by collecting the most toys. But as it now takes longer for youth to achieve financial independence, hoarding toys becomes prohibitively expensive. Therefore men adjust adulthood rituals to strength, dominion, and conquest. Who do they dominate and conquer? Women. Thus, as Badiou writes, men remain boys well into physical adulthood, while girls, to survive, become women at absurdly early ages. Just ask Megyn Kelly.

America’s shared definition of adulthood has become mushy and subjective because, in a society organized to protect capital, we turn humans into capital. Adulthood becomes contingent on economic productivity, freedom from parental support, and resources enough to have and raise children. Standards that many citizens don’t achieve until they’re approaching forty. For laws to be enforceable, we need a standard age of majority: sixteen, or eighteen, or twenty-one. But for all practical purposes, these numbers mean nothing.

We’re witnessing a rare moment of bipartisan moral outrage over the continued lack of accountability for Jeffrey Epstein’s clients. And we should be outraged; very little encourages universal outrage as surely as child exploitation. But economic instability and job loss cause trauma as real as sexual assault, if less visibly offensive. We’ve created a society where nearly everybody, in one way or another, is nursing the psychological scars of long-term trauma, and the people responsible suffer no consequences.

Within my lifetime, America has become a society comprised of traumatized children, trapped in cycles of learned helplessness, desperate for adult guidance. We only disagree about who, exactly, we consider a responsible adult. Does our society need a macho disciplinarian, a nurturing teacher, or some third option? Until we find a useful shared definition of adulthood, we’re all, in different ways, trapped at the level of dependent children, desperate for our lives to finally start.

Saturday, November 15, 2025

Megyn Kelly, “Kidult” Culture, and Me

When I was fifteen years old, I fell deeply in love with my geometry teacher. Let’s call her “Ms. Shimizu.” In my recollection, she was tall, thirtyish, and resplendent with confidence and grace. I would’ve gladly let her teach me the rudiments of grown-up romance. Ms. Shimizu possessed wisdom enough to strategically ignore my fumbling teenage flirtation. But she spoke to me like an adult, teaching me the respect I should expect to give and receive as an adult.

I remembered Ms. Shimizu this week, when former Fox News ingenue Megyn Kelly delivered her cack-handed and obnoxious defense of grown men chasing teenage girls. Kelly’s claim of a categorical difference between prepubescent children and adolescent teens provides just enough rhetorical cover to justify those who already believe it. But it does nothing to address the question of whether teenagers can give informed consent to adult sex.

My teenage infatuation with Ms. Shimizu taught me two tentpole principles of my philosophy of consent. First, yes, adolescents are sexual beings, and censorious adult efforts to squelch teen sexuality have maladaptive consequences later in adulthood. But second, adolescents don’t have sex for the same reasons adults do. Teenagers have sex for the reason grade-schoolers run and jump and scream constantly: they’re learning to control their rapidly changing bodies.

This carries an important corollary: I can only imagine three reasons adults would pursue sexual relationships with teenagers. Either they’ve forgotten the different reasons adults and teens have sex; or they’re enacting their own arrested adolescent psychosexual development; or they’re simply bad people. Each of these reasons carries its own appropriate response: education, treatment, or punishment. But fabricating excuses, as Kelly did, only makes other adults complicit.

Mass media of the last twenty-five years has exacerbated this tendency. (Maybe longer, but that’s when I noticed it.) Prime-time dramas like The O.C., Gossip Girl, and Pretty Little Liars feature sexually precocious “kidults” who often pursue relationships with surrounding adults, including teachers. I still cringe at Veronica Mars, whose teenage protagonist dated an adult cop. Though targeted at teenaged and twenty-something audiences, these shows have significant adult viewership.

These are mass media caricatures, sure, but as our economy allows adults fewer opportunities to make friends their own age, it’s easy to forget the distinction. Age-inappropriate relationships reflect a common adolescent desire: many of us thought ourselves unfairly circumscribed by social standards. And many of us wanted an adult mentor to teach us the ways of adulthood, skipping the fumbling experimentation we needed to understand ourselves.

If the phrase “DNA evidence” took human form, it might look like these guys

Furthermore, these shows distort adult perceptions of what teenagers even are. Labor laws and the vicissitudes of puberty make real teenagers difficult to work with, so most mass media teenagers are played by actors in their twenties. Except for parents or working teachers, most adults have limited opportunities to even see teenagers regularly, and as we drift further from our own teens—when we considered ourselves very mature—we think TV teens are realistic.

Our society produces two equally deleterious responses to adolescent sexuality. Conservative parents advocate for “purity culture” and abstinence-only sex education. These movements keep teenagers swaddled in childhood innocence for years, then dump them on adulthood’s doorstep catastrophically unprepared. More progressive parents take a permissive approach, if not outright encouraging adolescent sexuality, then at least providing insufficient adult guidance for making good choices.

Then there’s the third option. Jeffrey Epstein is perhaps an extreme example of adult exploitation, an attempt to commodify teenagers’ sexual inexperience. But almost every teenage girl, and no small fraction of teenage boys, has the experience of being propositioned by adults who see adolescents’ bodies as something to consume. When youths are mature enough to have sex, but not experienced enough to understand sex, they exist in a precarious balance.

Bill Clinton and President Taco may be extreme examples, insulated from consequences for years by power and money. But even before the Epstein revelations, both men were famed for their voracious sexual appetites, both seeing women not as fully developed humans, but as vessels for male gratification. Both men, born to absentee fathers, pursued wealth, power, and the attendant sexual attention, as shields to protect the festering wounds in their souls.

All this is to say, I understand Megyn Kelly’s intent; but she’s still wrong. It’s possible to acknowledge teenagers as sexual beings, and respect their arc of self-discovery, without throwing them to the ravening appetites of dangerous or damaged adults. If we don’t provide the guidance and defense they need, then we’ve failed an entire generation.



On a related topic: Are Age Gaps the New Scarlet Letter?

Thursday, November 13, 2025

Breaking Rust: the Final Boss of Country Music

In my father’s household, the only music was country music. And the only country music was honky-tonk. Lefty Frizzell, Patsy Cline, and especially Old Hank remained the benchmarks of artistic accomplishment. Dad would permit newer, slightly innovative artists like Alabama or the Oak Ridge Boys into the house, but for all practical purposes, Dad believed good music ended when the Statler Brothers stopped having chart hits.

I remembered Dad’s rock-ribbed loyalty to old-school authenticity this week, when a generative AI song hit number one on Billboard’s Country Digital Song Sales chart. Breaking Rust appears to be an art project combining nostalgic country sounds with still images; music journalist Billy Dukes tried, without success, to track down the originating artist. Critics, journalists, and armchair philosophers are debating what Breaking Rust means for future commercial music.

“Walk My Walk” is a pleasant, but undistinguished, Nashville hat-act song. Rhythmically, it’s a prison work song, and the lyrics tap into a White working-class motif of “hard livin’ won’t break me.” The digitally generated voice has a deep bass rumble with a nonspecific Southern accent, a sort of dollar-store Johnny Cash. Its message of facing poverty and setbacks with dignity and grace goes back decades in country music.

We shouldn’t over-interpret the track’s success. It topped a relatively obscure chart, not Country Airplay or Hot Country Songs. Because it exists wholly digitally, and the artist is apparently unsigned to a Big Three label, Breaking Rust’s success probably reflects the Spotify algorithm. It bears repeating, as Giblin and Doctorow write, that half of all music in America gets heard through Spotify, giving one algorithm extreme control of American taste.

That’s saying nothing about whether number one means anything anymore. I’ve written previously that, far from reflecting public taste or artistic merit, as it supposedly did during the heyday of Elvis or the Beatles, Billboard now often reserves the top slot for the blandest, most committedly inoffensive tracks. Far from the best song out there, today’s number one is the song most conducive to playing as background music while studying or commuting.

Breaking Rust is the smoothest puree of recent country music, in its sound, its lyrical themes, and even its mid-tempo walking rhythm. I fear I’m repeating myself, as I’ve recently written something similar about AI-generated television. But I’ll repeat myself: AI “art” inevitably results in what mathematicians call “regression toward the mean.” That is, the larger your core sample, the closer your product falls toward dead center.

The AI-generated portrait of Breaking Rust: the ultimate banal Marlboro Man

Yet for algorithm-driven entertainment businesses, regression isn’t a flaw. Country music itself proves this. The genre was founded by hardscrabble people from poor backgrounds, artists who lived the life they described. Hank and Lefty, Jimmie Rodgers and the original Carter Family, were working people who recorded the songs of their friends and communities. Rodgers recorded his final songs practically on his deathbed, his lungs destroyed by the tuberculosis that had forced him off the railroad.

But around 1963, when the British Invasion caused a sea change in rock music, several former rockers reinvented themselves as country singers. Johnny Cash, Conway Twitty, Jerry Lee Lewis, and even Elvis tapped into the twangy side of their Deep South artistic influences and became country musicians. But in changing their style, they also changed their market: their songs became much more past-oriented and nostalgic.

Growing up, I listened to songs like Cash’s “Tennessee Flat Top Box,” the Statlers’ “Do You Remember These,” and Johnny Lee’s “Cherokee Fiddle,” all of which yearned for a past uncluttered by responsibility, noise, and haste. Country music’s founders sang about working-class rural living—and, in the alt-country enclaves, many still do. But after 1963, naïve yearning for bygone simplicity dominated the radio mainstream.

Even before generative AI, the system reinforced itself. Longing for a simplified past became the soil where new artists grew. Neotraditionalists like Dwight Yoakam, Mary Chapin Carpenter, and Toby Keith built careers around pretending to be older than they were, and wishing they lived in a countrified candyland that only existed in songwriters’ imaginations. Much as I love Johnny and Dwight, their nostalgia became a trap I needed to escape.

In that regard, Breaking Rust is the inevitable outcome. Country music, which began life as White soul music, has been a nostalgia factory since before I was born. Moving out of my father’s house, I had to unlearn the genre’s myths of honest work, stoicism, and White hardship. Other fans chose not to unlearn it, and Breaking Rust has doubled down, becoming the perfect recursive loop of self-inflicted White persecution.

Monday, November 10, 2025

Are Age Gaps the New Scarlet Letter?

Mayor-elect Zohran Mamdani (left) and his wife, Rama Duwaji (stock photo)

Let’s start with this reality: no serious commentator cares about the six-year age gap between Mayor-elect Zohran Mamdani and his wife, Rama Duwaji. The objections, which gained brief modish attention on social media last week, appear largely invented by a pseudonymous columnist for online gossip rag Nicki Swift. Yet it bothers me because it reflects a larger pattern I’ve seen developing in online sexual discourse.

The columnist claims Mamdani’s “marriage is full of glaringly obvious red flags,” but lists only two: the fact that Mamdani and Duwaji met on a dating app, and their age difference. Considering that apps have become perhaps the most common way couples meet, supplanting the previous favorite, introduction through mutual friends, this complaint seems disingenuous. But it bears mention, because it bespeaks the author’s preconceptions.

Strictly anecdotally, I’ve observed a rising tide of moral panic around sexual mores. While American society has become increasingly accepting of the alphabet soup of Queer identities, we’ve become more puritanical, more prescriptive, about other aspects of sexuality. Age gaps in particular cause many armchair commentators extreme panic. Again, I’ve observed this purely through anecdote, but the paranoia about “groomers” and age-based abuse has become volatile.

Some weeks ago, another pseudonymous poster asked readers to settle a debate with her husband. A couple of their acquaintance had an age gap: he was 40, she was 30. The poster said her husband claimed this made the man in that relationship a “pedophile.” That reminded me of an older post I’d read, a woman claiming her girlfriends had called her boyfriend a “groomer” because he was 37, and she was 30.

Words like “pedophile” and “groomer” have specific meanings. In strict psychological language, a pedophile has a pathological desire to have sex with a partner who hasn’t yet experienced puberty. More generally, we use the word to describe someone who desires to have sex with a partner deemed too young to consent, regardless of biological maturity. And a groomer systematically coaches someone underage to regard “bad touch” as acceptable behavior.

These accusations share one theme: the capacity to consent. We deem minors to lack the moral capacity to consent to sex, even when their bodies are developed enough to have sex. And we deem minors’ moral development to be malleable, subject to being distorted by malignant influence. These considerations are real. Children, by dint of being children, don’t understand what it means to consent, especially to somebody who’s charming, influential, or dangerous.

young couple on a date

My problem arises when we broaden the definition of minority. When we accuse somebody of “pedophilia” for having a relationship with a 30-year-old woman, we’re declaring that grown women are incapable of making moral decisions. We literally infantilize adult women, reducing them to the moral incapacity of a child. We take a serious concern, that manipulative adults will abuse children, and stretch it to situations where it doesn’t apply.

Our Nicki Swift columnist quotes a Twitter user saying, of the Mamdani marriage: “When he was 18, she would've been 12. Can I get a YIKES??” I respond with the counterquestion: so what? Had they known one another at that age, it would’ve been skeevy, and potentially actionable, depending on their relationship. But they didn’t, so the comparison is immaterial. They didn’t meet until their twenties, when both were adults.

The reasonable desire to protect children from predatory adults has morphed into a belief that we must protect adults from each other. In so doing, we reduce adults, mostly grown women, to the stunted moral agency of overgrown adolescents. Please don’t misunderstand: I say “we” do this, but it’s in the hands of online moral scolds who mean well, but have reduced the issue to caricature.

British sociologist Stanley Cohen coined the term “moral panic” to describe this distortion. A social concern, usually grounded in a kernel of truth, captures the attention of people with media access. But those people perceive the problem as more pervasive than it actually is, and sell their audience on an imminent crisis. The Satanic Panic of the 1980s is a good example. So is QAnon, which alleges sexual predation among America’s elites.

Social media puts more people in positions to start, or amplify, moral panics. That’s what the Nicki Swift article does: it takes a realistic concern about sexual abuse, and applies it in a slapdash way. But social media also gives more people the opportunity to stand in the way of cascading panic. That’s the point we’re at now: we need to refuse to lose our composure about made-up problems.

Tuesday, November 4, 2025

The First and Last Days of AI TV

When this screencap of a hypothetical AI-generated sitcom hit social media, critics focused, perhaps not unfairly, on the phantom hand holding the floating guitar. Notice also that it mashes Penny’s kitchen together with Monica Geller’s living room, or that some pieces of furniture are vanishing into other pieces, Beetlejuice-style. Or that both women are clearly Alyson Hannigan circa 2010. Visually, it’s a mess.

The source video was yanked shortly after posting, so heaven only knows whether this “sitcom” is funny. But how many of them are? Based on this orphaned screencap, it looks like an unholy collaboration between Chuck Lorre and Hieronymus Bosch. Worse, it looks like an inbred bastard child of every White sitcom of the last thirty years. Which explains why the poster, a generative AI enthusiast, thinks it’s great.

It also explains why, if programmers could successfully remove the psychedelic artifacts, networks will likely buy it. Since Friends debuted in 1994, most American sitcoms have featured some variation on its beat sheet. How I Met Your Mother, The Big Bang Theory, Everybody Loves Raymond, and Community have been largely reskinned variations on Friends. But even Friends itself simply scrubbed the Blackness from Living Single and resold it.

Generative AI consumes the largest possible sample of existing media, whether images or text, and regurgitates something passably similar. The law of averages foretells that, the larger our core sample, the more the product will drift toward the middle. Thus, an AI enthusiast will regard the blandest, most derivative product as a triumph of the art form. We know it works because it resembles everything we’ve seen before.

Unfortunately, the network commissioners who buy content will probably also love it. I’ve written before, primarily about music, how the corporate conglomerates that control our media have become risk-averse. They literally have analytic software and focus groups, modeled on the latest behavioral economics, to reassure shareholders that all new content sufficiently resembles everything that’s been successful before.

We could argue that this happens because corporate overlords see their product not as art, but as content. It also reflects chokepoint capitalism, since all music, to succeed, must pass Spotify’s filters, and all TV ultimately heads for Hulu, Netflix, or Disney+. But it also reflects changes in consumption. The streaming services that control our viewing and listening require massive audiences to keep subscription prices low, and 🌟art🌟 can’t do that.

Gone are the days of subcultures and audience segmentation. Streaming services generate audiences, not by creating something innovative or uplifting, but by remaining minimally offensive. Admittedly, this isn’t new in mass media: network TV’s largely interchangeable Westerns of the late 1950s and early ’60s segued into the spy thrillers of the later 1960s. I’ve already commented on how Friends’ cinematic, character-driven style displaced Norman Lear-style three-camera sitcoms anchored to bankable stars.

Generative AI will reduce this trend to silliness. The nameless spec clip memorialized above shows how the system reduces storytelling so thoroughly that it recycles the same actress twice. If the existing model reduces BIPOC actors to contract players holding supporting parts, generative AI will eliminate them completely. One needn’t ponder hard how it will treat disabled actors, or even capable but slightly funny-looking actors. Like me.

Children and the aged are also underrepresented on existing TV, and often reduced to stereotyped, moralistic roles. Aging actors already struggle to get parts, especially if they don’t have established star power—and that goes double for women. If the sample of existing TV shows and movies has few children or older adults, the middle-of-the-road product will inevitably have even fewer, a trend likely to compound over time.

I don’t want to romanticize the past, especially the recent past. Disney and Nickelodeon currently shepherd talent like Olivia Rodrigo and Ariana Grande from childhood, molding them according to data and metrics, creating a media landscape already struggling with anodyne blandness. Constant remakes of lucrative IPs like Dune, Harry Potter, Gossip Girl, and Stephen King reveal corporate leadership paralyzed with fear at even the most rudimentary innovation.

AI is thus not a declension, but an extension of already existing trends toward the mean. It accelerates corporate media’s desire to produce the blandest, most studiously inoffensive oatmeal. In the constant struggle between art and commerce, it supports the bean counters. By controlling the kinds of stories and faces available to viewers, it promises to continue starving the public imagination at a pace once unimaginable.

In short, it helps nobody but the shareholders. One wonders what will help them, when we shut off the TV and read a book.

Saturday, November 1, 2025

SNAP Disaster and Our Un-Free Market

By the time you read this, you’ll already know whether America’s federal government has suspended SNAP (“food stamp”) benefits, or whether President TACO has done what he regularly does, reversing himself at the last minute. This of course matters for millions of American families. But in the long term, I question whether the event itself matters more than the systematic weaknesses it exposes in the American economy.

Walmart, America’s largest brick-and-mortar retailer, receives almost a quarter of all SNAP benefit spending, according to Axios. This reflects multiple factors: for instance, that Walmart exists in all fifty states and most territories. But it also reflects their decision to place themselves in chronically underserved neighborhoods and communities. Walmart has a near-monopoly stranglehold on many areas’ food supply, and they use it.

I can’t be the only news junkie who remembers the Great EBT bollix of 2013. I’ve already written about the event itself, and the moralistic outrage it precipitated online. The ways in which Walmart has not only gobbled up its captive market, but has become the necessary go-to for that market’s rush buying, reflect that America’s consumer economy is not free. The corporation has established permanent colonies in its market’s collective imagination.

This didn’t just happen. Informed critics have described how Walmart utilizes public subsidies to corner markets. That government largesse often totals in the billions of dollars, including the SNAP benefits many of its own employees rely on. This isn’t news; I discovered it in the 1990s. Economist John C. Médaille asserts that even the interstate highway system functions as a subsidy, as Walmart’s just-in-time restocking method depends on reliable, wide roads.

Nominally, Walmart remains a private entity controlled by shareholders, mostly the Walton family. But its success has relied heavily on public-private partnerships and government welfare. Given this symbiosis, and the corporation’s ability to manipulate Defense Department levels of money, it bears asking when Walmart, and corporations of its size, stop being private companies, and become de facto shadow government bureaus.

Current events, however, reveal how precarious that market stranglehold really is. The threatened SNAP shutdown has caused fears of Walmart riots—not from the company itself, which rejects the rumors, but from ordinary customers repeating scuttlebutt online. If Walmart controls a neighborhood’s access to food, clothing, and domestic goods, then cutting off that access doesn’t just hurt the community. It also radicalizes its members to demand the goods hidden behind the wall.

Nor do I use the word “radicalize” lightly. The French Revolution happened amid a combination of massive imperial wealth and unequal distribution, to the point where Jean Valjean’s stolen loaf of bread has become iconic. Argentina came close to a populist revolution in 2001 because people couldn’t afford groceries, even though groceries remained available. Argentina kept exporting food, especially beef, during its economic nadir; regular Argentinians just couldn’t afford it.

Appallingly similar conditions exist now. Elon Musk is currently tracking to become the world’s first trillionaire, money he can never possibly spend. Meanwhile, Walmart, as the largest SNAP vendor, has food available to sell, but SNAP recipients, most of whom are employed, simply can’t afford to buy it. If working-class Americans go hungry long enough, one wonders whether we might, for the first time since Bacon’s Rebellion, develop a sense of class solidarity.

Before continuing, I’ll acknowledge I’m slightly contradicting my past self. I recently made excuses for the influence which dollar stores have on America’s rural communities, saying that they create a nascent sense of economic self-determination. Of course, Dollar General has an almost Walmart-like grip on many communities, and they’re as vulnerable to a SNAP shutdown as any other monopolistic retailer.

Yet I stand by my prior opinion. I admitted that Dollar General is imperfect, and called it a transitional stage for rural economics. I’d argue that by teaching country dwellers that they, too, deserve nice things, these stores also teach country dwellers to join forces and demand nice things. I still believe that’s true, and it encourages small-town people, who often swallow the “rugged individualism” economic pill, to see their situation as collective, not atomized.

These events reveal that Walmart, Dollar General, and other near-monopolies rely upon government support on both the supply and the demand ends. A market economy in a massively technological society requires a stabilizing hand that remains responsive to the populace, not the plutocrats. We may reorganize that stabilizing hand, but if it does the job of a government, then that’s what it is. Modernity demands, not negates, a strong state.

Monday, October 27, 2025

Secrets Buried in the World’s Darkest Corners

T. Kingfisher, What Feasts at Night

A wandering soldier returns to the ancestral hunting lodge deep within the Balkan mountains, finding it fallen into disrepair. Alex Easton, a haunted veteran who sacrificed ancestry, home, and even gender to fight for their country, has lost everything except their name and this house. But with no caretaker keeping the house’s dark spirits at bay, something stirs deep within the walls. A shameful past is now walking abroad.

This second novella in Kingfisher’s “Sworn Soldier” series is definitely a bridge volume. The first novella focused on Easton’s struggle to understand what happened at the Usher estate—if you missed it, Volume One retold Poe’s “Fall of the House of Usher.” This one, which doesn’t appear to be based on existing literature, is more driven by character, and by Easton’s inability to grapple with their wartime past.

The village of Wolf’s Ear, at the base of the mountain which Easton’s lodge bestrides, wants nothing to do with Easton or the lodge, but nobody will say why. The superstitious, but largely anonymous, population resembles the massed villagers in Young Frankenstein, terrified of an ancestral curse they can’t actually describe anymore. Easton struggles for explanations, even as a terrifying woman begins stalking the old soldier’s waking dreams.

Though Kingfisher mentioned Easton’s wartime history in the previous volume, she didn’t much go into it. This time, Easton, our first-person narrator, describes more of their autobiography, which mixes trauma and boredom in equal measure. The war left Easton with persistent nightmares and a lingering paranoia, but Easton’s batman, Angus, repeatedly reminds Easton that it isn’t paranoia if monsters really are chasing you.

Kingfisher also delves more into the history and culture of her fictional nation of Gallacia. She depicts a country which has been slowly, persistently sapped of joy, not only by its ancestral enemies, but by its own loaded history. Easton, with their ungendered name, has chosen a life of sexless gallantry, but found themself sucked into old grudges and ancient wounds. Those old metaphors become literal in the events of this story.

T. Kingfisher (a known and public
pseudonym for Ursula Vernon)

We witness a marked tonal shift from the first volume. In that book, the monster Easton confronts has physical mass, and can be defeated through strength and intellect. This story leaves behind the scientific elements in favor of folk horror and the supernatural. Easton, Angus, and their supporting cast have to confront the lingering shadows of a medieval past that their ancestors already killed, but which can never die.

This results in a looser, more introspective story than the previous one. Where Easton found themself carried along by Poe’s plot before, this time, Easton must dig into their own trauma and their people’s collective guilt. This means much longer passages of autobiographical rumination than before, passages that feel slow early on because they’re setting readers up for a more substantive reveal later on.

Perhaps most notably, this second volume isn’t freestanding. In contemporary genre fiction, many authors use one of two patterns in series fiction. Either the story is actually a single narrative broken into separate volumes, or each story is independent and individually articulated, letting readers jump into the series already in progress. Kingfisher does neither. Instead, this is a separate story, but one that relies upon exposition she provided in the previous volume.

Readers might notice that I’ve said little about the story’s plot, or the monster Easton confronts. Indeed, Kingfisher herself withholds the monster until fairly late (comparatively speaking: the story runs under 150 loosely spaced pages). In essence, this isn’t a story about a definable monster, like the previous volume was; it’s about Easton’s inability to confront their own demons, and those of their country, until forced into a corner.

Maybe this sounds like damning with faint praise. I’ll acknowledge, readers shouldn’t dive into this one cold; Easton’s introspective ruminations will probably feel long, maudlin, and rootless to anyone who missed their previous adventure. But for those who haven’t, this novella provides the backstory and culture that Kingfisher elided in the prior, plot-centric volume. This is a book for readers who enjoyed Volume One, not for newcomers.

Again, this is clearly a bridge narrative, building into another story—Volume Three dropped while I was reading this one. Kingfisher presumably intends to establish Easton’s character for another, perhaps more substantial, confrontation in a pending story. Because I read and enjoyed Volume One, I also found plenty to love here, but only because I’m already invested in Kingfisher’s character and setting. This is a book for fans, not beginners.

Saturday, October 25, 2025

The East Wing and America’s Disposable Economy

An NBC News photo shows the East Wing demolition in process this week

Although a majority of Americans reportedly disapprove of this week’s demolition of the East Wing, the intensity of that disapproval follows largely partisan lines. The White House extension, first built in 1902, stood between the president and his long-sought “ballroom,” and approval of the replacement largely tracks with personal support for the president. Less interesting to me than the exact details, though, are the underlying beliefs the demolition exposes.

I cannot help perceiving the president’s actions through my seven years in the construction industry. Unlike the president, who comes from the moneyed, white-collar end of property development, I worked the jobsites, cordless impact driver in hand, tape measure hanging off my belt. The president works with schematic drawings, models, and CAD renderings. For each of us, the other’s perspective is an abstraction, separate from our particular job.

To architects, contractors, and project managers, only the finished product matters. The process is as ephemeral as pixie dust. Management wants to complete construction on time and under budget; everything else serves that goal. Existing structures, built with outdated technology and, probably, with hand tools, are an impediment. To the property developer, historic preservation is a costly nuisance that impedes progress, while demolition is cost-efficient.

Historic preservation is, ultimately, an expression of sentimentality. The Democratic-aligned objection to the East Wing demolition has focused heavily on historic photographs taken among the picturesque colonnade, or memories of the officials—especially First Ladies—who conducted White House business in those offices. They revere the now-lost East Wing for the same reason you’d rather see the church where your parents were married restored than razed.

But American economics doesn’t place dollar values on sentiment. The president ran all three of his campaigns on rhetoric of cost efficiency, profitability, and business acumen. The philosophy of shareholder value, which has dominated American economics since at least the 1970s, sees ordinary people not as moral agents, but as arbiters of price. A historic building is worth exactly what it costs to outbid the developer who’d rather flatten it.

The president with one of the hand-sized architectural models of his grandiose
monuments, which he loves waving around for journalists

My first construction industry job involved building a new city high school to replace the nearly seventy-year-old existing building. The city’s cost-benefit analysis deemed the existing building too expensive to save. This decision distressed many people who’d lived in town their entire lives, and had deep-seated formative memories associated with the existing building. But their feelings weren’t sufficient to absorb the tax bill necessary for costly restorations.

Many public and municipal buildings, including schools, government offices, and universities, still have exterior facades built from slab marble, sandstone, and other valuable materials worth preserving. But those are merely cosmetic. The rise of steel-frame architecture means that a small number of girders carry the building’s weight. Interiors are built from tin studs, gypsum drywall, and poured concrete, materials with a maximum life expectancy of about fifty years.

Commercial buildings and houses are even worse. Most such buildings constructed after the Eisenhower Administration need significant restoration after around thirty years, because cinder block masonry, OSB underfloors, and aluminum HVAC systems will simply rot. Several American suburbs, once desirable and exclusive, are now saddled with the blight of abandoned malls and big-box retail stores: too decrepit to use, too expensive to restore, and too dangerous to demolish.

Nor is this accidental. Since the Levittown building boom of the postwar years, the field of architecture has shifted its core principles away from durability and toward cost reduction. The White House’s facade hasn’t been much amended since the cornerstone was laid in 1792. My city’s former high school, by contrast, decayed so spectacularly that I broke pieces off its seventy-year-old face by kicking them with a steel-toed boot.

Our president claims the East Wing demolition became necessary after discussion with architects, but I don’t believe that. After seven years in the industry, I know that, to the bean counters who control modern design and construction, demolition is always the first choice. Unless government agencies or private conservationists step in to prevent it, contractors regard old buildings as a private cost, not a community asset.

To America’s capitalist class, everything is as disposable as an aluminum soda-pop can. People like our president measure buildings’ value not by their historic legacy, but by their cost efficiency. The East Wing was worth less, to him, than whatever supposed value he can extract from his extravagant “ballroom.” The same will apply, eventually, to your historic downtown, your local schools, or your house.

Nothing, anywhere, is safe from the bulldozers and grandiosity of America’s ruling class.

Thursday, October 16, 2025

In Praise(ish) of Dollar Stores

The Dollar General I've grown reliant on, in a Nebraska town of only 1000 people

We’ve all heard the ubiquitous complaints about Dollar General and similar “dollar stores,” though that name is increasingly anachronistic. They keep prices low by paying workers poorly, running perpetually short-staffed, and excluding local artisans. Their modular architecture is deaf to local culture and design. They distort our ideas of what commercial goods actually cost. Even John Oliver dedicated a block of valuable HBO time to disparaging how Dollar General hurts workers and the local community.

Like a good economic progressive, I internalized these arguments for years. I prided myself on avoiding dollar stores like plague pits. I complained bitterly when a treasured local business got flattened to build a Family Dollar store. Even when, on an extended Missouri holiday fifteen minutes from the nearest grocery store, I finally gave in and entered the Dollar General only two minutes away, I rationalized it by saying I remained dedicated to local businesses.

Through the last year, though, I’ve found my perspective shifting. I find dollar stores, and Dollar General specifically, more necessary than I previously realized. Dollar General is like audiobooks, or a Slanket: products invented to streamline disabled people’s lives, derided by able-bodied elites who don’t realize how privileged their taunts really are. Likewise, Manhattan-based John Oliver, and writers in coastal California, might not realize how dollar stores improve life in rural America.

In the year I’ve spent caring for an aged relative in rural Nebraska, I’ve become deeply reliant on Dollar General. It keeps longer hours than the local grocery store, auto parts retailer, or pharmacy. This has made it absolutely essential for buying convenience foods, over-the-counter meds, and motor oil. That’s saying nothing of other products that nobody else sells locally, like home décor, kitchen supplies, and paperback books. Without Dollar General, these commodities would disappear.

Not that semi-luxury commodities don’t exist in rural areas. But without Dollar General, the nearest big-box retailer selling these products would be over an hour’s drive away, or else Amazon, which delivers to rural areas only sluggishly. The stereotyped image of rural America, with its drab houses, faded curtains, and threadbare clothes, reflects how retailers decline to invest outside already-lucrative markets. Dollar General, by contrast, arguably creates lucrative markets in marginal or abandoned areas.

The small but diverse produce section in my local Dollar General

City slickers might not realize how inaccessible food is in rural areas. The term “food desert” usually describes urban cores, especially non-White neighborhoods, where, as Dr. King observed, fruits and vegetables are unaffordable when they’re available at all. But rural America is heavily food-desertified too. Because economic forces, especially banks, push farmers to abandon diverse agriculture for industrial monocropping, farmers seldom eat what they grow. In the northern Great Plains, farms mostly grow livestock feed, not human food.

These conditions didn’t just happen. Macroeconomics isn’t inevitable, like rain. Top-level American economic policy flooded America’s central corridor with population after the Civil War, via the Homestead Act. But into the Twentieth Century, that same economic policy largely abandoned the homesteader population in favor of urban industrialization. America still needs its rural agricultural population; it just doesn’t provide that population with meaningful support anymore, since they aren’t lucrative donors. Nobody ever got rich hoeing corn.

Dollar stores are the logical market-driven response to this abandonment. Rural communities have some money, and they want—and deserve—nice things, like attractive curtains, affordable art supplies, and small electronics. Dollar General recognized an unmet market, and met it. Sure, they manipulate wholesalers and use just-in-time restocking to keep prices artificially cheap, in ways unsupported local businesses just can’t match. But they aren’t culpable for the economic policies that made Pop’s old-timey general store fiscally unviable.

Even their notoriously impersonal architecture reflects America’s top-level economic policy. Sure, I’d love it if dollar stores hired local architects to conform their buildings to regional aesthetics. But Walmart and Target already priced architects out of small markets, and America’s rural economic abandonment means the dominant design style is often “decay.” Dollar General’s warehouse-like design and modular construction let them move into marginal markets quickly without incurring high amortized overhead, which just makes good business sense.

Don’t misunderstand me: dollar chains’ low pay, homogeneous product, and deafness to local industry aren’t sustainable. I’ve grown fond of Dollar General, but at best, they’re a transitional response to larger forces. Then again, John Oliver’s implicit expectation of shoving a Whole Foods into every small market is equally unrealistic. If we use the market access which dollar stores provide to build toward something better, then these chains will have helped America through its current economic malaise.

Monday, October 6, 2025

To Be Young Is To Know Solitude

1001 Books To Read Before Your Kindle Battery Dies, Part 121
S.E. Hinton, The Outsiders

Ponyboy Curtis never wanted to join a street gang and fight with switchblades, but survival made it necessary. The streets are divided between the working-class Greasers, who pride themselves on their swagger and their lustrous hair, and the Socs, who drive flashy cars and wear the slickest clothes money can buy. The groups fight, not because they have personal animosity, but because it’s what they do. Existential boredom leaves them with little besides the fight.

S.E. Hinton’s debut novel, written when she was still in high school, is sometimes credited as the beginning of the young adult genre. Other authors had written for teenaged readers before, but Hinton took an unprecedented tack. She wrote a teenager’s story of conflict and incipient adulthood, not for finger-wagging moralistic purposes, but simply because it’s his story. Hinton refuses to pass judgement, even when Ponyboy, her first-person narrator, spirals into self-recrimination.

The Greasers, by definition, have nothing. Ponyboy is an orphan, raised by his eldest brother, who’s forced to become a parent to teens at only twenty. His middle brother dropped out to get a job, basically because that’s what middle kids do. Other Greasers dodge drunken parents, or practice fights in city parks, simply to pass the time. Ponyboy admits he doesn’t like most of them, but calls them his friends, because they can rely only upon one another.

Hinton reportedly began writing this novel because a high-school friend received a vicious beat-down, simply for walking unaccompanied. This event, and the trauma it caused not only him but everyone who loved him, becomes the inciting incident of the novel. Her feuding gangs hate one another without knowing one another, and fight because it gives their otherwise shapeless lives meaning. Hinton implies the battles would stop if the participants simply spoke to one another.

One evening at the drive-in, Ponyboy and his friends encounter some well-scrubbed, middle-class girls. These girls rebuff the more aggressive Greasers, but one of them finds Ponyboy, with his big eyes and poetic soul, interesting. She wants to learn how the other half lives. But since Greasers and Socs never talk, this innocent encounter gets quickly misconstrued. An argument turns into a fight, turns into a knifing. Ponyboy flees a manslaughter accusation.

S.E. Hinton

Hinton never gives specific dates, and few places. Her gang of Greasers prefers Elvis, while the Socs favor the Beatles, which gives an approximate time. And her descriptions of dusty city streets, high-school rodeos, and rolling country hills locate the story in the southern Great Plains. Observant readers will recognize Hinton’s native Tulsa, Oklahoma. Which leads to an important question: is being rich in America’s despised hinterland any better than being poor?

The entire novel asks how an innocent, poetic teenager would handle everything that could go wrong in life, going wrong in quick succession. As the youngest Greaser, at only fourteen, Ponyboy is unprepared for battles against older, larger boys. When one battle leaves a Soc dead, he’s unprepared for the fugitive life. Isolation forces him into soul-searching that most boys don’t face until much later. Even soul-searching uncovers some conclusions he can’t yet handle.

Ponyboy and his friends find themselves in a no-win situation. If they flee their crimes, they’ll live as fugitives forever, with nothing to show for lives that have barely begun. But if they take accountability, they’ll face a criminal justice system that, they already understand, is slanted against poor, long-haired teenagers, and they’ll still lose everything. They find themselves forced into a world where choices lack the moral clarity of children’s stories and simple fables they learned in school.

Perhaps more than the story itself, Hinton’s narrative clarity differs from her contemporaries’. Other youth narrators, like Scout Finch and Holden Caulfield, aren’t really children; they’re adults remembering childhood from an Olympus-like perch. Ponyboy is a real kid, struggling to come to grips with the adult responsibilities thrust upon him. He lacks mature guidance, having only the advice of other kids trapped in the same circumstances. He survives, not because adults give him easy answers, but because he keeps moving when everything around him collapses.

At only 180 pages, written in a conversational tone, this book isn’t difficult reading. Its intended high-school audience will read it quickly, but they’ll also find themselves confronted with questions they can’t put aside nearly so easily. Adult readers will struggle with many of the novel’s themes of existentialism, purpose, and identity. The deep-seated social dislocation which Hinton identified in post-WWII America hasn’t been resolved, nearly sixty years later.

Friday, September 19, 2025

Crime News is Bad News

Alec Karakatsanis, Copaganda: How Police and the Media Manipulate Our News

I’d bet you a ham sandwich that, if you surveyed a reasonable sample of Americans, most would agree that we expect too much of the police. We expect them to investigate violent crimes, enforce traffic laws, provide crisis intervention, control wild animals, protect the environment, and safeguard private property. It’s unreasonable. Yet after the 2020 George Floyd protests and “Defund the Police,” most metropolitan police departments are even more overfunded and expected to perform miracles.

American police critics often use “copaganda” to describe mass-media entertainment, particularly police dramas, that makes law enforcement look both more necessary and more effective than it is. Defense attorney Alec Karakatsanis shifts the focus onto mainstream journalism, which he accuses not only of covering for the police, but of burying any reasonable alternative suggestions. Our news media have the ability to decide what facts their audiences know, and what ideas the public considers “acceptable” or “mainstream.”

Why, Karakatsanis asks, do news outlets consistently lead with murder, sexual assault, and robbery? Police spend less than five percent of their time and budget on these crimes. Journalists regularly omit reporting on tax evasion, wage theft, or environmental degradation, which are crimes. But scary stories of violence sell precious ad space and keep audiences glued. Police know this, too; many PDs, hungry for voters and resources, have vastly increased public relations budgets since 2020.

Growing police budgets direct resources away from tools that address the root causes of crime. Attempts to redress poverty, housing scarcity, structural racism, and collapsing communities seem abstract and squishy when crime leads the headlines. Indeed, many news consumers don’t know such efforts even exist, because journalists cover them superficially, compared to the in-depth analysis that street crime regularly receives. Whatever journalists bother to cover winds up being what lives in audiences’ minds.

Karakatsanis sees three fundamental problems with crime reporting. First, journalists report words from politicians, police spokespeople, property owners, and other defenders of the status quo without meaningful analysis. Second, weasel words and deflections conceal writers’ opinions behind seemingly neutral language. Third, news consumers lack the critical-reading background necessary to spot the first two problems. This leaves a voting populace frequently unaware that alternatives to the status quo exist, and have been tried.

Alec Karakatsanis

Scholars write extensively about law enforcement, and its alternatives. Karakatsanis describes many approaches to addressing crime without involving what he calls the “punishment bureaucracy.” (He finds the phrase “criminal justice system” falsely anodyne.) Some alternatives are strictly hypothetical, while others have been applied, mostly with success. But newspaper readers or basic cable watchers seldom see them. As Karakatsanis writes, “The news consistently fails to explain the substance of legitimate critiques of police, prosecutors, and prisons.”

Some readers might expect Karakatsanis’ analysis to have a partisan bent. Republicans have persistently made bank with “tough on crime” posturing. But journalistic defense of police has a bipartisan history. Nominally progressive politicians, and the reporters who support them, often use police to bolster their legislative bona fides. At least on “law and order” matters, there’s little daylight between the two major parties. Despite this, journalists frequently present any alternative as a Mad Max hellscape.

Worse, the problem is circular. Journalists lead with crime reportage, even now, when crime statistics are at near-record lows. Which crimes reporters consider worth covering become the crimes politicians campaign and legislate on. The public’s perception of crime, irrespective of real-world conditions, justifies draconian police interventions, which wind up contributing to further crime. The statistically meaningless incident becomes the justification for crackdowns. Or, as Karakatsanis puts it, “Bad curation of anecdote leads to bad policy.”

These outcomes aren’t inevitable. Karakatsanis makes suggestions for how journalists could better handle volatile stories, though he admits that’s unlikely to happen under the current system of perverse incentives. Until then, Karakatsanis describes a more engaged, critical approach to news consumption. That includes a better understanding of the language, sourcing, and references which journalists use, and a willingness to seek what the current story omits. It also means asking who’s helped by the current framing.

The situation is indeed bleak. Karakatsanis makes a persuasive case that we have an incomplete, blinkered understanding of our justice landscape. The stories which define our knowledge and explain our opinions are frequently slanted or poorly representative. But beneath it all, he maintains an optimism that conditions could change, if we, the masses, consume our news more critically. We aren’t beholden to the news industrialists who distribute our stories. We can take control, if we want.

Wednesday, September 17, 2025

“Debate” Forever, Accomplish Anything Never

Charlie Kirk

Of the numerous encomiums following Charlie Kirk’s assassination last week, I’ve seen many writers praise his love of “debate.” They’ve extolled his supposed fondness for college speaking tours whose main feature was the open-mic question session, in which he used his proven acumen to dismantle undergraduates’ post-adolescent leftist idealism. He died under a banner emblazoned with his speaking slogan: “Prove Me Wrong.”

Recent public-facing conservatives have enjoyed the appearance of “debate.” Like Kirk, Ben Shapiro and Steven Crowder have made bank playing college campuses, inviting ill-prepared undergrads to challenge their talking points. Crowder’s notorious “Change My Mind” table banner turned him into one of the most eminently memeable recent public figures. And almost without fail, they mopped the floor with anyone who dared challenge them.

Smarter critics than me have shown how these “debates” are, at best, fatuous. Kirk, Shapiro, and Crowder arrive better prepared, often carrying reams of supporting research that undergraduates just don’t have. Shapiro is an attorney, a graduate of Harvard Law School, while Crowder is a failed stand-up comedian, so they’re simply better trained at extemporaneous speaking. Kirk’s training was informal, but his mentor, advertising exec Bill Montgomery, coached him well.

Myths of Socratic dialog, high school and college debate clubs, and quadrennial Presidential debates have falsely elevated the ideal of “debate.” The notion that, if we talk long enough, we’ll eventually overcome our differences underlies the principles of American representative democracy. College freshman comp classes survive on the notion that we can resolve our disputations through language. As a former comp teacher, I somewhat support that position.

However, we’ve seen how that unfolds in real life. As Rampton and Stauber write, defenders of the status quo prevent real change on important issues by sustaining debate indefinitely. As long as reasonable-sounding people keep discussing how to handle racism, war, police violence, global warming, and other issues on basic cable news, they create the illusion that controversies remain unresolved. Conservatives need not win, only keep the debate alive.

Public-facing conservatives on college campuses are the reductio ad absurdum of this reality. When men in their thirties, trained to speak quickly and seize on niggling verbal inconsistencies, square off against wide-eyed undergrads, they look victorious. But that’s an illusion, created by the fact that they control the field. Just because an attorney can conduct cross-examination, or a comedian can do crowd work, doesn’t mean they’re correct.

Plato and Aristotle, depicted by Raphael

Just as importantly, the changing technological landscape means students have less information in reserve for extemporaneous discussion. Back in my teaching days, technological utopians claimed that students having information in reserve was less important than being able to access information on an as-needed basis. But these “debates” prove that, in the marketplace of image, knowing your subject matters when your opponent is stuck Googling on their smartphone.

Since I left teaching, ChatGPT and other “large language models” have diminished students’ need to formulate ideas in any depth. As I told my students, we don’t write only to communicate our ideas to others; writing also crystallizes our vague, ephemeral thoughts into a useful form, via language. But if students delegate that responsibility to artificial “intelligence,” they can’t know their own ideas, much less defend them on the fly.

Higher education, therefore, leaves students ill-prepared not only to participate in Charlie Kirk-style “debates,” but also to judge whether anybody’s ideas run deeper than street theatre. I don’t blame teachers; I’ve known too many who’ve resisted exactly this outcome. Rather, it’s a combination of bloated administration, regulations handed down by ill-informed legislatures, and a PR campaign that made Kirk look more erudite than he actually was.

Socrates saw his dialectical method not as an abstract philosophical good, but as an approach to civic governance. In Plato’s Republic and Phaedrus, he argues that deep thinking and verbal acumen train up worthy, empathetic rulers. But his method required participants who went beyond mere forms: participants sophisticated enough to admit when they were beaten, and to turn words into substantive action.

Charlie Kirk was an avatar of a debate structure that prizes fast talking over deep thinking. His ability to steamroll students barely out of high school looks impressive to people who watch debates as spectator sport. But his approach favors form over substance, and winning the debate over finding the better idea. He was exactly the kind of rhetorician that Socrates considered an enemy of the Athenian people.

This produces a society that’s talked out, but too tired to act.