
Saturday, February 14, 2026

Lee Brice in Country Music’s Nostalgia Pits

Lee Brice (promo photo)

Lee Brice debuted his new song, “Country Nowadays,” at the TPUSA Super Bowl halftime show on February 8, and it was… disappointing. Brice visibly struggled to fingerpick and sing at the same time, and gargled into the microphone with a diminished rage presumably meant to evoke J.J. Cale. The product sounded like an apprentice singer-songwriter struggling through an open-mic night in a less reputable Nashville nightclub.

More attention, though, has stuck to Brice’s lyrics. The entire show ran over half an hour, but pundits have replayed the same fifteen seconds of Brice moaning the opening lines:

I just want to cut my grass, feed my dogs, wear my boots
Not turn the TV on, sit and watch the evening news
Be told if I tell my own daughter that little boys ain’t little girls
I’d be up the creek in hot water in this cancel-your-ass world.

Jon Stewart, that paragon of nonpartisan fairness, crowed that nobody’s stopping Brice from cutting his grass, feeding his dogs, or wearing his boots. Like that’s a winning stroke. Focusing on Brice’s banal laundry list misses the point: Brice actively aspires to be middle-class and nondescript. But he believes that knowing and caring about other people’s issues makes him oppressed in a diverse, complex world.

One recalls the ubiquitous 2012 cartoon which circulated on social media with its attribution and copyright information cropped off. A man with a military haircut and Marine Corps sleeve stripes repeatedly orders “Just coffee, black.” A spike-haired barista with a nose ring tries to upsell him several specialty coffees he doesn’t want. Of course, nobody’s ever really had this interaction, but many people think they have.

Both Lee Brice and the coffee cartoonist aspire to live in a consistent, low-friction world. If your understanding of the recent-ish past comes from mass media, you might imagine the world lacked conflict, besides the acceptable conflict of the Cold War. John Wayne movies, Leave It to Beaver, and mid-century paperback novels presented a morally concise and economically stable world, in which White protagonists could restore balance by swinging a fist.

The coffee cartoon, with its unreadable signature

By contrast, Brice and the coffee cartoonist face the same existential terror: the world doesn’t center me anymore. Yes, I said “existential terror.” What Brice sings with maudlin angst, and the cartoon plays for yuks, is a fear-based response, struggling to understand one’s place in the world. We all face that terror when becoming adults, of course. But once upon a time, we Honkies had social roles written for us.

I’ve said this before, but it bears repeating: “bein’ country,” as Brice sang, today means being assiduously anti-modern. Country music’s founders, particularly the Carter Family and Jimmie Rodgers, were assiduously engaged with current events in the Great Depression. This especially includes A.P. Carter, who couldn’t have written his greatest music without Esley Riddle, a disabled Black guitarist. Country’s origins were manifestly progressive.

But around 1964, when the Beatles overtook the pop charts, several former rockers with Southern roots found themselves artistically homeless. Johnny Cash, Conway Twitty, Jerry Lee Lewis, and others managed to reinvent themselves as country musicians by simply emphasizing their native twang. But their music shifted themes distinctly. Their lyrics looked backward to beatified sharecropper pasts, peacefully sanded of economic inequality and political friction.

In 2004, Tim McGraw released “Back When,” a similar (though less partisan) love song to the beatified past. McGraw longs for a time “back when a Coke was a Coke, and a screw was a screw.” I don’t know whether McGraw deliberately channeled Merle Haggard’s 1982 song “Are the Good Times Really Over,” in which he sang “I wish Coke was still cola, and a joint was a bad place to be.”

Haggard notably did something Brice and McGraw don’t: he slapped a date on the “good times.” He sang: “Back before Elvis, or the Beatles.” That is, before 1954, when Haggard turned 17 and saw Lefty Frizzell in concert. Yet Haggard, like McGraw or Brice, doesn’t really yearn for any specific historical moment. He misses the stage of personal development when he didn’t have to make active choices or take responsibility for his actions.

Country musicians, especially men, love to cosplay adulthood, wearing tattered work shirts and pitching their singing voices down. Yet we see this theme developed across decades: virtue exists in the past, when life lacked diversity or conflict, and half-formed youths could nestle in social roles like a hammock. Lee Brice’s political statement, like generations before him, is to refuse to face grown-up reality.

Thursday, November 13, 2025

Breaking Rust: the Final Boss of Country Music

In my father’s household, the only music was country music. And the only country music was honky-tonk. Lefty Frizzell, Patsy Cline, and especially Old Hank remained the only benchmark of artistic accomplishment. Dad would permit newer, slightly innovative artists like Alabama or the Oak Ridge Boys into the house, but for all practical purposes, Dad believed good music ended when the Statler Brothers stopped having chart hits.

I remembered Dad’s rock-ribbed loyalty to old-school authenticity this week, when a generative AI song hit number one on Billboard’s Country Digital Song Sales chart. Breaking Rust appears to be an art project combining nostalgic country sounds with still images; music journalist Billy Dukes tried, without success, to track down the originating artist. Critics, journalists, and armchair philosophers are debating what Breaking Rust means for future commercial music.

“Walk My Walk” is a pleasant, but undistinguished, Nashville hat-act song. Rhythmically, it’s a prison work song, and the lyrics tap into a White working-class motif of “hard livin’ won’t break me.” The digitally generated voice has a deep bass rumble with a nonspecific Southern accent, a sort of dollar-store Johnny Cash. Its message of facing poverty and setbacks with dignity and grace goes back decades in country music.

We shouldn’t over-interpret the track’s success. It topped a relatively obscure chart, not Country Airplay or Hot Country Songs. Because it exists wholly digitally, and the artist is apparently unsigned to a Big Three label, Breaking Rust’s success probably reflects the Spotify algorithm. It bears repeating, as Giblin and Doctorow write, that half of all music in America gets heard through Spotify, giving one algorithm extreme control of American taste.

That’s saying nothing about whether number one means anything anymore. I’ve written previously that, far from reflecting public taste or artistic merit, as it supposedly did during the heyday of Elvis or the Beatles, Billboard now often reserves the top slot for the blandest, most committedly inoffensive tracks. Far from the best song out there, today’s number one is the song most conducive to playing as background music while studying or commuting.

Breaking Rust is the smoothest puree of recent country music, in its sound, its lyrical themes, and even its mid-tempo walking rhythm. I fear I’m repeating myself, as I’ve recently written something similar about AI-generated television. But I’ll repeat myself: AI “art” inevitably results in what mathematicians call “regression toward the mean.” That is, the larger your core sample, the closer your product falls toward dead center.
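
For the curious, here’s a toy Python sketch of that flattening effect. It’s my own illustration with made-up numbers, emphatically not how any actual generative model works: pretend each song in a corpus is a single “distinctiveness” score, and watch what blending does.

import random

# Toy model: each song is one score. Averaging (blending) more songs
# pulls the result toward the corpus mean -- the "dead center" above.
random.seed(42)
corpus = [random.gauss(50, 20) for _ in range(100_000)]

for size in (1, 10, 1_000, 100_000):
    blend = sum(random.sample(corpus, size)) / size
    print(f"blend of {size:>7} songs -> score {blend:6.1f}")

Run it, and the single-song “blend” can land anywhere; the hundred-thousand-song blend lands within a rounding error of 50, every time.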

The AI-generated portrait of Breaking Rust: the ultimate banal Marlboro Man

Yet for algorithm-driven entertainment businesses, regression isn’t a flaw. Country music itself proves this. The genre was founded by hardscrabble people from poor backgrounds, artists who lived the life they described. Hank and Lefty, Jimmie Rodgers and the original Carter Family, were working people who recorded the songs of their friends and communities. Rodgers recorded his final songs literally on his deathbed, because his railroad work had destroyed his lungs.

But around 1963, when the British Invasion caused a sea change in rock music, several former rockers reinvented themselves as country singers. Johnny Cash, Conway Twitty, Jerry Lee Lewis, and even Elvis tapped into the twangy side of their Deep South artistic influences and became country musicians. But in changing their style, they also changed their market: their songs became much more past-oriented and nostalgic.

Growing up, I listened to songs like Cash’s “Tennessee Flat Top Box,” the Statlers’ “Do You Remember These,” and Johnny Lee’s “Cherokee Fiddle,” all of which yearned for a past uncluttered by responsibility, noise, and haste. Country music’s founders sang about working-class rural living—and, in the alt-country enclaves, many still do. But after 1963, naïve yearning for bygone simplicity dominated the radio mainstream.

Even before generative AI, the system reinforced itself. Longing for a simplified past became the soil where new artists grew. Neotraditionalists like Dwight Yoakam, Mary Chapin Carpenter, and Toby Keith built careers around pretending to be older than they were, and wishing they lived in a countrified candyland that only existed in songwriters’ imaginations. Much as I love Johnny and Dwight, their nostalgia became a trap I needed to escape.

In that regard, Breaking Rust is the inevitable outcome. Country music, which began life as White soul music, has been a nostalgia factory since before I was born. Moving out of my father’s house, I had to unlearn the genre’s myths of honest work, stoicism, and White hardship. Other fans chose not to unlearn it, and Breaking Rust has doubled down, becoming the perfect recursive loop of self-inflicted White persecution.

Wednesday, March 19, 2025

Chatterbox Jazz and the Victim Complex, Part Two

This essay is a follow-up to Chatterbox Jazz and the White Victim Complex
Another angle on the entrance to the Chatterbox Jazz Club, which only a complete doofus would mistake for apolitical. (source)

I can’t help considering the parallels, and the lack of parallels, between Elise Hensley, who videoed herself getting ejected from the Chatterbox Jazz Club, and George Floyd. To reiterate, Hensley almost certainly recorded her expulsion deliberately, hoping to cultivate the impression of herself as an oppressed minority. But so far, the explosion of outrage she expected hasn’t arisen. It’s worth taking some time to consider why.

Hensley’s video and Darnella Frazier’s recording of George Floyd’s death might seem superficially similar to chronically online denizens. Both filmed on cellphone cameras, these videos show what their respective target audiences consider an injustice. But the online outrage machine flourishes on such displays of false equivalency. Hensley’s staged confrontation, and George Floyd’s unplanned murder, only resemble one another to lazy media consumers.

To exactly such lazy consumers, the sequence appears thusly: somebody distributed video of an injustice in progress. Millions of Americans were outraged. Protesters filled the streets. Ta-dah! We see similar reasoning in the hundreds of January 6th, 2021, rioters who live-streamed their push into the Capitol Building, invoking metaphors of Civil War and 1776: they thought simply seeing provocative media created public sentiment.

This bespeaks a specific attitude, not toward current events, but toward media. Lazy consumers see events not as events, but as content, and information distribution not as journalism, but as content creation. Functionally, Hensley doesn’t elevate herself to George Floyd’s level; she lowers George Floyd to her level. The spontaneous recording of an actual crime in progress becomes neither better nor worse than her forced confrontation with a queer bartender.

Let me emphasize, this isn’t merely a conservative phenomenon. I’ve struggled to follow political TikTok because, Left and Right alike, it mostly consists of homebrew “journalists” either repeating somebody else’s breaking reports, or shouting angrily at like-minded believers from their car or bedroom. The read-write internet has expanded citizens’ speaking capacity to, hypothetically, infinity, depending on server space. But it’s created little new information.

But conservatives, especially White conservatives, receive one key point differently. They see stories of injustice multiply rapidly and gain mainstream attention, and they believe the media creates the martyrs. If martyrdom happens when cameras capture injustice, rather than when humans or institutions perform injustice, then anybody with media technology could recreate the martyrdom process. Anybody could, with a 5G connection, become a martyr.

Such lack of media literacy travels hand-in-hand with the inability to distinguish between forms of injustice. Hensley’s description of her ejection as “discrimination” suggests she thinks herself equal to Black Americans denied service at the Woolworth’s lunch counter in the 1950s. By extension, it suggests her MAGA hat equals organized resistance to injustice. She can’t see the difference, and hopes you can’t, either.

When all news is media manipulation, in other words, then all injustice, no matter how severe, no matter how authentic, becomes equal. Hensley can’t distinguish her own inconvenience from George Floyd’s death—or at least, she expects that others can’t distinguish. The meaninglessness of Hensley’s public stand, as nobody has rallied around her faux injustice, reveals that media manipulation isn’t the same as reality, and some people still can tell.

One recalls the occasional online furor surrounding some doofus who just discovered that “Born in the U.S.A.” isn’t a patriotic song, “Hallelujah” isn’t a Christmas song, and punk rock is political. These people aren’t stupid, despite the inevitable social media pile-on. Rather, these people consume all media, from music to movies to news, passively. Under those conditions, everything becomes equal, and everything becomes small.

Did Elise Hensley seriously believe herself a martyr, surviving a moment of bigoted injustice? Well, only God can judge the contents of her heart. But she evidently hoped other people would believe it, and throw their support behind her. Some evidently did, although the fervor has mostly sputtered. Without the jolt of authenticity, her media manipulation stunt gathered scarce momentum, and seems likely to disappear with the 24-hour news cycle.

The whole “fake news” phenomenon, which pundits say might’ve helped Trump into the presidency twice, relies upon the same action that Hensley attempted, mimicking real events under controlled conditions. But, like Hensley, it mostly failed to fuel real action. It might’ve helped calcify political views among people already inclined toward extreme partisan beliefs, but like Hensley, most “fake news” produced meaningless nine-day wonders.

If I’m right in my interpretation, media consumers are growing weary of manufactured outrage. The next stage will probably be performative cynicism, which is hardly better, but will be at least less terrifying.

Friday, March 14, 2025

How To Invent a Fake Pop Culture

I don’t recall when I first heard the song “Sally Go ’Round the Roses.” I know I first heard Pentangle’s folk singalong arrangement, not the Jaynetts’ Motown-tinged original. Like most listeners my age, who grew up with the mythology of Baby Boomer cultural innovation, I received that generation’s music out of sequence; the 1960s appeared like a single unit, without the history of cultural evolution that defined the decade.

Therefore I didn’t understand how influential the Jaynetts’ original version really was. Its syncopated backbeat, gated distortion effects, and enigmatic lyrics were, in 1963, completely innovative. The British Invasion hadn’t hit America yet, with the inventive tweaks that the Beatles and the Kinks experimented with. The original label, Tuff, reportedly hated the song until another label tried to purchase it, causing Tuff to rush-release the record.

Eventually, the track hit number two on the Billboard Hot 100 chart. More important for our purposes, though, a loose collective of San Francisco-based musicians embraced it. Grace Slick recorded a rambling, psychedelic cover with her first band, The Great Society, and tried to recreate its impact with classic Jefferson Airplane tracks like “White Rabbit” and “Somebody To Love.” Much of her career involved trying to recapture that initial rush.

Once one understands that “Sally” came first, its influence becomes audible in other Summer of Love artists, including the Grateful Dead, Creedence Clearwater Revival, Moby Grape, and Big Brother and the Holding Company. These acts all strove to sound loopy and syncopated, and favored lyrics that admitted of multiple interpretations. Much of the “San Francisco Sound” of 1966 to 1973 consisted of riffs and jams on the “Sally” motif.

That’s why it staggered me recently when I discovered that the Jaynetts didn’t exist. Tuff producer Abner Spector crafted “Sally” with two in-house songwriters, an arranger who played most of the instruments, and a roster of contract singers, mostly young Black women. The in-house creative team played around and experimented until they created the song. It didn’t arise from struggling musicians road-testing new material for live audiences.

Grace Slick around 1966, the year she covered “Sally Go ’Round the Roses” with the Great Society

A New York-based studio pushed this song out of its assembly-line production system, and it became a hit. Like other bands invented for the studio, including the Monkees and the Grass Roots, the Jaynetts didn’t pay their dues, the studio system willed them into existence. They produced one orphan hit, which somehow travelled across America to create a sound-alike subculture, back when starving musicians could afford San Francisco rent.

Culture corporations, such as the Big Three labels which produce most of America’s pop music, and the Big Five studios which produce most of America’s movies, love to pretend they respond to culture. If lukewarm dribble like The Chainsmokers dominates the Hot 100, labels and radio conglomerates cover their asses by claiming they’re giving the customers what they want. Audiences decide what becomes hits; corporations only produce the product.

But “Sally’s” influence contradicts that claim. Artists respond to what they hear, and when music labels, radio, and Spotify can throttle what gets heard, artists’ ability to create is highly conditional. One recalls, for instance, that journalist Nik Cohn basically lied White disco culture into existence. Likewise, it’s questionable whether Valley Girl culture even existed before Frank and Moon Zappa riffed in Frank’s home studio.

It isn’t only that moneyed interests decide which artists get to record—a seamy but unsurprising reality. Rather, studios create artists in the studio, skimming past the countless ambitious acts playing innumerable bar and club dates while hoping for their breakthrough. This not only saves the difficulty of having to go comparison shopping for new talent, but also results in corporations wholly owning culture as subsidiaries of their brand names.

I’ve used music as my yardstick simply because discovering the Jaynetts didn’t exist rattled me recently. But we could extend this argument to multiple artistic forms. How many filmmakers like Kevin Smith, or authors like Hugh Howey, might exist out there, cranking out top-quality innovative art, hoping to become the next fluke success? And how many will quit and get day jobs because the corporations turned inward for talent?

Corporate distribution and its amplifying influence have good and bad effects. One cannot imagine seismic cultural forces like the Beatles without corporations pressing and distributing their records. But hearing Beatles records became a substitute for live music, like mimicking the Jaynetts became a substitute for inventing new culture. The result is the same: “culture” is what corporations sell, not what artists and audiences create together.

Sunday, February 9, 2025

One Foot in Bakersfield, One Foot in the Future

Dwight Yoakam, Brighter Days

Dwight Yoakam’s best music, especially his hits from nearly forty years ago, has always striven to make him sound older than he really is. His 1986 breakthrough single, “Honky Tonk Man,” was a cover of a 1956 Johnny Horton barn-burner, and his best work always strove to sound like Bakersfield, 1960-ish. But sounding older means one thing when you’re thirty; how does he maintain that strategy now at 68?

This, Yoakam’s sixteenth studio album, is emphatically not timeless. Yoakam stages a deliberate callback to his own neo-traditionalist roots, appealing to those who think history has gone badly and want to fix its errors. He channels the twangy, backbeat-heavy Central Valley country music that made him famous. In doing so, he attracts an audience who probably shares my assessment that country music went cockeyed somewhere around 1996.

Despite this, Yoakam avoids the modish cynicism that often accompanies older artists recording nostalgia bait. He’s remarkably optimistic, even on the more melancholy tracks, his expressive sadness often transitory. Perhaps this reflects the two pulls on Yoakam’s artistry: he’s always been artistically (and often ideologically) conservative. But since last recording, he became a father for the first time, at a sprightly 63.

So his sound remains retro, but he has a pointed hope for the future. The album opener, “Wide Open Heart,” has the aggressive chomping chords that made both country and rock sound so distinctive in Southern California in 1960. But it’s a love song, full of “She’s all mine to love” and “Come on let’s get it done.” Except it’s love for his carefully restored, chrome-plated street racer.

Because yeah, Dwight’s old, but he’s young enough to care. This album brims with red-hot emotions for whatever gives Dwight hope enough to keep moving forward. Many songs seem dedicated to his wife, Emily. (Despite several high-profile relationships in the 1990s, he never married until 2020.) Others are dedicated to music and touring; a cover of the Carter Family classic “Keep On the Sunny Side” offers an ambiguous nod to spirituality.

Dwight Yoakam

The sound remains retro, certainly. He loves crunchy acoustic-electric guitars supplemented with a heavy marmalade of Hammond organ, sounding like something the Wrecking Crew would’ve mass-produced sixty years ago. Most songs maintain a steady 4/4 or 6/8 time; you could line-dance to even the album’s slowest, most navel-gazing tracks. “Can’t Be Wrong” opens with almost the same chords Yoakam used on “Please Please Baby” in 1987.

Yet notwithstanding that conservatism, Dwight sees a brighter future. “A Dream That Never Ends,” with its Laurel Canyon vocal harmonies, implies, despite its title, that love isn’t infinite, and somebody might leave—but he insists he’ll keep believing anyway. “If Only” dreams of what could happen if we shed our carefully constructed cynicism, and includes the eminently quotable line: “If only you’d choose love, love would choose you.”

“California Sky” is perhaps Yoakam’s most thoroughly engineered track here, with its Tex-Mex guitars and slight nod to fatalism. “Hand Me Down Heart” is exactly what you’d expect from the title, a lament for the suffering he’s previously endured. But instead of surrendering to despair, he presents his heart as something capable of healing, worthy of redemption. Second chances are real in Yoakam’s world.

You might mistake Yoakam’s title track for a love song, with its unifying Tom T. Hall riff, but he wrote it for his son, who closes it with half-gibberish lyrics. “Brighter days are what you promised me,” he sings: optimistic, but with implied consequences if the promise falls through. “Bound Away” laments the touring musician’s life, like CCR’s “Travelin’ Band,” but with the recognition that “I’m trying to come home, I’m trying to land.”

At nearly an hour, this album runs almost twice conventional LP length. Despite the vinyl revival, Yoakam knows he’ll mostly be streamed or downloaded, and isn’t circumscribed by physical limits. This gives him freedom to play generously with composition and arrangement. Though he doesn’t do anything revolutionary—no Mike Oldfield half-hour experimentation here—he does plumb the full depth of his conservative ethos.

Listening to this album, we’re aware we’re hearing an artist who hasn’t had a Top-40 hit since 2000, and no breakout smash since 1993. His entire career is a nostalgia circuit for aging fans who need reminding why we embraced him so aggressively nearly forty years ago. Yet within that limit, Yoakam expresses an optimism his 1990s recordings sometimes forgot. Dwight’s old, yes, and so am I, but he reminds me that we both still have a future.

Friday, September 6, 2024

An Oasis in the Desert of Reality, Part Two

The original Oasis lineup from 1994, as per NME
This essay is a follow-up to An Oasis in the Desert of Reality

The recently announced Oasis reunion was followed almost immediately by something that’s become widespread in today’s music industry: price gouging. Euphemistically termed “dynamic pricing,” the online structure lets ticket outlets jack up prices commensurate with surging demand. This means that, as tickets became available for the paltry seventeen dates around the UK and Ireland, computers hiked prices to £350 ($460) per ticket, far beyond their working-class audience’s budgets.
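
Mechanically, there’s no mystery here. A minimal Python sketch of the idea, with made-up numbers (my own toy model, emphatically not Ticketmaster’s actual algorithm), looks something like this:

def dynamic_price(base_price, demand, supply, cap=10.0):
    # Toy "dynamic pricing": scale face value by the demand/supply
    # ratio, capped at some multiple. Purely illustrative.
    ratio = max(demand / max(supply, 1), 1.0)
    return round(base_price * min(ratio, cap), 2)

# Hypothetical figures: a £75 face-value ticket, with 1.4 million fans
# chasing 300,000 seats, yields the £350 fans actually saw.
print(dynamic_price(75.0, demand=1_400_000, supply=300_000))  # 350.0

The point isn’t the formula; it’s that nothing in the formula answers to fairness. The cap is whatever the seller decides the market will bear.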

American audiences could summarize the exorbitant prices with two words: Taylor Swift. Tickets for Swift’s Eras Tour originally sold for $49. Quickly, however, a combination of forces, including the Ticketmaster/LiveNation merger, unauthorized resale, and limited access, bloated prices to over $4000 in some markets. Swift’s mostly young, mostly female audience base obviously can’t afford such numbers. In both cases, predatory marketing ensured that the fans best positioned to appreciate the respective artists could least afford access.

But both acts share another characteristic: a monolithic grip on their markets. Such a grip perhaps made sense when the Beatles upended the music industry in 1963 and 1964, when fewer media options ensured a more homogenous market. But advances in technology have granted listeners access to more radio stations, innumerable streaming services with nigh-infinite channels, and more opportunities to share music with formerly niche fandoms.

Yet increased diversity produces a paradoxical outcome, embodied in the crushing demand for limited tickets. As listeners have access to more artists, more styles, and the entire history of recorded music, demand has concentrated on a smaller number of artists. I’ve noted this trend before: as the buy-in to create and distribute music has become more affordable, the actual number-one position on the Billboard Hot 100 has become less diverse.

Some of this reflects the way conglomerate corporations throttle access. Sure, entrepreneurial artists can record music in their bedrooms at higher quality than the Beatles dreamed possible, and distribute it worldwide almost for free. But approximately half the music market today travels through one outlet, Spotify. And, as Giblin and Doctorow write, Spotify’s algorithm actively steers listeners away from indie, entrepreneurial, and otherwise non-corporate options.

Taylor Swift in 2023

Three corporations—Sony, Universal, and Warner—currently control over eighty percent of the recorded music market. That’s down from four corporations controlling two-thirds of the market in 2012, the year Universal absorbed EMI. (EMI, in turn, owned Parlophone, the label which published the Beatles.) Without support from the Big Three, artists have limited access to publicity, distribution, or the payola necessary to ascend Spotify’s ladder.

Aspiring musicians might, through good business management, make a middle-class living through constant touring and indie distribution. But without a major-label contract, musicians can’t hope for basic amenities like, say, a full weekend off, much less enough pay to buy a house. And with only three corporate overlords controlling the supposedly large number of major labels, musicians will always have a severe disadvantage.

Sure, the occasional musician might beat the odds and challenge the Big Three oligarchy. Taylor Swift notoriously forced both Spotify and Apple Music to pay overdue royalties to backlist artists a decade ago by withholding her lucrative catalog. But most musicians, even successful artists with chart hits, can’t expect to ever have such influence. Citing Giblin and Doctorow again, many artists with chart hits still bleed money in today’s near-monopoly market.

This structure encourages blandness and conformity, from both artists and audiences. Oasis sounded new and different thirty years ago, but they’re now comfort listening for a middle-aged, middle-income British audience. Taylor Swift samples styles and influences so quickly that she’s become a one-stop buffet with something to please (or displease) everybody. Both artists give audiences what they expect to hear—assuming they can hear anything over the stadium crowds.

Collective problems require collective solutions. Start by supporting politicians who enforce existing antitrust and anti-monopoly laws—and not supporting the rich presidential candidate who brags about not paying contractors. But we also have private solutions, starting with attending concerts and buying merch from local and indie artists who reinvest their revenue into their communities and stagehands. Tricky for some, yes, as most music venues are bars.

The problem isn’t hypothetical; we have changed in response to a diminished market. As Spotify, LiveNation, and the Big Three have throttled the music industry, we listeners have responded by accepting diminished standards, and consuming blander art. Though we have the illusion of choice, we allow the corporations to channel us into a less diverse market, and buy their pre-screened art. Independence is neither cheap nor easy, but it’s urgently necessary.

Wednesday, September 4, 2024

An Oasis in the Desert of Reality

This photo of Liam and Noel Gallagher has accompanied the announced Oasis reunion tour

Perhaps my opinion on the announced Oasis reunion tour is distorted by my being an American who streams British media online. This makes Oasis seem both larger and smaller than they actually were: they planted twenty-four singles in the UK Top Ten, but only one in America. “Wonderwall,” obviously. Two other songs, “Don’t Look Back in Anger” and “Champagne Supernova,” have also had lingering afterlives on alternative radio.

Thus, although I appreciate the panicked rush to purchase reunion tickets, I can’t participate. Oasis represents a place and time, one in which I couldn’t participate because worldwide streaming scarcely existed before the Gallagher brothers stopped working together in 2009. I realize that lost era means something important to those who lived through it. However, the timing seems exceptionally pointed, knowing what I do now.

The Gallagher brothers announced their reunion tour as Oasis almost exactly thirty years after the release of their record-setting debut album, Definitely Maybe. Released on 29 August 1994*, this album outsold any British freshman album until that point, and all four singles went Gold or Platinum. Though Oasis wouldn’t have an American breakout until their second album, they didn’t need one; they already had Beatles-like acclaim on their first try.

Nostalgia vendors always seem to think things were Edenic approximately thirty years ago. As I grew up in America in the 1980s, popular media regaled me with giddy stories of how wonderful things were during the Eisenhower administration. TV series like Happy Days and MASH, or movies like Stand By Me and Back to the Future, though they commented on then-current events, nevertheless pitched a mythology of prelapsarian sock-hop ideals.

Importantly, these shows all spotlighted not how the 1950s were, but how the aging generation remembered them. MASH endeavored to humanize the horrors of war, during a time when Vietnam was still too current for commentary, but it did so in ways that only occasionally included any indigenous population. Happy Days was set in Milwaukee, Wisconsin, one of America’s most racially segregated cities, but featured almost no Black characters.

This whitewashed nostalgia reaches its apotheosis of silliness with Back to the Future, in the prom scene, where White Marty McFly shows a Black backup band how to really rock out. The maneuvers he tries might’ve looked shocking to the White kids on the dance floor, because they lived pre-Jimi Hendrix. But they were hardly new; Marty didn’t do anything Charley Patton and Son House hadn’t invented in the 1920s.

For Oasis to reunite almost exactly thirty years after they deluged British music puts them in the same position. The announced reunion lineup currently includes only the Gallagher brothers, omitting the revolving door of sidemen and rhythm sections that supported them for fifteen years. The band was contentious even before the brothers stopped working together; they never produced a dark horse, like George Harrison, to emerge from the wreckage.

I’ve written before that the nostalgia impulse produces the illusion of inevitability. Because events happened a certain way—because the Beatles hit number one, because Dr. King bested Bull Connor, because America beat the Soviets to the Moon—we believe things had to happen a certain way. This removes human agency from history, giving us false permission to go with the flow, like a dead salmon floating downstream.

Oasis hit the mainstage when John Major’s Tory government was walking wounded. Though Major had received a Parliamentary mandate in 1992, he was already deeply unpopular by 1994, helping Thatcherism limp timidly across history’s finish line. Like their beloved Beatles, who broke just as Alec Douglas-Home’s doomed premiership ushered the Tories out, Oasis arrived to supervise a Conservative collapse.

And now they’re reuniting as Rishi Sunak has nailed shut another Conservative coffin.

But history isn’t inevitable. Tony Blair’s nominally progressive government, like Bill Clinton’s, openly embraced moralism and militarism, before ending in the massive moral sellout of Operation Iraqi Freedom. Likewise, Keir Starmer is possibly the blandest person ever elected PM, winning only because the Tories squandered every advantage. Starmer, like the Oasis reunion, bespeaks a rare British optimism, without much of a plan.

I’d argue that most people don’t really want an Oasis reunion. I strongly doubt anyone wants to watch two White men pushing sixty warble about the troubles of 1994 onstage. Their largely British audience simply wants to time-travel to a moment when they didn’t have to know what Brexit was, who Boris Johnson is, or why a down payment on a London house is triple the average annual income.

*Out of respect, I use British dating conventions in this essay, and only this essay.

Friday, August 30, 2024

Some Overdue Thoughts on Neil Diamond

Neil Diamond in 1971, the year he released “I Am… I Said”

I started ragging on Neil Diamond’s 1971 top-five hit “I Am… I Said” years before I heard it. Despite its high Billboard ranking, it generally isn’t regarded among Diamond’s greatest hits—let’s acknowledge, it’s no “Solitary Man” or “Sweet Caroline.” It doesn’t get extensive classic rock radio airplay like others of Diamond’s peak career recordings. Even for many fans, it’s largely a cypher.

Therefore, when humorist Dave Barry made it a recurring theme to belittle Neil Diamond in general in the 1990s, and “I Am… I Said” particularly, I didn’t blink. I knew Barry’s mockery was exaggerated for comic effect, because no matter how earnestly over-written Diamond’s hits were, hell, the man still wrote “I’m a Believer” and “Cherry Cherry,” and I’ll fight you if those aren’t classics. But “I Am… I Said”? Surely radio programmers buried it on purpose.

Barry quoted Diamond’s lyrics, particularly the central hearing-impaired chair, extensively. He said nothing about Diamond’s music, his life, or the cultural context amidst which Diamond wrote. Barry simply threw out Diamond's refrain lyrics, which aren’t exactly Robert Frost. Without context, and especially without the more subdued stanzas surrounding the refrain, the lyrics looked bathetically ridiculous, like an Angora cat in the rain.

Superficially, I had no reason to believe Dave Barry wasn’t representing Neil Diamond accurately. If I’d thought more deeply, I would’ve realized Barry also pooh-poohed “Cracklin’ Rosie,” which is maybe a bit overproduced but seriously still slaps. Cool, rational thought might’ve told me that, if Barry disparaged a banger like “Cracklin’ Rosie,” maybe his representation of “I Am… I Said” wasn’t wholly reliable.

In my limited defense, I hadn’t turned twenty yet.

Years later, I finally heard the song. When my local radio station started playing the opening riff and first stanza, I clearly identified it as belonging to the 1970s, a decade when hippie utopianism began surrendering to ennui, age, and the realization that it required more than optimism to change the world. Though most artists didn’t record anything quite this melancholy until after 1973, it’s instantly recognizable as a product of its time.

More importantly, “I Am… I Said” is pretty good. It isn’t Diamond’s best, not in a career that produced classics like “Red Red Wine” and “Kentucky Woman,” but it’s a substantial glimpse into the psyche of a man facing his own age and mortality. The contrast between Diamond’s understated, more poetically complex stanzas and the ostentatious orchestra behind his choppy refrain presages later anthems to adult futility, like Nirvana’s “Smells Like Teen Spirit.”

Neil Diamond in 2018, the year his health forced him to retire from touring

I believed Dave Barry’s criticisms in the 1990s because I hadn’t yet heard Diamond’s song, and I presumed Barry represented the song accurately. I realized Barry, a humorist, might privilege the joke above facts. Yet in 1993, when one couldn’t check YouTube or Spotify to verify the source, I chose to assume Barry was essentially honest. I adopted Barry’s jokes as my own opinion, and repeated them for nearly thirty years.

We all sometimes adopt others’ opinions as our own. Nobody can possibly have encyclopedic knowledge of, say, climate science or presidential politics or big-ticket TV productions. We must trust scholars, critics, friends, and others. When that happens, we must obviously evaluate whether that person’s opinion is trustworthy enough. Is the scholar scholarly enough to be reliable? Has the movie reviewer seen enough movies?

Dave Barry is probably the funniest White person of my lifetime, a man who often extracted comedy from well-written descriptions of furniture. He commanded language to cultivate emotions in readers, without depending on voice and performance, a mark of somebody who thinks deeply about every word and phrase. Because he commanded written English with an ease I find enviable, I presumed Barry must’ve thought equally deeply about his subjects.

It never occurred to me that Barry might’ve misrepresented his subject, or omitted information that would’ve influenced my opinion, such as Diamond turning thirty, divorcing his high school sweetheart, or having little to show for his career. I trusted the evaluation of a critic who, it appears, was more invested in the joke than the facts. Barry’s take-down of Diamond’s lyrics remains hilarious, but frustratingly divorced from reality.

This forces me to ponder: what other untrustworthy “experts” have I trusted? As an ex-Republican, I certainly shouldn’t have trusted P.J. O’Rourke and Thomas Sowell, who influenced my early politics. My parents admitted the ideas they taught me were often informed by fear. Much of adulthood involves purging false teachings from untrustworthy mentors who concealed their agendas.

And that chair totally heard you, dude.

Thursday, April 18, 2024

Hold Onto Sixteen As Long As You Can

John Mellencamp

Classic rock radio stalwart John Mellencamp got an unwanted attention boost this week when a month-old video of him abandoning the stage went viral. Apparently Mellencamp paused to speak directly to his audience, something musicians frequently do, but an audience member heckled him to shut up and resume playing. An outraged Mellencamp quit playing partway through “Jack and Diane,” leaving an arena audience in the lurch.

Several sources, including Fox News, spun this event as Mellencamp feuding with the audience over politics. Like most “heartland rockers,” including Bruce Springsteen, Bob Seger, and John Fogerty, Mellencamp’s politics skew left. This should surprise nobody who’s listened to Mellencamp’s lyrics—but apparently, several audience stalwarts haven’t done so. Listeners are often gobsmacked to discover that their favorite heartland rockers are progressives, not merely celebrants of rural living.

This spotlights a growing rift between artists like Mellencamp and large swaths of their fans. We saw something similar when former New Jersey governor Chris Christie mentioned his love of Springsteen, and Springsteen responded by duetting with comedian Jimmy Fallon to mock Christie’s performance as governor. These rockers maintain the leftist, anti-establishment passions of their youth, while their audiences have become more conservative and revanchist.

Pop history tells us that “heartland rock” emerged in the middle 1970s: Springsteen’s first hit, “Born to Run,” hit the Billboard Top 40 in 1975, while Tom Petty’s first hit, “Breakdown,” barely creased the Top 40 in 1977. However, this ignores that neither artist developed legitimate star power until the 1980s. It also disregards Bob Seger and John Fogerty’s original band, Creedence Clearwater Revival, both of whom had their first hits in the 1960s.

Bruce Springsteen

From its origins, heartland rock bore a contradiction. Though its chief songwriters have pressed progressive politics and a disdain for capitalism into their lyrics, their musical stylings were persistently conservative. Fogerty deliberately channeled Delta blues and Memphis soul, while Petty’s sound grew, like Spanish moss, from the swampy slumgullion of influences in his inland northern Florida upbringing.

Thus, conservative audiences who don’t listen deeply have always thought their favorite heartland rockers spoke directly to them. The most famous example, of course, must be Ronald Reagan’s attempt to conscript Springsteen into his 1984 reelection campaign. But my personal favorite comes from TikTok. A whyte-boy in a backward ballcap and a pick-em-up truck shouts “Thank God my mom didn’t raise no f**king liberal!” before tearing off scream-singing with CCR’s “Fortunate Son.”

The complete failure to understand the left-leaning message in these lyrics might seem baffling, except that I once shared it. I’ve written about this before: listening to classic rock radio during my rebellious teenage years allowed me to consider myself forward-thinking because I engaged with the injustices of the Vietnam era. By pretending to care about injustice back then, I allowed myself to passively participate with injustices occurring right now.

There’s nothing innately conservative about consuming media shallowly, but in my experience, people who don’t parse for greater depth usually have conservative politics. Conservatives love surface-level readings. My lifelong Republican parents encouraged me to reject deeper textual analysis of literature, even though my high-school English teachers graded me on exactly that skill. Listening to classic rock at the surface level often rewards conservative readings of its time.

Heartland rockers were classic rock before the “classic rock radio” category was invented.

John Fogerty

Surviving heartland rockers like Mellencamp, Springsteen, and Melissa Etheridge continue recording, but they haven’t had Top 40 hits since the middle 1990s. Fogerty, who’s always had a contentious relationship with the recording industry, hasn’t meaningfully charted a single since 1985. Though they all continue touring, they’ve become oldies circuit staples, their concerts consisting primarily of songs first heard forty, fifty, or more years ago.

Like the artists themselves, their audiences have continued aging. The greasers and slicks who got energized for Springsteen’s fight against small-town malaise in 1975, now have mortgages, student debt, and children. Such material investments in the status quo encourage, if not principled conservatism, at least a desire to ensure they didn’t invest themselves in hot air. The audiences have grown away from the artists they admire.

Perhaps the most telling fact is whom these artists now influence. Jake Owen’s “I Was Jack (You Were Diane)” and Eric Church’s “Springsteen” were massive country hits, channeling the artists they name-dropped. But both songs reduce their tribute subjects to mere nostalgia for whyte audiences. These artists, now in their seventies, have become the thing their teenage selves rebelled against. There’s no coming back from that.

Wednesday, February 14, 2024

In Dispraise of “Originality”

Jimmy Page (with guitar) and Robert Plant of Led Zeppelin

“You mean other people are allowed to use and repurpose music that already exists?” Sarah exclaimed, eyes wide and jaw dropped. “When I took the composition class in college, they insisted I had to invent my music out of whole cloth!”

I’ve forgotten how we reached the topic—casual conversation is frequently winding and byzantine. But I’d mentioned the multiple lawsuits surrounding Led Zeppelin, who have costly judgements against them for appropriating works by Black American songwriters, making fiddling changes, and slapping their own bylines on them. I’d offered the likely explanation: there’s a long blues tradition of songwriting by jamming around existing songs until a new song emerges.

This left Sarah flabbergasted. “My professor so thoroughly insisted on complete originality that she demanded we start composing with random notes, and building the piece around that.” I could completely believe that, too. Having attended our college’s new music showcase a few times, I remembered the preponderance of discordant, atonal music. I thought every undergraduate considered themselves another John Cage. Turns out the professor liked that effect.

Sarah felt faux outrage at the injustice of having been told that every composition had to be original. (Okay, “outrage” is overselling it. But definitely astonishment.) Yet classical and orchestral composers, trained in frequently grueling college and conservatory programs, have an ethos of complete originality drilled into them persistently, while working songwriters, crafting genres people actually pay money to hear, pinch and repurpose existing themes regularly.

Bob Dylan described his early songwriting style as “love and theft.” His earliest recordings show how frequently he lightly reworked existing Woody Guthrie or Dave Van Ronk tunes. Only with years of experience did he develop his own songwriting voice. Lennon and McCartney are among the bestselling songwriters ever, yet three of the Beatles’ first four albums are half cover songs, because the Beatles hadn’t found their voices yet.

I have no songwriting experience; I’m about as musical as a steel anchor. Yet in college creative writing and playwriting classes, my textbooks espoused a similar ethos of complete originality. I remember one textbook pooh-poohing genre fiction as a “guided tour” of existing repurposed themes, while “literary” fiction always strives to be completely original. Don’t be like those popular paperback writers, the textbook urged; always create something new.

Our professor smiled ruefully and reminded us that textbook authors have their blind spots, too.

The Beatles, photographed at the peak of their star power

Literary authors and playwrights mimic one another relentlessly, and their genres are intensely fad-driven. As a playwright, it took me years to shed David Mamet’s influence, like songwriters struggle to differentiate themselves from Dylan. Most college-educated American writers pass through their John Steinbeck, Elmore Leonard, and Toni Morrison phases before achieving distinct voices. The lucky few see those exercises published.

Originality emerges in art, where it does, only gradually. Both Salvador Dalí and Pablo Picasso, famous for nonconforming paintings, began their careers with Renaissance-style portraits and church scenes. Jackson Pollock tried several techniques before uncovering his dribbling, wholly non-objective Abstract Expressionist style. Importantly, all these artists were disparaged when their approaches first appeared; they achieved acclaim only later, sometimes posthumously.

Yet even incidental mimicry draws ire. Returning to music: former Beatle George Harrison’s signature hit, “My Sweet Lord,” made his solo career. Yet within months of release, lawyers fired off a lawsuit because it resembled the Chiffons’ forgettable 1963 hit “He’s So Fine.” That lawsuit commenced in 1971, and wasn’t wholly resolved until 1998, dominating his solo career, and rendering him timid as a songwriter forever after.

This trend achieved its culmination with the “Blurred Lines” lawsuit. Heirs of Marvin Gaye claimed the songwriters behind Robin Thicke’s icky 2013 hit stole Gaye’s “groove.” That is, they claimed the song resembled, not something Marvin Gaye wrote, but something Marvin Gaye could have written, and therefore was plagiarized. And they won. This sets a courtroom precedent that simply imitating venerable artists, even while creating wholly new art, is plagiarism.

No wonder Sarah’s college composition professor (now retired) favored originality over tone. Instructors, textbook authors, and now courts demand that artists constantly reinvent the wheel. Blues icons jamming in some underlit cellar are now plagiarists, not artists. Don’t build your next track around a riff from “Crossroads” or “John the Revelator,” boys, the boundaries of ownership are set!

Artists aren’t unique individuals; they’re a community of give-and-take, constantly improving one another’s raw material. Yet the ownership ethos demands nobody pinch from anybody, even incidentally. The mere fact that working artists have never actually worked that way doesn’t change the story.

Wednesday, February 7, 2024

Some Parting Thoughts on Toby Keith

Toby Keith as he appeared at his debut, with that signature icky 1990s mullet

When Toby Keith’s debut single, “Should’ve Been a Cowboy,” raced to #1 on the Billboard country charts in 1993, I still listened to country radio. I hadn’t grown jaded on the peppy country-pop hybrid that would overtake mainstream country music in the 1990s, an overtaking that Keith helped facilitate. Therefore, I heard it go into regular rotation, as country disc jockeys praised Keith for tapping into the country-music zeitgeist.

“Should’ve Been a Cowboy” dropped when Keith was 31 years old. That’s older than most aspiring musicians get before they quit, disgusted with dead-end opportunities and industry gatekeepers. It’s also remarkably old to debut in country music. Despite its middle-aged conservatism, since the 1990s, country music has notably disdained artists past forty. America teems with musicians every bit as competent and inventive as Johnny Cash who quit because they needed groceries.

Despite being only nineteen myself, I recognized the sentiment dominating “Should’ve Been a Cowboy.” Keith’s surface-level themes aren’t exactly concealed: his life lacks the spark that would’ve been present had he lived in another time and place. I appreciated the sentiment as, in 1993, I struggled with meaningless jobs while living in a Western Nebraska town that celebrated its cattle-drive-era heyday. It’s impossible to ignore the past’s alluring appeal.

However, I also recognized something below Keith’s surface-level themes. Despite longing to be a “cowboy,” his song never mentions the workaday tedium of cowpunching. Instead, he cites Gunsmoke, The Lone Ranger, and the classic “Singing Cowboys,” Gene Autry and Roy Rogers. He wants to romance women, chase outlaws, and sing around the campfire. He presents a cowboy mythology completely devoid of actual cattle work.

In 1993, I lacked the vocabulary to explain something that I innately understood, but would only verbalize years later: the legendary Wild West didn’t exist. American culture celebrated cowboys only after they were dead, inventing fine-sounding fables about heroism, hard work, and gun-barrel justice. Owen Wister’s The Virginian, the novel which defined the Western genre, begins with a prelude lamenting that cowboys, like chivalric knights, now remain only in memory.

So in extolling cowboy goodness, Toby Keith yearned to time-travel to an era that existed only in paperback novels and Hollywood fantasies. He wanted a life with the dreary bits removed, with moral ambiguity excised, with heroes and villains clearly demarcated by the color of their Stetson hats. I don’t say this unsympathetically: Keith, a former oil derrick worker, understood intimately how modern labor strips life of meaning.

Toby Keith as he appeared in 2023, thinned by the cancer that eventually killed him

Yet the yearning for moral clarity and escapism in “Should’ve Been a Cowboy” eventually overtook Keith’s work. Through the 1990s and into the new century, Keith released middle-of-the-road Nashville fare like “How Do You Like Me Now?” and “I Wanna Talk About Me,” songs that were okay and did well on the charts. But he struggled to find his unique voice. This matters especially since, unlike other controversial Nashville artists, Keith wrote his own material.

He finally found his metier after September 11, 2001. That’s when he released the songs likely to define his legacy: “Courtesy of the Red, White, and Blue (The Angry American)” and “Beer For My Horses.” The former, known colloquially as “the Boot in the Ass song,” became an anthem of pro-war Americans during Bush’s War on Terror. The latter is a cartoonish pro-lynching song extolling cowboy myths of civilian justice.

That’s when the cosplay cowboy ethos behind Keith’s breakout single finally consumed him. No longer satisfied yearning for a past that never existed, Keith dropped himself narratively into America’s moral conflicts. Backed by Nashville’s multi-million-dollar publicity machine, he pretended to be a sandbox soldier and civilian justice-bringer. He traveled the lucrative arena circuit whipping audiences to think likewise.

Keith’s musical persona embraced absolutes. He favored “this country that I love” and inveighed “against evil forces.” He never explained exactly how he identified evil forces, except that they didn’t love America like him. In Keith’s world, apparently, evil is as evil does. Morality equated to conformity, pro-Americanism, and buying into the official state narrative. His cosplay righteousness wasn’t confined to the past anymore; it now played out in the present.

But the shine eventually wore off the War on Terror. As America abandoned absolutism and relearned that difficult situations deserved more nuanced treatment, Keith stopped making hits. His songs remain popular with the flag-waving crowd, but he last creased country’s Billboard top twenty in 2012. His final years were characterized by silliness like “Red Solo Cup.” Country music moved on, leaving his absolutism behind. So, I hope, did the country.

Friday, November 17, 2023

Meg Myers Speaks a Cold and Distant Truth

Meg Myers, TZIA

I needed longer than usual to embrace Meg Myers’ third LP-length album, not because of the music, but because of her amended image. Her previous albums foregrounded her beauty, but in ways that subverted White Euro-American standards. Her redesign into a strange, Star Trek-like dominatrix seemed too abrupt. Then somebody reminded me of David Bowie’s Diamond Dogs album, with its body horror-influenced art, and I finally glimpsed Myers’ intent.

Like Bowie, Myers has apparently decided to periodically reinvent herself to ensure that she, and her audience, never become complacent. This new image accompanies Myers’ rejection of the “Big Sad” character she’s previously played. This album contains several songs explicitly declaring how she’s no longer beholden to the demons from her past. Which is personally empowering, sure; but as art, this album feels more like a TED Talk than music.

Several tracks have lyrics so declarative, I can only call them thesis statements. Lines like “I know the truth is inside of me, I hold the key” (from “A New Society”) or “A call for all the people, Who stand for what is right, From different places, We all unite” (from “Sophia <144>”) bespeak the energy Myers wants to convey. She’s no longer content describing her pains from a personal, introspective angle. She’d rather unify listeners in rebellion against the conditions that made those pains possible.

This puts me, the listener, in an awkward position. I respect the hippie-esque protest anthem motivation. Pop music has a long history of demanding the world do better, that it show more respect to those most abused by our culture and economy. Many such anthems, written in a very square 4/4 time, are perfect for marching on public squares and national monuments. Myers clearly wants to create a pop-art manifesto for a post-Me-Too world.

Yet something feels missing. Most tracks have a synth-driven background with a programmed percussion track—the personnel list names a human drummer on only two songs. This results in hypnotic, looping rhythms on most songs, like a heavier “Hearts of Space” trance. Classic protest songs like “Peace Train” or “Fortunate Son” shared an important quality: audiences could sing along. That’s far harder here.

Meg Myers

Myers’ thesis statements are well-grounded, mostly. She decries the ways culture moralistically controls women’s sexuality, while ironically foregrounding sex, with lines like “Victimized, I’ve been tied to bedposts” (from “Me”). She excoriates the ways women, including herself, manage men’s emotions for so long that they become deaf to their own needs, in “My Mirror.” The song “Searching For the Truth” begins with the self-explanatory lines:

Everybody’s hiding from their fears
Spinning in their cycles all alone
With a hand over one eye
Disconnected pieces of a whole

I appreciate these messages, which would arguably make good stump speeches. But since Myers tells us how to receive her songs directly in the lyrics, and we’d struggle to sing along with her trance-inducing rhythms, I struggle to understand why she wrote them as songs. She isn’t inviting us listeners on a journey; she’s lecturing us from her hard-won experience. Basically, she’s channeling her inner indie-pop Rebecca Solnit.

As a result, this album’s most intensely felt song is probably the only one she didn’t co-write. When I saw the title “Numb” on the track listing, I assumed she’d re-recorded her own song of the same title. Nope, she’s covered Linkin Park’s icky 2003 hate-lust anthem, possibly on a dare. Her understated arrangement here serves her message, as a synth drone and Myers herself on harp create a disconnected, ethereal soundscape. The contrast with the original version is palpable.

In the decade since her first EP, Myers has reinvented herself constantly. Among other things, she’s shaved her head after each album tour. She’s given conflicting reviews of her earliest recordings, sometimes claiming she was constrained and controlled, other times claiming her collaborations with Andrew Rosen and Atlantic Records brought her to technical musical maturity. Maybe that explains this album’s line: “It’s time to give yourself all of the love you’ve been missing.”

Despite what I’ve said, this album does have admirable songs. Tracks like “Bluebird” and “Waste of Confetti” drop the lecturing tone and instead invite listeners on Myers’ unique journey. But they don’t come together to create an album the way her previous two LPs did. Perhaps this is a transitional album. I’ve previously felt drawn to Meg Myers’ personal, confessional lyric style. Sadly, it feels like she’s now holding us at arm’s length.

Friday, November 3, 2023

The Beatles and the Past That Never Ends

The Beatles, photographed at the peak of their star power

When the surviving Beatles released two new songs in 1995, “Free as a Bird” and “Real Love,” built around John Lennon’s home demos, I was giddy. I bought the CDs promptly and listened to them repeatedly, with the devotion a better-connected audiophile might’ve dedicated to finding a lost 78 by Billie Holiday. I mean, okay, the Beatles were never going to tour again, and these two tracks were in-studio novelties. But c’mon, man, it was the Beatles!

This week, a third and final post-breakup track dropped, “Now and Then.” This time, instead of whirling anticipation, I felt a gut-clench of dread. The landscape has changed since 1995. Just as important, though, I have changed. Yet despite my trepidation, I listened when the track dropped. Of course I did; millions of people worldwide listened simultaneously. And while the overall response has been positive, I heard the new track and felt… nothing.

“Now and Then” reads like a sweeping homage to everything the Beatles recorded post-Sgt. Pepper. The combination of psychedelic guitars, lush strings, and tight vocal harmonies, all reflect the forces driving the Beatles’ late-stage sound. It’s a perfect Beatles encomium—too perfect. The track sounds like something a particularly skillful Beatles tribute band might’ve composed after too many all-night benders, striving to unlock that elusive Lennon-McCartney magic.

I’ve written before about my efforts to become a baby boomer. I styled myself according to hippie peacenik conventions, listened to boomer rock, and told anyone listening how much I disdained the sounds of my generation. I imagined myself quite the activist, fighting the battles of 1968 with committed aplomb. The battles of my generation? Meh. By committing myself to the causes and culture of my parents’ youth, I convinced myself of my innate goodness.

Classic rock radio was the mass-media manifestation of this commitment. By hearing the great songs and great songwriters of the 1960s and 1970s repeated constantly, with new music never intruding upon my consciousness, I convinced myself that I was a branded soldier for civil rights. “Branded” turns out to have been right, too. Only years later would I realize how thoroughly classic rock radio curated a well-scrubbed, anodyne version of that generation.

America has over 2,000 “classic rock,” “classic hits,” and “oldies” stations. Together, that’s more than any format except country music, and all three formats play the Beatles at least occasionally. Both separately and together, the Beatles continue to steer the sounds of Anglophone pop culture, their repertoire ransacked by countless rock and pop bands, their style mimicked by acts eager to recapture their mojo. All these acts sound distressingly the same.

Since the Telecommunications Act of 1996 loosened ownership limits, radio has become monolithic. Three conglomerates control almost eighty percent of America’s radio broadcasting: iHeartMedia (formerly Clear Channel), Audacy, and Cumulus. Likewise, three conglomerates dominate the recording industry, and three conglomerates control music publishing. Even these conglomerates hardly matter, though, since half of all music gets heard through one outlet: Spotify.

In other words, the music industry has become less diverse, more concentrated, and less receptive to public taste since the Beatles’ last new recordings in 1995. While the occasional Justin Bieber squeaks through, recording home cover versions for YouTube, most hitmakers’ careers are tightly controlled. Olivia Rodrigo and Selena Gomez, who appeared to emerge from nowhere, were cultivated by Disney for years before their breakthroughs.

In past generations, most music executives were musicians. George Martin, who produced every Beatles recording except the final three, was a jazz keyboardist who released several sides which went nowhere. Nowadays, music executives are bean counters with MBAs, monumentally risk-averse and beholden to what’s worked before. Therefore, today’s new releases persistently sound identical. New rock music is risky; let listeners re-hear the same “classics” their parents and grandparents loved.

From this milieu emerges a Beatles recording that sounds like a perfect amalgamation of everything the Fab Four recorded after 1966. And yes, it’s indeed perfect. It sounds exactly like what the band created in those notorious overnight sessions after they’d stopped touring and begun investing full-time in their passion projects. Beatles fans like me should gobble it up; most will.

Yet I feel cold. George Harrison died twenty-two years ago, and John Lennon has been dead longer than he was alive, but their bandmates, and their label, won’t let them rest. The demand to produce a capstone song nobody knew was missing, sixty years after the band’s heyday, has resulted in a paint-by-numbers product. Some things are just supposed to end.

Monday, July 24, 2023

Jason Aldean and the Nashville Outrage Machine

A still from Jason Aldean’s “Try That in a Small Town” video

Most mainstream country music artists don’t write their own songs. I didn’t know this, growing up listening to Nashville’s Finest. While some artists, like Willie Nelson and Loretta Lynn, write much of their own material, most mainstream country is written by contract songwriters, much as most rock music was written by Brill Building songwriters before the Beatles popularized the idea that rockers should write their own material.

This includes Jason Aldean, whose latest lukewarm yodel, “Try That in a Small Town,” polarized music fans recently. The song, with its thinly veiled hints of violence against both criminals and protesters, and a music video shot at the location of at least two lynchings, drew enough ire for CMT to discontinue airing the video. The battle, however, has driven the song up some charts, because come on now, controversy sells.

Except, again: Aldean doesn’t write his own material. Four Nashville contract writers penned “Try That in a Small Town”: Kelley Lovelace, Neil Thrasher, Tully Kennedy, and Kurt Allison. The latter two are members of Aldean’s backing band, and probably appear in the controversial video (I couldn’t pick them out of a lineup). Of the thirty-eight singles Aldean has released to date, he has writing credits on exactly none.

Aldean isn’t the first Nashville artist to release songs described as pro-lynching. Without Googling, I can quickly name Charlie Daniels’ “Simple Man,” and several songs by Hank Williams, Jr. Daniels and Williams are arguably worse, because they do write their own material. Aldean could defend himself by claiming he’s just parroting contract writers’ words (though he hasn’t). Daniels’ and Williams’ violent fantasies are decidedly their own opinions.

Jason Aldean live on stage last week

The longer I live with this, though, the more I realize: that makes Aldean’s position less defensible, not more. Because Aldean signed away much creative control over his stage persona, he must surely live surrounded by corporate bureaucrats who vet decisions. Production management, record company executives, PR professionals, and others must’ve read the lyric sheet, heard the studio rough mixes, seen the video storyboard, and said: “Let’s run with it.”

Rebecca Giblin and Cory Doctorow, in Chokepoint Capitalism, write about how tightly controlled today’s music industry generally is. The Big Three music conglomerates—Universal, Sony, and Warner—are completely controlled by bean counters and MBA graduates, not by musicians, as in Johnny Cash’s heyday. Output is strictly managed, and no single gets released without extensive human and software scrutiny to ensure it sufficiently resembles past hits, which is probably why most “hit” music is repetitive.

“Try That in a Small Town” must’ve endured countless layers of scrutiny before being released, and the chance that nobody who heard or read the lyrics realized they were racially coded is infinitesimal. The chance that nobody on the creative or technical team knew the Maury County Courthouse, where the video was shot, saw two lynchings is laughably small. Somebody, probably several somebodies, green-lighted it anyway.

(As an aside, Aldean, like Daniels and Williams before him, never directly references race. He uses racially coded terminology; “carjack” in particular has been a racially coded term since at least the 1990s. But none of these artists directly mentions race. That’s how dog whistle language works: they use jargon they know their audience will interpret racially, but only obliquely. Then they blame their opponents for directly mentioning race first.)

For that many people to ratify shipping a sundown town anthem, the decision must’ve been conscious. Somebody within the bowels of the corporation must’ve deliberately decided to sell a song guaranteed to evoke anger. The corporation, therefore, must’ve willfully decided to poke a wasp’s nest, presumably because controversy sells. The polarizing conflicts arising from this song aren’t incidental; they’re almost certainly calculated and intentional.

This just demonstrates that old saw, that the rich aren’t your friends. They’ll pick your pocket while telling you that “other” over there—that immigrant, that anti-police protester, that Hollywood hillbilly—is your real enemy. They’ll burn your house down to collect the insurance, then act dumbfounded when you ask who struck the match. I believe that’s what happened here: they provoked a fight for the money.

And the longer the story continues, the more free publicity it receives. Given what we know about how money moves in the music industry, very little probably goes to Aldean himself, or his contract songwriters. The beneficiaries of this controversy are his record company, a BMG subsidiary, and his concert promoter, which is probably Live Nation. This fake controversy is a cash subsidy to billionaires. And we keep supplying it.

Thursday, September 22, 2022

Creedence Clearwater: Burnin’ Hard and Burnin’ Out

Bob Smeaton (director), Travelin’ Band: Creedence Clearwater Revival Live at the Royal Albert Hall

Promo photo taken during Creedence Clearwater’s only European tour

Fans of the rock band Creedence Clearwater Revival often discuss them in awe-filled, nearly reverential terms. I should know; I’m one of them. They burned fast and hard, producing five albums in 1969 and 1970, working under singer-songwriter John Fogerty’s iron-fisted rule. But they burned out equally quickly: by early 1971, John’s brother Tom Fogerty had quit the band, and they barely limped across the finish line a year later.

This documentary depicts CCR’s April 1970 performance at London’s Royal Albert Hall. That concert, though, runs barely 45 minutes, too short to constitute a feature-length film. So to flesh out the performance, the film is bookended with a documentary, more a hagiography, narrated by Jeff Bridges. This recounting presents CCR as “the biggest band in the world,” literal heirs to the Beatles, who had disbanded weeks before this concert.

Reviewing something like this, we’re split between two forces. The concert itself is solid, fast-moving, and entertaining, a breathless display of CCR’s aggressively American musical ethos in an iconic British venue. The documentary is… something else. It was clearly written in an attempt to recapture the experience of being a CCR fan in the spring of 1970, and blithely ignores much of what we now know was happening behind the scenes.

Bridges’ linking narration, written by John Harris, starts with the band’s origins in El Cerrito, California. Anchored by kids who’d known each other since middle school, the band, initially known as the Blue Velvets, consciously rejected British Invasion influences and played blues-rock based on Buddy Guy and Leadbelly. Snippets of seldom-heard 45s sound anachronistic for the mid-1960s, granting insight into John Fogerty’s early anti-rock influences.

As engaging as this chronicle is, though, fans can’t ignore that by 1970, fault lines were already developing. We know this, but the documentary apparently doesn’t. Tom Fogerty in particular appears disconnected from the band, stone-faced and out of sync even while singing classics like “Bad Moon Rising” and “Proud Mary.” Worse, the narration is uncritical of Fantasy Records, with whom CCR notoriously had one of rock’s most lopsided contracts.

Therefore we know, but the documentary apparently doesn’t, that the seeds of breakup already existed. Like the Beatles’ Get Back, which let audiences watch the band playing live on a London rooftop decades after they’d disbanded, the documentary depicts the illusions of the moment, not the historical scope. CCR appears only in archive footage, film shot and curated in 1970 by Fantasy Records’ PR department to sell albums, not depict reality.

A still from the performance video

After the throat-clearing, the film transitions to the concert itself. Here’s the part we actually wanted. CCR plays with a musical alacrity most acts only capture briefly: old enough to know their instruments and play with passion bordering on violence, but young enough to withstand the hot lights and screaming crowds. They play (most of) their classics for an audience to whom this music is still new and dangerous.

The 52-year-old footage bears the marks of its era. Camera operators keep trying to capture band members, especially John Fogerty, in tight facial close-ups, a common maneuver in 1970, substantially undercut by Fogerty’s refusal to stand still. John’s brother Tom plays rhythm on a big white arch-top guitar but mostly stands still, frequently overshadowed by the amp stacks. Camera operators largely ignore him, which is unfortunate.

I saw John Fogerty perform live in 1997, by which time “John Fogerty” had become a stage character as much as an individual. Like Mark Twain, Fogerty wore his persona consciously, with his affected semi-New Orleans accent and nostalgic ramblings. That Fogerty isn’t on display here. This Fogerty is soft-spoken but hard-rocking, aggressively belting hits with almost no stage banter. He appreciates the audience without courting them.

This is a period piece. The footage, though digitally restored, is aged, and the sound reflects 1970 amplifier technology. But I appreciate one aspect: unlike The Band’s The Last Waltz or the Rolling Stones’ Gimme Shelter, this film doesn’t interrupt the concert. CCR plays straight through, without self-conscious cinematic intrusion. One suspects this might be what it felt like to see them playing at their peak.

In the final moments, Bridges’ narration calls CCR “the biggest band in the world.” But ten months later, they were already disbanding. Fans will watch this performance with a fatalism that the documentary tries to avoid. Notably, no band members were involved with this documentary; it’s a product of the money machine, not art. But it’s also a tight, muscular performance of a band whose work really mattered.