Top: New York Post story, September 10, 2014
Bottom: Jill Garvey on “The Leftovers,” after and before the Sudden Departure
I was a big fan of HBO’s adaptation of Tom Perrotta’s The Leftovers, which just wrapped up its first season. In the show, set in a small upstate New York town, 2% of the world’s population has suddenly disappeared, and no one knows why. A significant portion of the first season dealt with the attempts by various types of people and groups (mothers who lost their whole family, cops going crazy, cults, faith healers) to find meaning in this radically changed reality. Should the disappearance be regarded as a test of our faith? Is it just another random tragedy, like the millions of random tragedies that happen every day, but all at once? Or is it a clear message from some higher power, a sign that we have failed, and our only hope lies in utter abnegation?
There’s been some great writing about the show. Andy Greenwald argued that it was about the experience of grief. Todd VanDerWerff sees it as a show about living with depression. And Jacob Clifton has written, fantastically, about the show’s existentialism and willingness to wrestle with questions of belief. But all these interpretations, excellent as they are, frame the show’s central metaphor as one of individual suffering: hurt, broken people, dealing with grief, depression, or meaninglessness. To me, that’s not what “The Leftovers” feels like. (The first season of In The Flesh deals, heart-rendingly, with grief and loss through the lens of zombies, and the loss there is very much individual.) Instead, the suffering on the show is collective, and public. The conflicts arise when one person interprets the meaning of the event differently from everyone else: the Guilty Remnant insisting that the event has one specific meaning, when everyone else wanted to forget. And public grief is always political. So what are the politics of tragedy on “The Leftovers”?
Blind item: WHICH trust fund kid was repeatedly called “authentic” by bloggers? (Hint: ALT-J dummies.) — Mr. Wrongbot (@mrwrongbot), September 22, 2014
The cycle of backlash is a familiar one: as soon as something becomes popular, people are immediately more interested in cutting it down to size than they were before. The naysayers would say that each case is different, and that each backlash target is truly being criticized on its own merits. But is that true? I wanted to see.
Cool job writing the hundredth shit article about Cash Cash when no one remembers the Riverside scene, you faddish fucks. — Mr. Wrongbot (@mrwrongbot), September 21, 2014
Using Python, I wrote a script for a Twitter bot called Mr. Wrongbot. It randomly selects one of the Hype Machine’s 10 most popular tracks, extracts the artist (by accessing the JSON file associated with the page), and then inserts the artist’s name into a randomly chosen insult, all of which are written beforehand with no knowledge of what the artist will be. (See tutorial here.) Are they plausible as insults? Or is the backlash really context-dependent? We’ll see!
(Code can be found here.)
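For the curious, here's a minimal sketch of the flow described above. The templates and the track data are stand-ins (the real bot's templates and the exact shape of Hype Machine's JSON aren't reproduced in this post), and the live fetching and tweeting steps are omitted:

```python
import random

# Stand-in insult templates, written beforehand with no knowledge of the
# artist; "{artist}" is filled in at tweet time. (Hypothetical wording,
# not Mr. Wrongbot's actual templates.)
TEMPLATES = [
    "WHICH trust fund kid was repeatedly called \"authentic\" by bloggers? (Hint: {artist} dummies.)",
    "Cool job writing the hundredth article about {artist}, you faddish fucks.",
    "Oh, {artist}? Sure, I was into them before the bloggers ruined it.",
]

def pick_artist(tracks):
    """Pick one of the popular tracks at random and return its artist.

    `tracks` stands in for the parsed JSON of the Hype Machine popular
    page; the "artist" key is an assumption about that schema.
    """
    return random.choice(tracks)["artist"]

def make_insult(artist):
    """Insert the artist's name into a randomly chosen pre-written insult."""
    return random.choice(TEMPLATES).format(artist=artist)

# Stand-in data shaped like a popular-tracks listing:
tracks = [
    {"artist": "Cash Cash", "title": "..."},
    {"artist": "Alt-J", "title": "..."},
]

tweet = make_insult(pick_artist(tracks))
print(tweet)
```

Because the insult is composed before the artist is known, any contextual plausibility the result has comes entirely from the reader, which is the point of the experiment.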
If we think the present is wrong, we want the past to have been right, and to have existed in an eternal, unchanging state of rightness. But just as U2’s falling sales are the result (at least in part) of the band releasing albums in the MP3 era, Cole Porter’s success was equally the result of his unique historical circumstances. His success on Broadway was only possible because of the mass urbanization that had taken place in America over the previous 50 years. The success of his songs independent of the stage relied on two inventions only recently popularized: radio and recorded music. Had Porter been working 20 years earlier, he would have had to rely on sheet music and home pianos for his music to spread, and would consequently have composed in a different way—and, presumably, a less successful one. We are all the product of historical circumstance, and while it is important to recognize the ways in which the present moment is different from those that came before, we have only two options for dealing with these changes: adapt our own behavior to the new environment, or work to push through changes that will bring about some other new, more beneficial context. But there is no going back; culture is, as statisticians say, path-dependent, always determined by what came before. To pretend otherwise is deplorable.
Ten years ago this week, Arcade Fire released Funeral, an album that not only transformed this once-ramshackle Montreal orchestro-rock collective into instant indie-rock icons, but forever transformed the very concept of indie rock from a fringe movement born of economic circumstance into an aspirational career model.
Stuart Berman is great, and everything else in this piece is insightful and accurate. But I’m not sure I’m on board with this final assertion. There have been periodic indie-rock gold rushes more or less continuously since 1987. The mere existence of a sellouts-decrying song called “Gimme Indie Rock” released in 1991 is enough to remind us that what we’re now calling “grunge” or “alternative” was very much indie at the time. (See also.) The more interesting question here is: why did indie suddenly become the power word? Even the indie rush that immediately preceded Funeral, that of the Strokes and White Stripes and Yeah Yeah Yeahs, was identified as garage rock (largely by the UK music press). Why, after nearly two decades of mainstream-bubbling indie getting renamed, did we decide to just stick with the basic classification?
The other idea here is that indie is more commercially viable than it ever was. Again, I’m not quite sure that’s right. Indie seems to be selling about as well as it always has, at least since the 90s (assuming here that we’re including all the indie music that got called something else, like “alternative,” in that count). However, it is charting higher. But as we know, sales of everything have been going down over the last decade, and it’s now possible to get a top 10 hit with fewer than 50,000 copies sold, whereas ten years ago that would have been considered a flop. So here is another question: if MP3s didn’t currently exist, would indie still be charting this high? Should we “adjust for inflation” for indie just as we do pop, so that some of these releases would have gone platinum in the CD era? Or is it just that something about the demographics of the indie audience means they’re still buying while everyone else has stopped?
Komar & Melamid - “The Most Unwanted Song” (2001)
Over on Facebook, we were having fun tearing apart this poor article, which uses the results of two studies to make the argument that a) music has been getting worse over time, and b) this is making pop listeners less creative—or, as the headline put it, “pop music is literally ruining our brains.” This article aside, the study the article used to make point A analyzed the amount of variation in pop songs over time and found that, from 1955 to the present, songs became progressively less variable in terms of what sounds they used, how much they changed in pitch, and how much they changed in loudness. Almost every news outlet reporting on the study interpreted these results as “Science Proves That Pop Music Has Actually Gotten Worse.” (Actually! Literally!) In other words: less variation=bad; more variation=good.
There are a host of issues with the study. But let’s just grant the results for a second. Is that conclusion true? There’s no better counterexample than Komar and Melamid’s “The Most Unwanted Song.” They asked 500 people what things they liked least in music, and then made a song composed solely of those elements.
The most unwanted music is over 25 minutes long, veers wildly between loud and quiet sections, between fast and slow tempos, and features timbres of extremely high and low pitch, with each dichotomy presented in abrupt transition. The most unwanted orchestra was determined to be large, and features the accordion and bagpipe (which tie at 13% as the most unwanted instrument), banjo, flute, tuba, harp, organ, synthesizer (the only instrument that appears in both the most wanted and most unwanted ensembles). An operatic soprano raps and sings atonal music, advertising jingles, political slogans, and “elevator” music, and a children’s choir sings jingles and holiday songs.
By the standards of more variation=good, this is the greatest song of all time. If “The Most Unwanted Song” were passed through the prior study’s analyzer, it would score incredibly highly: lots of variation in loudness, in tone, and in pitch. But it’s really terrible. (I mean, I’ve listened to it a good hundred times, but I don’t think I’m representative.) And that’s because there’s much more to art than what can be passed through a spectral analyzer.
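To make the problem concrete, here's a toy version of a spread-based variation metric. This is my own construction for illustration, not the study's actual method (which used more elaborate features), but it shows how any "more variation = better" score would necessarily rank the deliberately awful track above the monotonous one:

```python
import statistics

def variation_score(loudness, pitches):
    # Toy metric: total spread (population standard deviation) of the
    # track's loudness values and its pitch values. More variation in
    # either dimension yields a higher score.
    return statistics.pstdev(loudness) + statistics.pstdev(pitches)

# A monotonous modern-pop-style track: narrow dynamic range,
# two alternating notes (MIDI pitches 60 and 62).
pop = variation_score([0.80, 0.82, 0.81, 0.79], [60, 62, 60, 62])

# A "Most Unwanted"-style track: wild swings between loud and quiet,
# and between extremely low and extremely high registers.
unwanted = variation_score([0.10, 1.00, 0.05, 0.95], [24, 96, 30, 90])

print(pop, unwanted)  # the unlistenable track scores far higher
```

The metric is blind to everything that makes the unwanted song unwanted; it can only reward the spread itself.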
It would be nice if, even in this era of magical “big data,” we stopped pretending that the quality of art can be determined objectively. Even though we can come up with quantity metrics for music, those metrics say absolutely nothing about quality. A songwriter doesn’t make music better by including more chord changes or tonal variation. A songwriter makes a song better through choices about what to include and what to exclude—a quality known colloquially as taste. It’s the thing that’s pointedly missing from “The Most Unwanted Song,” and unthinkingly left absent from a depressing number of quantitative studies of music and humans. “Taste” is an unpleasant concept, because it’s terribly un-objective, and there’s always the possibility that someone has better taste than we do. But taste is central not only to how human beings experience art, but to how they make it, too.