Tag Archives: Philosophy

Fall: Verb, Noun, Season, Metaphor


Although I’m facing a late summer heat wave, and it’s  still about three weeks away, the beginning of school makes me think it’s fall.  It’s a strange word, “fall”: really a verb—action word!—technically also a noun.  Kids can recite “person, place, or thing” in a heartbeat, but fall is not any of these, not even exactly a thing.  Ideas are also nouns, but fall is not quite an idea.  Yes, in most parts of the world the temperature and weather literally change.  But seasons are also metaphors, and the idea of fall is the most powerful one.

Many people say they love spring.  But spring is a cliché.  Even the name “spring” sounds too eager to please, too self-helpy, archaic slang that should have gone the way of “keen” or “corking” or “moxie.”  Warmer weather, longer days, shorter clothes, life in bloom, fertility symbols like bunnies and eggs [1], school almost out, and, if you’re into that sort of thing, resurrections.   What’s not to like?  Spring ahead, fall behind.

It takes a special person to love fall.  Trees sense the cold and pull back unto themselves, sacrificing their own expendable body parts for the upcoming months of darkness to save the whole, like trapped animals gnawing off their legs.  The leaves self-sacrifice for the greater good, tiny reverse lifeboats abandoning ship, each a desiccated little martyr and hero.

We imagine that it’s the leaves that do the falling.  But people retreat in winter as well: into more interesting clothes, and the interiors of home and self, even more comforting knowing that it’s getting cold and dark outside.  And some of us like the feeling of falling.

Our language reflects fall’s pleasant equivocality.  We speak of falling asleep as something that happens almost by itself, pleasantly passive even as millions actively take medication and work hard to achieve it.  You’d think falling would be easy.  Then, once we do satisfyingly fall asleep, many of us have recurring nightmares. About falling.

Warning: this is not a metaphor!


We fall in love, the language itself shaping our understanding of life’s most delicate/confusing/overwhelming/important/wonderful/terrible feeling.  Fall suggests the suddenness of love at first sight, the helplessness, lack of control, and even danger.  I fell for her so hard.  Sounds painful.  Sometimes it is.  Unlike real falling, but like falling asleep, trying to fall in love will probably prevent it.  What would happen, though, if we did not fall in love, but, say, flew in love—or settled in love?  Floated in love, or ran in love?  Poured or drew or brewed or even stewed… in love?  Crashed in love?  When I met her, we didn’t dance in love right away, but gradually danced closer as we got to know each other.  Once we fall into a metaphor, we lack the imagination to get back up.


Few of us have fallen in any serious way in real life, and if we did, it was likely a horrifying accident, not something we would wish for.  And if we’ve not just literally fallen, but fallen in something, it’s even worse.  What, other than love, can you fall in that’s not terrible? And why fall in love at all?  Even if I try to change the image, love is still, metaphorically, something to be in, a container, at best; an abyss, at worst.  But most of us pine to fall in love.  Sometimes it feels good to fall, as so many amusement park rides simulate.  And, in the words of Jeff Bridges’s character in Crazy Heart, “Sometimes falling feels like flying/For a little while.”

In some ways, though, the idea of the fall has shaped our view of the moral and mortal world.  Last semester, when I taught Paradise Lost, students were struck by the sadness, but also the hopefulness, of Adam and Eve’s fall, their expulsion from Eden.  Yes, the fall is bad.  But, as the Angel explains,

This having learnt, thou hast attained the sum
Of Wisdom; hope no higher, though all the Stars
Thou knew’st by name, and all th’ ethereal Powers,
All secrets of the deep, all Nature’s works,
Or works of God in Heav’n, Air, Earth, or Sea,
And all the riches of this World enjoy’dst,
And all the rule, one Empire: only add
Deeds to thy knowledge answerable, add Faith,
Add Virtue, Patience, Temperance, add Love,
By name to come called Charity, the soul
Of all the rest: then wilt thou not be loth
To leave this Paradise, but shalt possess
A paradise within thee, happier far.
(XII.575–587)

That’s precisely what’s better about fall than spring.  The happiness is internal, not just external.  It allows for paradise within.  Besides, you can’t have spring without fall, can’t regain paradise without losing it, can’t love or sleep without falling, and you can’t fall in something that’s not already deep.  Spring—even Paradise—eschews fall’s depths.

The sunshine spring lovers love?  It’s carcinogenic.  The renewal of life? Life is a sexually transmitted disease with a 100% fatality rate.

Happy Fall!

Time: 60 minutes.


[1] And egg-laying bunnies. I shudder to remember the Cadbury Egg commercials showing a rabbit laying a chocolate egg.  KIDS: if you see this in real life, IT IS NOT CHOCOLATE.


We Have Entered the Era of Un-

In culture, literature, and theory, the 1960s marked the beginning of postmodernism.  And quickly the prefix post- became the operative way of understanding the world: post-war, post-structuralism, post-colonialism, post-industrialism; then, post-human, post-Boomer, and post-punk; more recently, post-millennial and post-apocalyptic; and for at least a little while in 2008, post-partisan and post-racial.   (Many a postdoc has been devoted to developing post-anything.)  Post- became more than a prefix—it became a worldview, an epistemological category.

But what, students in my class on postmodern literature reasonably asked, can possibly come after postmodernism, or post- anything? More post. Post-postmodernism. [Shudder]. Post- is the prefix that devours itself, since it is always after, belated, still waiting, and deferred. Nothing can come after post-.

Nothing except, with apologies to Existentialism, a new kind of nothing.

Enter: Un-.

Un-, like post-, is not a word. Unlike other prefixes, however, like pre- or post-, or re- or un-’s near-relative, under-, un- does not describe, affix in time, suggest repetition, or, like mis- or mal-, even suggest that something is wrong.  Unlike with-, dis-, de-, counter-, anti-, or even the powerful non-, un- does not suggest opposition, working against.  Un- suggests more than reversal or opposite: it is negation, disappearance, taking out of existence.  And if post- described the world after about 1945, Un- describes the world from 2000, or maybe 2001, to the present. We are living in the era of Un-.

Now, I realize that lots of words began with Un- before 2000.  I used “unlike” twice in the last paragraph alone. But I used it as a preposition, “dissimilar from.”  On Facebook, unlike is a verb: if you click Like, and then decide that you don’t like that thing anymore, you can click Unlike and it will erase your Like. Since Facebook does not have a Dislike button, Unlike is as close as people can get.

But Unlike is as different from Dislike as unable to disable, unaffected to disaffected, unarranged to disarrange, unfortunate to disfortunate (which is sort of a word).  Which is to say, very different.  Both suggest opposition, but dis- implies an active opposition, expending energy to reverse.  Un- feels passive, a kind of vanishing—or worse, the suggestion that the thing never was in the first place.  When we Unfriend on Facebook, we do something we cannot do in real life or face to face, which is presumably why the word had to be recently invented. We don’t Unfriend corporeal people.  We just—what, exactly?  Stop being friends? Spend less time together? Drift apart? Or something stronger—not a drift but a rift.  A fight, a falling out.  We’re not on speaking terms anymore.  But not Unfriend.  We can only Unfollow online, on Facebook or Twitter.  We can’t Unfollow in person.  Unfriend and Unfollow seem etymologically and epistemologically close to Untouchable, with the implications of prohibition, exclusion, disappearance. Unclean.

Like many people who spend time at their keyboard, I have become reliant on Delete, on Backspace, on Undo.  When I knock down a glass and wish it would float back in a startling cinematic backwind, or misplace my book and want it to reappear, or say something that I want to take back, I can picture Ctrl Z clearly in my mind’s eye.  But it does not Undo.   Glasses do not unbreak; books are not unlost but rather must actively be found (without Ctrl F, either). Words that are unspoken were never spoken, not spoken and stricken.  We say, I take it back.  But the words cannot be unsaid.  Judges instruct juries to ignore testimony, but lawyers know that jurors cannot unhear. Judges cannot unstruct.  Traumatized viewers cannot unsee.

Do not try this in real life

And so Un- fails at complete erasure.  Like a palimpsest, Un- can’t help but leave traces of its former self behind.  The close reader can see what used to be there, the residue of virtual Friendship, the electronically unsettled path left behind after one has Followed, or been Followed.  And perhaps this failure is for the best.  The only thing more powerful than Un-’s fever dream of retroactive disappearance is that the wish cannot come true.  If anything, the electronic world that birthed the fantasy of Undo is the same one that never lets us scrub our online prints away.

Time: 55 minutes

P.S. Please Like and Follow this blog.


No One Knows What Manhood Is Yet No One Will Stop Writing about Manhood

Just as I planned to write on new books about manhood—Time’s Joel Stein and Man-Made: A Stupid Quest for Masculinity, and GQ’s Glenn O’Brien and How to Be a Man—The New York Times goes and publishes a magazine cover story on the same topic, “Who Wears the Pants in this Economy?” an excerpt from a forthcoming book by Hanna Rosin.

Manhood, it turns out, is a deceptively elusive subject.  If the obscure definition of obscenity is “I know it when I see it,” then the definition of masculinity is even vaguer. Taking the two books and the Times article together, here is a definition of manhood: We don’t know it when we see it, we don’t see it when we know it, or we don’t know when we don’t see it.  And I thought Flight of the Conchords had this all sorted out in “Think About It”: “What man?/ Which man?/ Who’s the man?/ When’s a man a man? What makes a man a man?/ Am I a man?/ Yes. Technically I am.”

(see 1:18)

Take Stein’s book.  Please. [rimshot] The High Concept is this: the new father of a boy, Stein fears that his effete, metrosexual lifestyle will not allow him to raise his boy to be a real man, so he attempts all of the most stereotypically manly activities he can imagine, one per chapter—essentially hanging around with other men like Marines, day traders, hunters, and ultimate fighters—in order to learn the lessons that he’d like to pass on. Call it The Year of Living Manfully. The result is sometimes funny—“When I played Dungeons & Dragons, I was never a fighter or an assassin; I was always a magic-user.  Even in my fantasy life, I was a nerd”—and just as often not funny: “I am no human resources expert, but I believe Great Point Capital might have a much easier time recruiting female employees if it didn’t feel so much like Rape Point Capital.” But to pull off the conceit, Stein is too accepting of standard, out-of-the-box masculinity, pretending that decades of academic research into gender—across sociology, psychology, literature, and the entire field of gender studies—never happened.

I guess that could be OK—this book is clearly part humor, part stunt memoir stolen from A.J. Jacobs. Except that Stein keeps defining himself as an “urban intellectual” seemingly without irony (I thought that post-William F. Buckley, the word “intellectual” was now officially an insult) and therefore in opposition to the kind of manly adventures he chronicles here.  What kind of intellectual is this juvenile?  OK, I take that one back. But what kind of intellectual appears to have read nothing on the subject of his book, including parenting books? And while Stein intermittently brings up race, class, and his suburban Jewish upbringing, he seems not to think of manhood in sociological, political, or class terms, even as they clearly, inadvertently emerge that way. As a result, the book mostly ends up supporting stereotypes about masculinity—men don’t like to talk; men like to kill things and sleep outdoors—at his own self-deprecating expense, since he isn’t like this. But the stereotypes are also at the expense of exploring, developing, and challenging—or, if it suited him, defending—traditional conceptions of manhood. Stein begins the book believing that driving a fast car and firing a tank will make him more of a man, and concludes that, surprise, they have.  Self-consciously calling his book a “stupid quest” does not inoculate it from the charge that it is stupid. It is.  But that’s actually OK.  My problem is that it was never even a real quest at all.

Glenn O’Brien’s book seems at first as though it is exactly what Stein did not set out to write. Stein: “I’ve decided to make a list of tasks that I hope will turn me into a man. My list will not include anything I have ever read in GQ or Esquire: I will not learn to fold a pocket square, mix cocktails, build my triceps, look up the word bespoke, or get the right haircut for my face shape. That’s being a dandy. My book could beat up that book.” But it turns out that O’Brien did not write that book either, not exactly.  While there are plenty of sections on shirts, drinks, and style—not to mention that O’Brien clearly celebrates dandyism—what O’Brien has done is construct a deft collection of essays on topics related to manhood in the 21st century, while at the same time suggesting that some aspects of manhood are, indeed, timeless and archetypal.

So despite pages riffing on ties, O’Brien is far more intellectual than Stein—and therefore does not ever need to call himself one—suggesting that “A gentleman is reason personified” and referring or alluding to Socrates, Emily Post, the religious concept of acedia, Brad Pitt, Muhammad, Rocky Marciano, Andy Warhol, and hundreds more, in a way that seems erudite rather than namedroppy or shoehorned in.  So nothing about tanks, but rather, a confident book of ideas that I don’t always agree with but respect. And respect is a word that Stein reserves for his new friends but not himself—or at least the fake funny-guy persona he tries to foist on the reader.

Meanwhile, I can’t help but think of the Mark Twain adage, that to the man with a hammer, everything looks like a nail. The Style Guy sees manhood in style.  In Stein’s book, rich men see manhood in money; martial artists in punching; hunters in hunting; ballplayers in playing ball; firefighters in fighting fires.  But what happens when they lose their hammer?

That’s where Hanna Rosin comes in, in The Times.  Her article is about men who have not only suffered the indignity of losing their jobs, but also of SEEING THEIR WIVES SUCCEED! Which is somehow salt on their wound, as opposed to, I don’t know, “Thanks, Wife, for saving my ass.” Quote after quote reinforces their sorrow: “Probably no one has had their wife move up the ladder as far as I’ve moved down,” says one; “We’re in the South,” Rosin quotes another. “A man needs a strong, macho job. He’s not going to be a schoolteacher or a legal secretary or some beauty-shop queen. He’s got to be a man.”  This is Stein stripped of all humor, purpose, and self-consciousness, manhood not as fodder for jokes but just fodder, or just a joke.

Of course, manhood’s perceived strength—which is, um, strength—is its weakness.  Part of Rosin’s point is that women feel less entitled to start at the top and are more flexible employees, and therefore are better suited to contemporary employment needs. Yet Rosin also misses that men’s rigidity means her thesis is old news, destined to spark controversy before disappearing for another few years, when suddenly it is rediscovered, kind of like John Travolta.  Previously, in April 2003, the New York Times Magazine published “Commute to Nowhere,” with its thesis that “By the numbers, women have been hit as hard as men, but white-collar men tend to experience unemployment differently, organizational psychologists say. For most women, survival trumps ego; they simply adapt and find some job. For men, grappling with joblessness inevitably entails surrendering an idea of who they are — or who others thought they were.”

And in light of at least one other 2011 New York Times article, “The Gender Pay Gap by Industry,” maybe the problem of manhood is overrated to begin with: “Over all, women who worked full-time in wage and salary jobs had median weekly earnings of $657 in 2009. That’s 80 percent of what their male counterparts earned.”  Women are still only earning 80% of the pants.  They wear the shorts in the family.

In the end, if manhood can mean anything to anyone, then it doesn’t have any meaning at all.  In some ways, that would be a very good thing, especially to Rosin’s subjects.  I recently found out that Marlboro cigarettes, of all things, were originally marketed to women, pretty much proving that, at least in some arenas, gender is a total construct and fabrication with no intrinsic truth at all.  And that cigarettes’ flavor is whatever people believe it is, since the same ones are “mild” for women and full of “flavor” for men.

But in other ways, I’d like to see manhood stick around.  For all the emphasis on the South, the men of Rosin’s Times piece don’t know the first rule of manhood, inspired by Rhett Butler: a man doesn’t give a damn about what anyone thinks about his manhood.

And personally, I’d like to think that I do know it when I see it.  And technically I am.

Time: 90 minutes. And I had to force myself to stop.


“Call Me Maybe”: The Deconstruction

Carly Rae Jepsen’s “Call Me Maybe” is the musical embodiment of what critical theorist Jacques Derrida refers to as “différance.”  Unlike “Call Me,” Blondie’s earlier hit of almost the same name, “Call Me Maybe” throws the initial utterance, the command to “call me,” into question, even forces it under erasure, through the retroactive emendation of final ambiguity, “maybe”; “call me” lies simultaneously with its very negation.  Yet the call itself has not been placed, and in fact exists only in the world of the Imaginary—that which, in Lacan’s parsing, by definition we cannot know. The call forever remains hypothetical, subjunctive, unrealized: deferred.  As Derrida explains, “the relationship to the present, the reference to a present reality, to a being—are always deferred.”

At the same time, the title’s syntactical construction posits its speaker, “me,” in the object position, the patriarchal relegation of the feminine, even while the speaker simultaneously issues the grammatical imperative, “[You] call,” (re)positioning her in symbolic authority.  Derrida suggests that “Différance is the systematic play of differences, of the traces of differences … the simultaneously active and passive…”—just as the speaker of “Call Me Maybe” implies as well.   Further,  the lyric sheet reads “Call me, maybe,” with the comma to separate the command from the adverb, suggesting a heightened claim of ambiguity.  Yet the title, “Call Me Maybe,” with its elided comma and conventional titular capitalization, refigures its meaning entirely: the statement employs the dative declension, echoing literature’s most famous manifestation of this form, Herman Melville’s opening line to Moby-Dick, “Call me Ishmael.”  She is commanding the listener that she should herself be called Maybe, a name that is Not.

The speaker’s utterance, but also the speaker herself, has thus been rendered indefinite, unknowable, and deferred ad infinitum.  The title must be read simultaneously as “Call me, maybe,” “Call Me Maybe,” “Call me, maybe,” if the call is never placed, or “Call me, maybe” if it is. We therefore find Carly Rae Jepsen in the rhetorical situation of Derrida translator Gayatri Chakravorty Spivak.  In her Translator’s Preface to Jacques Derrida’s Of Grammatology, Spivak writes that her “predicament is [that of being] ‘under erasure.’  This is to write a word, cross it out, and then print both the word and deletion.  (Since the word is inaccurate, it is crossed out.  Since it is necessary, it remains legible.)”

While I have been using the gender-specific pronoun “she” to refer to the speaker, since Carly Rae Jepsen’s voice, clothing, and sex all code her as “heterosexual female,” the gender identity and sexual orientation of the speaker are in fact ambiguous as well. The opening line, “I threw a wish in the well/Don’t ask me, I’ll never tell,” recalls the famous “Don’t ask, don’t tell” law established under the Clinton presidency preventing gay and lesbian soldiers from revealing their sexual orientation, at the risk of military discharge.  The ending, or “punch line,” of the “Call Me Maybe” music video introduces the possibility that what we had been viewing all along is not a heteronormative enactment of adolescent dating rituals but rather their subversion, playing upon the complacent viewer’s culturally rigid assumptions of masculinity.

Indeed, the song not only embodies différance; it embraces paradox.  The repeated last line to each verse, “And now you’re in my way,” as well as the reiterated “Where you think you’re goin’, baby?” imply the threat of male coercion despite the feminine vocal delivery.  And the final bridge section, repeating  “Before you came into my life/ I missed you so bad” like a mantra, becomes a Zen kōan, reflecting upon a sublime yet uncanny sense of temporal disconnect.  The notion that one can miss something that has not yet been experienced recalls haiku poet Matsuo Bashō, who writes of the ways in which one can long for an interior, emotionally subjective construction of life even at the expense of its own reality:

Even in Kyōto—
hearing the cuckoo’s cry—
I long for Kyōto

The sense of différance set forward by the lyrics is further augmented by the music behind the chorus. The standard popular song follows a I-IV-V-I pattern: firmly establishing its chord progression with the I chord, developing tension through the IV and V chords, and then resolving the musical conflict by reestablishing the root or alternately moving to the root’s relative minor.  In “Call Me Maybe”’s key of G major, however, the chorus chords move back and forth between C (the IV) and D (the V) without ever returning to G (the I) or moving on to E minor, never resolving—a musical manifestation of différance itself, even through the end of the song, which, unlike the conventional fade-out, ends in a pitch-shifting downward spiral, deferring even the idea of a musical conclusion.
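For readers who want the harmony spelled out, here is a minimal sketch in Python, assuming only the IV-V alternation described above; the chord names are illustrative, not a transcription of the record.

```python
# A sketch of the chorus harmony described above (illustrative, not a transcription).

# Diatonic triads in G major, keyed by scale degree.
G_MAJOR = {
    "I": "G", "ii": "Am", "iii": "Bm", "IV": "C",
    "V": "D", "vi": "Em", "vii": "F#dim",
}

# A conventional progression resolves back to the tonic.
conventional = ["I", "IV", "V", "I"]

# The chorus as described: IV and V alternating, never landing on I or vi.
chorus = ["IV", "V"] * 4

def resolves(progression):
    """True if the progression ends on the tonic or its relative minor."""
    return progression[-1] in ("I", "vi")

print([G_MAJOR[d] for d in chorus])  # ['C', 'D', 'C', 'D', ...]
print(resolves(conventional))        # True: tension released
print(resolves(chorus))              # False: resolution deferred
```

The point is only that the loop never earns its cadence: run it and the chorus check prints False, the I forever withheld.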

The final result of this radical indeterminacy is that “Call Me Maybe” is a musical Mona Lisa, rendering itself a cultural cipher, a tabula rasa upon which any reader may impose meaning; with over 222,500,000 views on YouTube, its video is a floating signifier capable of accommodating virtually any viewer.   As such, the Internet is inundated with “Call Me Maybe” memes, each imagining a different, resolved signified of the song; taken together, they negate one another, paradoxically denying any such certainty.

And so many more

Maybe.

Time: 75 minutes

Derrida quotations from “Interview with Julia Kristeva” in Positions (University of Chicago P, 1981)


The New School Year! Or, Despair is Not Just for Students; Or, Two Cheers for Uncertainty

Dickens’ opening lines of A Tale of Two Cities—the famous “best of times; worst of times”—sometimes at risk of turning into a cliché, instead seem truer all the time.  I can listen to any song ever recorded and ingest better wines, cheeses, fruit, and fish than all the kings of yesteryear, even as the world is plagued by more apocalyptic scenarios than I can recount here, from scorched earth to possible pandemics to rogue nukes to real-life zombies to the end-of-year tax cliff.

In keeping, this best of times/worst of times dichotomy also works for the opening of the college term. For students: friends! College life! And best of all: possibilities.  And the worst, as they often discover after a class or two: the pressure, the exhaustion, the work. College would be so much fun if not for the classes.

I too relish the energy and opportunity of the beginning of the school year.  But I also feel doubt, even dread.  Unlike for students, the angst isn’t about work, which I love.  It’s existential. Does teaching students to read, write, and think make any difference in the world at all?  Americans hardly read books anymore; schools are teaching less and less fiction and creative writing; writers can’t stop plagiarizing anyway. So why bother? The majority calmly play Angry Birds while Rome burns, but is teaching writing and literature—or, worse, writing or blogging itself—any better, or just a more painful and equally pointless endeavor?

I didn’t always feel this way. If anything, I ironically worry more now that I have more experience and am, arguably, at the top of my teaching game. Unlike during my first few years, I no longer feel like an imposter, and unlike decades from now, when I’ll remember the good ole days of online course management systems, discussion boards, and blogs before it all went downhill with the introduction of cerebral cortex implants in 2032, I still know what I’m doing.

Maybe it’s me.  An article in The Chronicle of Higher Education last spring suggested that mid-career professors were less happy than those who were starting out, despite better pay and job security: “The survey shows that on most key measures, professors are actually happier while working toward tenure than they are once they’ve earned it.”  This reversal calls for more clichés: journey not destination, be careful what you wish for, etc etc etc.

But in another sense, this dissatisfaction is a narrative problem as well: what do you do after you’ve reached the end?  I am applying for my final promotion this year, to what is commonly known as full professor, and after that, despite that I’m on the early side of midlife, I have nowhere left to go professionally. Except, I suppose, down.

Or maybe: it’s OK.

Not the problems, but the doubt, the ambivalence, the conflict.  In addition to more doubts, I feel a concomitant skepticism of the usual virtues of certainty and decisiveness.  It appalls me that the dictionary lists “weakness” as an antonym of “determination,” and that, say, Hamlet’s doubt is often taught as his tragic flaw.  If anything, the seven deadly sins get it right: pride is far more dangerous than uncertainty, since it is through doubt, even vacillation, that we grow, reflect, change, and learn.  In fact, Hamlet’s real flaw was the same as in the ancient tragedies: his hubris.  He believed that the world revolved around him, and that he could treat those closest to him, especially Ophelia, with caprice and contempt, BECAUSE HE WAS WRONGED.

The little voice inside that always asks, “Why should students have to do this?” is my students’ best advocate, so that when they think—or ask—the same question, they’ll learn that I do not treat the question casually or cynically.  It’s the best question I can think of.

One of my little pleasures is that the word “Commencement” means beginning; it is used to signal the opening of the term, but it is also now synonymous with completing one’s education, graduation, or what feels like the end of something for students.  Yet once they graduate, most jobs are about the same in September as they are in January or April, and the narrative wonder that’s built into the school year disappears.  But I cherish it, so that I always have another start, and a new conclusion that begets a new start and another finale, to look forward to. Students—and teachers—get to experience life with a series of beginnings and endings built in.  Everyone else receives only one ending.

At the risk of sounding trite, students should read because it’s fun, and a different, deeper, better, even more lasting kind of fun than Fruit Ninja.  And sometimes it also happens to be beautiful, or ugly, or compelling or—and I use this word despite doubt, skepticism, and ambivalence—true.

Although I reserve the right to change my mind on that.

Time: 60 minutes. Back on schedule.


Are There Two Kinds of People in the World?

Who are we to call him Monster?

 

It was bad enough to wonder whether I was a man or a Muppet.  Now I’ve spent all weekend worrying that I was also the wrong kind of Muppet.

I blame Dahlia Lithwick, who wrote that there are two types of Muppets, “chaos Muppets” and “order Muppets,” and that, by extension, “every living human can be classified according to one simple metric: Every one of us is either a Chaos Muppet or an Order Muppet.” 

Lithwick elaborates:

Chaos Muppets are out-of-control, emotional, volatile. They tend toward the blue and fuzzy. They make their way through life in a swirling maelstrom of food crumbs, small flaming objects, and the letter C. Cookie Monster, Ernie, Grover, Gonzo, Dr. Bunsen Honeydew and—paradigmatically—Animal, are all Chaos Muppets. Zelda Fitzgerald was a Chaos Muppet. So, I must tell you, is Justice Stephen Breyer.

Order Muppets—and I’m thinking about Bert, Scooter, Sam the Eagle, Kermit the Frog, and the blue guy who is perennially harassed by Grover at restaurants (the Order Muppet Everyman)—tend to be neurotic, highly regimented, averse to surprises and may sport monstrously large eyebrows. They sometimes resent the responsibility of the world weighing on their felt shoulders, but they secretly revel in the knowledge that they keep the show running. Your first grade teacher was probably an Order Muppet. So is Chief Justice John Roberts. […] It’s simply the case that the key to a happy marriage, a well-functioning family, and a productive place of work lies in carefully calibrating the ratio of Chaos Muppets to Order Muppets within any closed system.

Two things become pretty clear: 1) despite her ironic implications (“This is really just me having fun,” she protests a little too strongly, filing it under “Dubious and Far-fetched ideas”), Lithwick takes her binary system pretty seriously; and 2) despite that “It’s not that any one type of Muppet is inherently better than the other,” she clearly prefers chaos Muppets.  So do I.  And, I’ll add, so does everyone.  Chaos Muppets have all the fun, and order Muppets are the straight men, the ones who get flabbergasted and frustrated and freak out while muted trumpets go “Wha wha whaaa” at their expense.

Which is why I found it so disturbing to realize, as I was obsessively vacuuming the living room, that I was clearly an order Muppet.  Even worse was the realization that my wife is also an order Muppet, even as Lithwick takes pains to suggest that her classification system is crucial for life partners: “Order Muppets tend to pick Chaos Muppets for their life partners, cookies notwithstanding. Thus, if you’re in a long-term relationship with a Chaos Muppet, there’s a pretty good chance you’re Bert. If you’re married to an Order Muppet, you may well be the Swedish Chef. And by all that is holy, don’t marry your same type if you can help it. That’s where Baby Elmos come from.” No word on what becomes of the children of two order Muppets.

I didn’t feel this way after reading Heather Havrilesky’s “Steve Jobs: Vampire. Bill Gates: Zombie”  in the New York Times Magazine last October, which suggested that “Vampires and zombies seem to reside at the polarities of our culture, telling us (almost) everything we need to know about (almost) everything in between.”  It was clear to me that I was a vampire, and that the piece, like Lithwick’s, wanted us to feel as though the writer is disinterested in the distinction when really vampires come off far cooler.

As Havrilesky puts it,

Vampires are solitary and antisocial and sleep in the ground. Zombies are extroverts, hanging out in big, rowdy clusters, moaning and shrieking, and apparently never sleeping at all.

Why do these sound like people I know? Maybe because these two approaches to being undead mirror two very different approaches to being alive. You’re either a vampire or a zombie, and it’s easy to tell which one.

The vampires are the narcissists, the artists, the experts, the loners: moody bartenders, surgeons, songwriters, lonely sculptors, entrepreneurial workaholics, neurotic novelists, aspiring filmmakers, stock traders, philosophy professors. The zombies are the collaborators, the leaders, the fanatics and obsessives: I.T. guys, policy wonks, comic-book collectors, historians, committee heads, lawyers, teachers, politicians, Frisbee-golf enthusiasts.

“Sexy!”–New York Times

This is all meant to be fun and funny.  But we really are required to place ourselves in mutually exclusive binary categories all the time.  There’s Male/Female, of course, and even if biology or culture weren’t forcing our hand, our English pronouns leave us no gray area. (“Ze” is not a viable option yet.)  There is the dichotomy that still allows for, insists on, legal segregation: smoker and nonsmoker.  There is the dichotomy that no one thinks about but may be the most intrinsically important one of all: to borrow from Sharon Olds’s book of poems, The Dead and the Living.  There was the ancient Greek distinction, between themselves (Greeks) and barbarians (everyone except Greeks). That dichotomy was originally related to language, but like chaos Muppets/order Muppets and vampires/zombies, you know which side you’d rather be on.    

In The Good, The Bad, and The Ugly, Blondie (Clint Eastwood) says, “There are two kinds of people in the world: those with loaded guns, and those who dig.”

Tuco, though, has his own ideas: “There are two kinds of people in the world, my friend: Those with a rope around the neck, and the people who have the job of doing the cutting.”  They’re the same two groups for both men, but sometimes the ones who carry loaded guns wind up with ropes around their necks as well. You have to wonder, though, about a movie whose recurring motif is “two kinds of people” when its title clearly suggests that there are three.

Yet in many ways, these writers aren’t so different from the psychologists who want to squeeze all of humanity into two boxes, despite that context and mood probably influence our actions more than a temperament derived from multiple-choice testing: extraversion or introversion; sensing or intuition; thinking or feeling; judgment or perception.   Nietzsche knew better.  He didn’t think in terms of two types of people, but rather two human impulses, as anthropomorphized by the Greek gods Apollo and Dionysus.  Clearly, Apollo is an order Muppet and a Vampire, while Dionysus is a chaos Muppet and a Zombie.  But as humans, we are both and neither, instead the product of constantly conflicting beliefs, moods, attachments, and desires.  Putting people into simplistic categories has the potential to explain as well as dangerously simplify the world. As writer Tom Robbins put it, “There are two kinds of people in this world: Those who believe there are two kinds of people in this world and those who are smart enough to know better.”

So now I know better.  

Time: one hour.


Hunger Games are from Venus, Hunger Artists are from Mars

Some assembly required. Batteries not included.

Just in time for the movie, if two years behind the teens, I read The Hunger Games.  But even though he’s been dead for almost ninety years, Franz Kafka beat me to it.  In 1922, just a few years before he died, Kafka published the short story A Hunger Artist, a weirdly candid but unsurprisingly depressing meditation on a man who starves himself for the entertainment of others.  Although the story was published ninety years ago, it is already nostalgic, looking back on the golden era of starvation artists, a real-life phenomenon in which men would live in cages, their wasting away a public spectacle for gawkers. As the story opens, “During these last decades the interest in professional fasting has markedly diminished. It used to pay very well to stage such great performances under one’s own management, but today that is quite impossible.”

As usual with Kafka, it’s nearly impossible to easily interpret, although at least no one wakes up as a cockroach.  Is the story autobiographical and symbolic, with emphasis on the word “artist”: starving artists as hunger artists, sacrificing themselves for their art?  Is the hunger artist a Christian martyr or Christ himself, sacrificing his body for the seeming benefit of others, even if those others don’t know it? Is the story sincere or ironic—does Kafka really think that slow starvation is a great performance?  Is the hunger artist a victim of a vicious society or the perpetrator of a con, making a living literally doing nothing?  Is he misunderstood, as he believes, or does he misunderstand himself?  Kafka seems to want the story to seem spiritual and existential, but in our contemporary culture of eating disorders and reality television, he now seems anorexic and narcissistic, equally food- and attention-starved—psychiatrically disordered, rather than ascetic, spiritual, or even alienated.  The hunger artist would have loved the present.

So let’s cut to the present.  The Hunger Games, the first major post-Harry Potter young adult lit phenomenon, seems the titular heir to Kafka’s hungry hungry hero.  Yet I had some major qualms about the book—at least until I was more than halfway through it.  Like Hunger Artist, Hunger Games is also nostalgic, not because the days of starvation are behind them but because they are ahead. In this futuristic, totalitarian dystopia—like there’s any other kind?—America is now Panem, but not the friendly skies: a weird amalgam of technological advancement amidst an overall feudal, semi-agrarian society. 

Our futuristic dystopian overlords, apparently.

In order to keep the story’s twelve districts in line and circumvent rebellion, the government, such as it is, uses a lottery to select two contestants—Tributes, one boy and one girl—from each district, elevates them to celebrity status, has them model haute couture and eat haute cuisine, makes them appear on TMZ, then televises their gory fight to the death, with a single winner rewarded with food and other valuable prizes.    The good news is that this setup keeps ex-contestants from robbing convenience stores or starring in pornography once the show is over.  The bad news is that it doesn’t make much literal or political sense.  We like our ultimate fighting and our reality stars separate, not that I’d be surprised by Kickboxing with the Kardashians.  But time-tested, old-fashioned slaughter, secret prisons, pogroms, public impalement, and killing fields are far more cost-effective for the frugal, discerning despot.

The influences show everywhere: Shirley Jackson’s The Lottery, obviously, Stephen King’s Running Man and The Long Walk, an episode of Justice League called War World, which itself borrowed from Spartacus, and every battle royale ever written, from Koushun Takami to Ralph Ellison. Plus, the writing seems equally prosaic. While it’s ostensibly the first-person POV of Katniss Everdeen, our protagonist (and therefore, we quickly surmise, winner of the Games, a kind of built-in spoiler), the language is often so clichéd and dry that it reads more like a book report about some other, better-written novel that Katniss read and is telling us about secondhand.

Yet somehow, even with this ticker of criticism running through my head as I read, I found myself enjoying the book more and more, until by the end, none of the problems mattered, any more than the unlikelihood of talking bears or the existential crisis of wishes in a fairy tale. 

Even more than what turns out to be the novel’s narrative triumph—that is, somehow creating suspense even when the ending is predestined; somehow making interesting a violent snuff film of a bunch of kids killing each other—is what the novel does for gender.  It may seem, in our post-Aliens and Terminator world, that female heroes are at last the norm, but they’re not, not really.  Katniss is simply herself, and who she is is tough, but not particularly smart; self-preserving more than altruistic, even if, like Kafka’s hunger artist, she seems to sacrifice herself for her sister Prim and despite that she does rue Rue; skilled at traditionally masculine tasks like hunting; and lucky, but the kind of lucky that comes after the disaster of living in Panem and winding up in the hunger games.  In other words, she’s far more like Harry Potter than Hermione Granger, more Peter Pevensie than Susan, who does receive a bow and arrow from Father Christmas but is admonished to use it only “in great need…for I do not mean for you to fight in the battle.”  Girls are supposed to be the smart ones, the sisters, the girlfriends, the blank slates, the protected, the supporting characters. Katniss is not any of those things.  She’s better. Yet at the same time, the book never seems to have any gender agenda.

What’s more interesting, though, is her contrast with the male District 12 tribute, Peeta, whose name sounds feminine and reminiscent of bread (he’s the baker’s son), who protects himself in the hunger games by painting himself in camouflage and hiding, and whose sensitive romantic dumb love for Katniss could give Bella a run for her hanky.  This alone would be an interesting gender reversal. But the book does more.  After an improvised rule change forces Katniss and Peeta to team up, Peeta’s injuries make him more of a liability than an asset for Katniss. But not only does she have to protect him, she needs to protect his male ego, so that as she’s protecting him, she has to make him believe that he’s protecting her.  Edward, Jacob, and all those other guys just have to protect, without any self-consciousness or subterfuge.  And in the end, [yes, yes spoilers, although why you’re reading this if you haven’t read The Hunger Games is a mystery to me] when Peeta and Katniss both live, we discover that Peeta’s leg has been amputated.  He’s been saved by a girl like a hundred times, and then symbolically castrated.  And all he wants is looooove.

I remember in my first year of college reading a super politically correct textbook called Racism and Sexism.  I no longer have it, so I can’t double check this (although I never sold books back so it must still be on my old bookshelf in my parents’ house).   But in it I remember a thought experiment for guys, imagining that every President, nearly every major world leader, nearly every famous scientist, nearly every writer until only a hundred years ago, etc etc etc, was a woman, and how women must feel about the real world.  I got it then, of course.  But I think I get it much better now, thanks to Katniss and The Hunger Games.  In the back of girls’ minds, there had to be a little nagging that the girl is always a Wendy but the boy gets to be the Peter Pan.  Yet when kids read Hunger Games today, they’re not going to think about Kafka, or Shirley Jackson, or the occasional clichéd language.  They’re not even going to notice that Katniss stands almost alone as a realized yet nonchalant female hero.  They’re just going to take the book as it is, and Katniss for herself. 

For a story in the dystopian future, it makes me very optimistic.  And the only Kafkaesque hunger the fans feel is for the next book. 

Time: a little over an hour

Jesse Kavadlo

Coming soon: from Wall-E to Hunger Games to Gone to Uglies: what’s with all the dystopia for kids?   

UPDATE: Here’s that post: https://jessekavadlo.wordpress.com/2012/03/26/bedtime-stories-after-the-end-of-the-world-ages-12-and-under/


VH1’s Metal Evolution as Interpreted by Theorists other than Charles Darwin

[Previous blog on VH1 and heavy metal]

VH1 concluded the first season, eleven episodes, of Sam Dunn’s documentary on heavy metal, Metal Evolution.  The thing that impresses me most, even more than the obvious time, money, energy, thought, and love that went into it, is the thesis: Dunn is actually true to the title, reading the history of metal as a gradual process by which the music changed into different forms and subgenres over four decades.  The introduction (excerpted in the clip below) shows Dunn hard at work constructing his diagram of categories and hand-lettered band-name logos, using architect-grade pens, an X-acto knife, pushpins, and string, so that the resultant chart is a meticulous assemblage worthy of a lepidopterist, cartographer, or serial killer. As he works, the camera flashes to a bust of Charles Darwin, and then later to a bookshelf highlighting The Origin of Species.  Dunn clearly sees metal as deserving of a hagiographic, Ken Burns-style documentary, even as metal, unlike Burns’s jazz and baseball, is not a simple slice of Americana; like an anthropologist, Dunn traverses the globe, frequenting Britain but also hitting Germany, Denmark, Canada, Brazil, and more, all to catalogue the comprehensive metal diaspora.

[Clip: Ad for Metal Evolution series; about 1 minute in, turns into clip of anti-metal diatribe for some reason. Ah, Youtube]

Yet [channeling Carrie Bradshaw] I couldn’t help but wonder: what if the series went on beyond Darwin? [Smiling for not saying “evolve.”] 

Metal Materialism

 

I'm a Marxist. A Groucho Marxist.

Dunn uses the image of evolution to suggest change, but it’s clear that it’s not natural selection as much as the unnatural, invisible hand of the marketplace:  the 1960s and early 1970s are presented as a golden age of metal, only to lead to a bloated, decadent phase of arena rock in the late 70s. Which then led to the energized, revitalized New Wave of British Heavy Metal (NWoBHM) 🙂  Which led to late 1980s glam excess and languor 😦  Which led to deeper, darker thrash 🙂  Which led to back-to-basics, punk-influenced grunge (:S [confused face]) Which led to Nu Metal (first 🙂, with Korn, then 😦, with Limp Bizkit and Linkin Park, with spelling 😦 the whole time).  In each case, it’s not exactly that the music got old as much as the target market did—record companies were always on the lookout to find the next big seller for the next generation, happy to dump last year’s act in favor of a new flavor, only to dump them, ad infinitum.

But it’s not just market fluctuation as much as a deliberate assimilation of subversion.  Hard rock, then metal, then thrash, then grunge, are systematically strip-mined of their rebelliousness; the very thing that in one year makes it dangerous in the next makes it a hot commodity.  Venture vulture capitalism not only absorbs the marginal into its mainstream; it profits from packaging and selling rebellion right back to the teens who invented it, until it’s all gone.  Then it moves on to the next form. This is not evolution as much as a business cycle, or, if you’re thinking generously Hegelian, a series of dialectical movements between conservatism and creativity, reformations and counter-reformations.

Metal Poststructuralism

Don't be so Saussure

But what about the episodes I didn’t mention above, on Shock Metal, Power Metal, and Progressive Metal? They fall outside—or maybe alongside—Dunn’s partially chronological approach, a kind of concurrent evolution, so that each of these three episodes starts over again in the 60s, even as the first eight episodes were working their way closer to the present.  We can think of metal, then, in Roman Jakobson’s terms: syntagmatic—linear, forward-moving, evolving, chronological, narrative—as well as paradigmatic—vertical, categorical, thematic, metaphorical.  Seeing metal as moving from roots to early metal to NWoBHM to glam to thrash to grunge to Nu metal is syntagmatic; seeing the previous episodes as representing the traditional narrative of metal with outliers in Shock, Power, and Prog is paradigmatic.

Alternatively, we can see all of heavy metal as a language system—the langue of heavy metal always consists of loud, distorted guitars, hard-hitting drums, extreme vocals (whether screaming, high-range, guttural, or Cookie Monster), and rebellious attitude; the parole of metal comes from the specific utterances and subgenres.  The reason your grandma (or a nonfan) can’t tell the difference between any of these episodes is that they’re not native speakers of metal—they recognize only the langue but cannot decipher the particulars of the parole.

Metal Patriarchy

I would not even think about putting a funny caption here

Dunn in general is not looking at metal’s faults.  Fair enough. It’s his show.  Yet the glaring fact is that, over eleven hours and interviews with hundreds of musicians, producers, journalists, and academics, I counted only three women: a manager, a professor, and Melissa Auf der Maur, bassist with Hole and other groups. (I may have missed someone, I suppose). 

Maybe it’s just a numbers game—metal bands are mostly male.  But consider one of Dunn’s very un-anthropological forays into complaint: he is very clear about his dislike of glam metal and seems only to include it out of some fanatical completist’s OCD.  And why does he dislike glam?  It seems, in part, because he sees the groups as feminine, wearing makeup and spandex, although, again, Grandma would see most of these groups as effeminate.  Ugly androgyny and makeup à la Alice Cooper and Marilyn Manson, who even assume women’s names, are OK, but not stage makeup or names like Rikki Rockett.  And beyond looking like women—or, arguably, caring about their looks at all—what is glam’s other serious violation? It appealed to—GIRLS!  In fact, the one thing that all of Dunn’s defective eras in metal share—including his open disdain of Linkin Park—is that they had a significant number of female fans.  Dunn’s metal shop is a boys’ club.

(Not that glam isn’t also, paradoxically, a low point in lyrical misogyny.  Dunn is not particularly interested in lyrics anyway.  And unlike the other metal genres, glam has at least discovered girls in the first place.) 

Metal Heliocentrism

Revolution Number 9

Dunn seems to see the 60s as the Big Bang of metal creativity.  And the cosmological model may be better than the evolutionary one, as evolution implies not just change but change into a better form.  For Dunn, it’s clear that the subjects of his previous documentaries, Iron Maiden and Rush, represent the sun around which the other bands and genres revolve.  The introduction plays Maiden’s The Trooper, and these two groups still seem absolutely central to Dunn’s metal universe, rather than mere transitional stages in a larger evolutionary process of species improvement. 

Metal Psychoanalysis

Sometimes a circular saw codpiece is just a circular saw codpiece. Oh, wait. No it's not.

If Dunn can use Darwin and I include Marx and Copernicus, it’s only fitting that I end with the other world-changing thinker, Freud.  The introduction also flashes briefly to photos of Dunn’s childhood and his college degrees on the wall.  It’s hard not to wonder whether this whole documentary filmmaker gig isn’t a chance to meet the idols of his youth—and, in some oedipal sense, surpass them.  Many of the former stars are now aging, overweight, bald, and way, way past their era of fame.  Dunn is in charge now, calling the shots and asking the questions, controlling—creating—the metal narrative.  And at what must be a height of about 6’5”, Dunn again and again towers over the rock stars.  The star-struck child returns, and this time he is the symbolic adult.   Power metal indeed.

Forget metal evolution—Dunn has crafted himself as metal’s Intelligent Designer.

Time: Yeah, I’m over an hour on this one. Yeah.

 

Jesse Kavadlo

UPDATE 2/15/12: Read the follow-up to the part that got people talking: Women and/or Rock.


Game Over: When Bad Things Happen to Good Videogame Characters

Death by a thousand pixels

Two nights ago, I noticed that my boys, ages 10 and 13, looked—there is no other word for it—depressed.  Two weeks ago, I wrote about their obsession with/addiction to Legend of Zelda: Skyward Sword, including this: “for all the seeming fantasy, what the game—most games?—embodies are the very same strictures surrounding American school and work life.  Playing the game must be fun, too, I guess, but the real joy seems to be advancing to the next level—only to work toward surpassing that one, ad infinitum.”  But they didn’t look happy now.  My younger son should have been especially happy, because my older son had helped him beat a tough part, much to my chagrin—I’ve told them repeatedly that they should not play each other’s turns or games, since the playing, not the winning, was the point.  You wouldn’t ask someone to eat your ice cream for you.  They persisted anyway.

But now, they weren’t down because they had lost.

They were down because they won. It turns out that they beat the game. 

And with that victory, a kind of defeat: my doctorate of philosophy calls for a diagnosis of Existential Crisis, one that usually doesn’t set in for another few years, the nagging, gnawing, corrosive question that sets in at adolescence and, in some cases, never ceases: Is That All There Is?

It turns out that once you get to the last level, beat the last villain (in video game parlance, “Boss,” which seems weirdly Marxist to me), and rescue Zelda, the credits roll (Dear Fellow Old People: video games have credits), and play simply starts over at the beginning again. 

I asked them: what did you think would happen?  The point of the game was, as always, to kill monsters, beat bosses, acquire money (“Rupees,” which seems weirdly Asian Subcontinent), and move one level closer to finding Zelda.  It couldn’t go on forever, could it?  Did they think victory would reveal a secret code for a secret club or secret game? That a crisp $20 bill would pop out of the Wii? No, but—and here I paraphrase—they didn’t think that winning the game would feel so much like losing it.  Not just emotionally—really, all that happens after you win is that you go back to where you started, same as when you lose.

For all the scholars who suggest that video games are texts ripe for analysis, or that they even surpass more conventional narratives like stories thanks to their interactivity and player control, the end of the video game seems very different to me from the ending of a story.  As Walter Benjamin says in “The Storyteller,” readers intuitively understand all of life through the end of the story, which represents a kind of death, or through the actual death of a character:

The nature of the character in a novel cannot be presented any better than is done in this statement, which says that the “meaning” of his life is revealed only in his death. But the reader of a novel actually does look for human beings from whom he derives the “meaning of life.” Therefore he must, no matter what, know in advance that he will share their experience of death: if need be their figurative death—the end of the novel—but preferably their actual one. How do the characters make him understand that death is already waiting for them—a very definite death and at a very definite place? That is the question which feeds the reader’s consuming interest in the events of the novel.

In other words, as human beings we can never understand the full significance of our own lives, because we must live them, from our perspective, and can’t reflect on our own ending, because we’re, ya know, dead.  But we can contemplate the full life, objectively, of a fictional character, because the beginning and end of the story delineate the full beginning and end of their existence.  And so through fiction—the figurative deaths that are stories and the more real but still fictional deaths of characters—we may understand something big—Death!—that, by its very nature, eludes our grasp, and therefore we may take comfort. As Benjamin concludes, “What draws the reader to the novel is the hope of warming his shivering life with a death he reads about.” It’s uplifting.  Really.  So we think that we’re sad when our favorite characters die or our favorite stories end, but we also, on another level, feel good, or, if you’re Aristotle, experience catharsis, a purging of the bad emotions, once you’re through.

Or, as Frank Kermode understood it, narrative endings are not only dress rehearsals for death, but they are inextricably linked to our apocalyptic sensibilities: “Fictions,” Kermode says, “whose ends are consonant with origins satisfy our needs.”  The conventions of story itself dictate a beginning and an ending; for every “Once upon a time,” a “Happily ever after.”  He goes on to suggest that “one has to think of an ordered series of events which ends, not in a great New Year, but in a final Sabbath.”  Or a Black Sabbath, if you’re not feeling particularly rapturous.  Kermode relates the endings of all stories to the endings of all things: narrative endings as death, but also death as a narrative ending, for “the End is a fact of life and a fact of the imagination.”

But video games seem not to provide Benjamin’s comfort, Aristotle’s catharsis, or Kermode’s closure at all.  There is no Once Upon a Time or Happily Ever After, only the grim, relentless Middle—just like our own real lives.  As I wrote in the other blog, main character Link looks and seems a lot like Peter Pan.  But it’s not just the pointy ears and pointy weapons, the green clothes, or the shock of hair.  Like all video game characters, and like Peter Pan, Link is, for all intents and purposes, immortal and eternally youthful.  You could make the same case, I guess, for all fictional characters—that they revert to being alive and young when you start the book or movie again.  But that’s symbolic.  Thanks to endless “lives”—the word gamers use—and concomitant reincarnation (a word no one uses) with each reset or replay, Link lives, and dies, again and again and again.  As a father, I find no sentence weighs heavier on my heart than when one of the boys tells me, when his game time is over, “I’ll just play until I die.”  He’d like that, I suppose.  The shift to first person—“I” die, not “Link dies” or even “my game ends”—makes clear that the games are about defying death, but they also focus relentlessly, discordantly, on death itself.

You thought you had it rough?

But if Link cannot ever die, if there is no final level—since the thing resets ad infinitum—no sense of an ending, then it feels like there is also no point.  The Onion, as always, gets it hilariously right: “Video-Game Character Wondering Why Heartless God Always Chooses ‘Continue’”:  “ORANGEBURG, SC–Solid Snake, tactical-espionage expert and star of PlayStation’s ‘Metal Gear Solid,’ questioned the nature of the universe Monday when, moments after his 11th death in two hours, a cruel God forced him to ‘Continue’ his earthly toil and suffering.”  In the end, “God,” of course, is revealed to be “Orangeburg 11-year-old Brandon MacElwee,” who “offered no comment on His greater plan for Snake, saying He was ‘too busy trying to get to the part with the knife-throwing Russian girl.’” 

But players realize that they are not gods, or God, and that the never-ending levels and never-ending deaths in video games provide a different, cautionary lesson than those in stories: the ironic moral that there is more to life than acquiring points and money, more to existence than merely getting to the next level.  And I said this to the boys, concluding that “this is why I don’t let you play the hard parts for each other.  All you’re doing is speeding up the end, and it’s the playing itself that’s supposed to be the fun part.”

With that, my ten-year old looked at me, eyes bright and wide, and said, “I understand now.”

Time: It looked like I was gonna finish in 50 minutes, but then I decided I wanted to find the Benjamin and Kermode quotes that you probably didn’t read anyway, which took me overtime to 75 minutes.  I’ll finish faster the next time I play.


Don DeLillo is Not Dead

Also: Not Dead Yet

While it seems impossible to believe, some people don’t know who Don DeLillo is; or, as I say to students, he’s the most famous author they’ve never heard of.[i]   And many of those people, including my non-academic acquaintances—yes, I have some—presume that Don DeLillo is dead.  They’re surprised that he’s not.

Their assumption raises a few interesting problems for teachers and scholars of living authors.  The first is the notion that the only authors worth studying must come from a previous era, a line of reasoning that English Departments discarded decades ago but that the general public may not have.  Not that the general public doesn’t read, or even prefer, living authors; the assumption is that living authors don’t produce Literature, only books, and ideally bestsellers.  We can’t, in this line of thinking, really know an author’s place, value, or contribution in his or her own lifetime, as though authorship were akin to sainthood.

The second is what I think of as the Back to School Problem.  If you’ve seen the movie (1986), Rodney Dangerfield (who is, in fact, now dead) plays his usual self-deprecating schlub.  In the words of IMDB’s tagline, “To help his discouraged son get through college, a funloving and obnoxious rich businessman decides to enter the school as a student himself.”   When Dangerfield’s character needs to write a paper on the novels of Kurt Vonnegut (who is also now dead), he hires Vonnegut himself to do the work.  The cameo alone is funny, but the punchline is that Dangerfield fails the paper, not just because the professor knows right away that someone else wrote it, but also because “whoever did write [this paper] doesn’t know the first thing about Kurt Vonnegut.” (Warning: offensive language)

The joke, as usual I suppose, is on the professor, who, we understand through dramatic irony, only thinks she is an authority on Vonnegut’s work.  Or worse, she (unknowingly) believes that she knows Vonnegut better than he knows himself.  Despite decades of reader-response theory and deconstruction, despite cases where authors themselves have claimed not to have understood what they wrote at the time, despite authors admitting only a hazy notion of how their work would be interpreted, in the popular mind, the author is still the best, and maybe only, authority on his or her work.  Shakespeare can’t tell you that your, say, Lacanian readings of Hamlet weren’t what he intended.  Well, how could they have been?  And contemporary critics understand that intentions are not the only point—if not beside the point entirely.  But Don DeLillo can still tell you that your, say, ecocritical reading of White Noise isn’t what he intended.  Or, as he has suggested in interviews, that he never reads critical or literary theory.  And, unlike Back to School, it would not be a joke.  If students worry that they’re not entitled to form opinions on Shakespeare because his work is centuries old, endlessly discussed, and firmly canonical, they can feel equally constrained by the living author, because they can still be proven wrong, if the author only says so.

Which takes me to my final problem.  DeLillo, unlike, say, JD Salinger (who died only recently), is not only alive but still prolific.  The last decade alone has produced The Body Artist, Cosmopolis, Falling Man, Point Omega, and the new collection of short stories, compiled from 1979-2011, The Angel Esmeralda: Nine Stories.  This work could be the envy of many authors—consider that in about the same time, Jonathan Franzen produced a single novel, Freedom; in only a little less time, Jeffrey Eugenides wrote The Marriage Plot.[ii]  So in addition to what I see as the indisputably Great Novels—White Noise, Libra, and Underworld—such an output is astonishing.

And these works can’t help but change how I read DeLillo now.  Point Omega is almost the anti-Underworld (Overworld?), so sparse and imagistic as to be nearly inscrutable.  If Underworld overwhelms readers, Point Omega underwhelms them, by design.  Libra is often read as speculative fiction, a conspiracy-minded counter-narrative to the prevailing Kennedy history.  But rather than taking on what could have been a similar approach to 9/11, DeLillo completely eschews paranoia in Falling Man, surrendering his anointment as chief shaman of the paranoid school of literature.  And The Angel Esmeralda, for me, provides the greatest pause.  Perhaps I shouldn’t admit that I had never read the first story, “Creation,” published in 1979, but reading it now reveals a writer interested in mixing breezy eroticism into his usual—and now, arguably since White Noise, semi-suspended—absurdist, black humor.

Overall, what the collection—and the past decade’s work—demonstrates is an author who is unrepentantly alive, in all senses of the word: animated, energetic, relevant, and changing.  It gives the reader a lot to live up to, and much to look forward to as well.

Time: OK, I have to admit that I forgot to pay attention to the clock today. I know, I know, that’s my whole schtick.  Maybe 60 minutes? Probably a little over.  Not too much, though.


[i] Chances are that this isn’t even true, since many have not heard of Joyce or Faulkner or even Austen, but I like the line.

[ii] Not that these aren’t great achievements, I hasten to add, since Franzen and Eugenides are alive and likely to get annoyed at such comparisons.
