
Transference

 

DVD-Video, bottom side

Two years after buying a recordable DVD player, one year after the threats from my wife got serious, I begin transferring the home movies of my children from VHS tapes to DVDs.  I know I’m still at least one platform behind, but any digital form is better than one that can be destroyed by light, air, and time.

Because they’re analogue, I need to play them in real time to copy them.  And as I do, I watch them, and I realize that the last time I watched them was the last time I transferred them, from camcorder cassettes to VHS.  Their entire existence rests on converting them from one obsolete medium to the next.  

As I watch, I see my young self and young wife, recent parents and, far more seriously, recent homebuyers.  I see my oldest son, now a teenager, as a baby, then a toddler, then an older brother to his new baby brother.  And I think, Ah, so young, so cute.  The kids, too.  The tapes from twelve to eight years ago show a new family in a small, snowbound Minnesota house, each of us swaddled and layered in Fleet Farm sweat clothes, the new baby in so many layers that he’s a Midwest Matryoshka.  All laughing and smiling, just joy, spinning, dancing.  Nine years, four houses, and three states elapse in two hours, and our daughter, now five, is born. 

Yet looking at these people on TV, I realize that I don’t remember the times this way. What I remember is the stress and mess, the lack of money, the ever-present question: what’s going to happen?  Not unlike now, but then even more so.   I never liked recording the movies, never feigned love or expertise manning the camera.  I always felt that parents who spent their time with a lens in front of their eyes were blocking their view of their children, already anticipating the minute when that very moment would turn to nostalgia: Ah, look at us. We were so happy fifteen minutes ago. 

But it has not been fifteen minutes. It has been fifteen years, and I can see not just how fresh but how fragile the moments were. I’m glad I didn’t film too much, the Warren Report of our lives, the volumes Proust would have filmed if he’d lived in the Midwest and owned a camera.  But I’m grateful that I have something, a few compressed flashes beyond the faded reel of my own mottled memory, and that these videos are more luminous and numinous than my mental VHS’s translucent haze.  I wish that I could transfer the images in my head to a newer platform as well, and as the last tape cuts to static, I close my eyes and imagine how today will look to the future me of the next transference, how I’ll look at the deteriorating self that I now see entering middle age, and instead I marvel at how young and thin, how thick the hair, how joyous the moments, since I have recorded proof that they will not last.

 

Time: less than an hour. Lost track.

This was published in the 2013 issue of Maryville University’s literary magazine, Magnolia.

Hourman update: despite two posts this month, still on hiatus.  Thanks for hanging in there.

–Jesse Kavadlo

 

 


We Have Entered the Era of Un-

In culture, literature, and theory, the 1960s marked the beginning of postmodernism.  And quickly the prefix post- became the operative way of understanding the world: post-war, post-structuralism, post-colonialism, post-industrialism; then, post-human, post-Boomer, and post-punk; more recently, post-millennial and post-apocalyptic; and for at least a little while in 2008, post-partisan and post-racial.  (Many a postdoc has been devoted to developing post-anything.)  Post- became more than a prefix—it became a worldview, an epistemological category.

But what, students in my class on postmodern literature reasonably asked, can possibly come after postmodernism, or post- anything? More post. Post-postmodernism. [Shudder]. Post- is the prefix that devours itself, since it is always after, belated, still waiting, and deferred. Nothing can come after post-.

Nothing except, with apologies to Existentialism, a new kind of nothing.

Enter: Un-.

Un-, like post-, is not a word. Unlike other prefixes, however, like pre- or post-, or re- or un-’s near-relative, under-, un- does not describe, affix in time, suggest repetition, or, like mis- or mal-, even suggest that something is wrong.  Unlike with-, dis-, de-, counter-, anti-, or even the powerful non-, un- does not suggest opposition, working against.  Un- suggests more than reversal or opposite: it is negation, disappearance, taking out of existence.  And if post- described the world after about 1945, Un- describes the world from 2000, or maybe 2001, to the present. We are living in the era of Un-.

Now, I realize that lots of words began with Un- before 2000.  I used “unlike” twice in the last paragraph alone. But I used it as a preposition, “dissimilar from.”  On Facebook, unlike is a verb: if you click Like, and then decide that you don’t like that thing anymore, you can click Unlike and it will erase your Like. Since Facebook does not have a Dislike button, Unlike is as close as people can get.

But Unlike is as different from Dislike as unable to disable, unaffected to disaffected, unarranged to disarrange, unfortunate to disfortunate (which is sort of a word).  Which is to say, very different.  Both suggest opposition, but dis- implies an active opposition, expending energy to reverse.  Un- feels passive, a kind of vanishing—or worse, the suggestion that the thing never was in the first place.  When we Unfriend on Facebook, we do something we cannot do in real life or face to face, which is presumably why the word had to be recently invented. We don’t Unfriend corporeal people.  We just—what, exactly?  Stop being friends? Spend less time together? Drift apart? Or something stronger—not a drift but a rift.  A fight, a falling out.  We’re not on speaking terms anymore.  But not Unfriend.  We can only Unfollow online, on Facebook or Twitter.  We can’t Unfollow in person.  Unfriend and Unfollow seem etymologically and epistemologically close to Untouchable, with the implications of prohibition, exclusion, disappearance. Unclean.

Like many people who spend time at their keyboard, I have become reliant on Delete, on Backspace, on Undo.  When I knock down a glass and wish it would float back in a startling cinematic backwind, or misplace my book and want it to reappear, or say something that I want to take back, I can picture Ctrl Z clearly in my mind’s eye.  But it does not Undo.   Glasses do not unbreak; books are not unlost but rather must actively be found (without Ctrl F, either). Words that are unspoken were never spoken, not spoken and stricken.  We say, I take it back.  But the words cannot be unsaid.  Judges instruct juries to ignore testimony, but lawyers know that jurors cannot unhear. Judges cannot unstruct.  Traumatized viewers cannot unsee.

Do not try this in real life

And so Un- fails at complete erasure.  Like a palimpsest, Un- can’t help but leave traces of its former self behind.  The close reader can see what used to be there, the residue of virtual Friendship, the electronically unsettled path left behind after one has Followed, or been Followed.  And perhaps this failure is for the best.  The only thing more powerful than Un-’s fever dream of retroactive disappearance is that the wish cannot come true.  If anything, the electronic world that birthed the fantasy of Undo is the same one that never lets us scrub our online prints away.

Time: 55 minutes

P.S. Please Like and Follow this blog.


A Cultural History of Spider-Man’s Web Shooters

Just point ‘n’ shoot!

As much as I love superheroes, I can’t say that the new Amazing Spider-Man movie needs to exist.  First, as long as it was being remade, time to drop the hyphen—just “Spiderman.”  It’s cleaner.  Second, the movie reminded me of seeing a high school play: “Aw!  So cute! They’re doing Spider-Man!” When Sally Field showed up as Aunt May, I thought, “Aw! There’s Sally Field pretending to be Aunt May!”  And then when Martin Sheen showed up as Uncle Ben, I thought, “Aw! There’s Martin Sheen! I love that guy!” before quickly remembering that he’s a dead man walking, to be gunned down before the second act ended so Peter could learn his lesson about power and responsibility.  This must have been how medieval audiences reacted to seeing Jesus Christ show up in the passion plays: “I can’t believe he’s gonna get killed AGAIN.”

But crucially, the movie revises, updates, and, for many fans, corrects what turned out to be a huge comic controversy of the 2002 Spider-Man.

Namely, the mechanical, wrist-worn webshooters (single word, no hyphen) are back. The organic vs. factory debate deepens.

This is a BFD.  When Spider-Man (hyphen for historical accuracy) debuted in 1962, bitten by a radioactive spider, proportionate strength and speed etc etc etc, he invented the synthetic webbing and pressure-sensitive webshooters himself:

 

Peter Parker as misfit, scientist, and genius is crucial to the early stories.  It’s not enough to get spider powers.  Much of his early success as a hero stems from the use of his pre-bite intellect and his own diligence and hard work, as opposed to mere accident: “So they laughed at me for being a bookworm, eh? Well, only a science major could have created a device like this!” And so his identification with his audience of bookworms is complete.  Spider-Man, as Stan Lee introduced him, in his usual overwrought, avuncular, carnival-barker voice, is a hero like… You!  So he needs to have something comic readers can pride themselves on having; Spider-Man is about smarts and perseverance, not just a lab accident. Later comics elaborated upon the original idea:

But while 1962 Peter Parker, as a non-sidekick, picked-on teen, was unlike any of the other superheroes of that time—more like, of course, a stereotypical comics reader—he was also very much like most of the other 1960s heroes who believed in Better Living Through Chemistry.  Sputnik had been launched a few years earlier, the Space Race was on, kids began working with their chemistry sets in their rooms, and comics followed, whether to embrace the post-war American dream or just because the hero/scientist opened up new character and narrative possibilities.  Until that point, THE SCIENTISTS HAD ALL BEEN BAD GUYS!  Suddenly, Professor X (who had to open his own school to receive tenure, apparently), bald and in a wheelchair just like Superman’s first supervillain, the Ultra-Humanite (hyphen?), and looking like Lex Luthor, was leading the X-Men! Reed Richards took the Fantastic Four into space, then into crime-fighting! Bruce Banner started off as a nuclear gamma physicist before going green as Hulk. Over at DC, the Flash’s Barry Allen—usually thought of as ushering in the Silver Age—was reimagined as a police scientist; the new Green Lantern was test pilot/astronaut proxy Hal Jordan, whose power ring (two words) got a science fiction makeover from the previous incarnation’s magic origin. Spider-Man’s invention put him at the center of the new wave of super science police.

Fast-forward forty years to the first big film, though, and a changed world.  The idea that teenaged Peter Parker could invent the webs himself suddenly didn’t seem realistic.  The dream that the brilliant kid in his bedroom could do what millions of dollars in government and industrial research and development couldn’t? Ridiculous.  Just as important, the early 2000s saw a sudden upswing of anti-technology cultural forces—technophobia brought to the surface by Y2K, a wave of anti-factory farming, the Fight Club-style anger at the techno-corporate world, left-wing distrust of surveillance and electronic voting machines, and right-wing fears of a technologically driven New World Order. Stan Lee and Steve Ditko had devoted all of two panels for Peter to invent the webshooters. Could a multimillion-dollar movie really be that casual and still be credible?  So the webs became a part of Spiderman’s new powers, his body generating them organically, leaving the film open to hundreds of snarky commentators noting that spiders don’t fashion webs from, um, that part of their anatomy. Taken together, we see a nice example of Samuel Taylor Coleridge’s famous dictum about suspension of disbelief: audiences could suspend disbelief long enough to imagine that a bite from a radioactive genetically altered[i] spider could spontaneously generate natural webshooters, but not that Peter Parker could have invented the ’shooters himself—broke, without a lab, and alone in his Queens bedroom.  The dream of technological progress was over.

My hands are making what?

But only for a decade. Today, Andrew Garfield, playing Tobey Maguire playing Peter Parker, indeed invents his webshooters again, like Kennedy’s in the White House and it’s 1962.  Yet unlike Classic Peter, he doesn’t quite invent them by himself. While it’s all a little hazy (damn you, montage!), what Nu Peter seems to do is closer to what contemporary techies get.  Instead of opening his chemistry set, he draws from preexisting technologies—some prefab Oscorp tensile-strength web fluid here, some, um, other mechanical movie-looking parts and gears and awesome LEDs and stuff that looks like machinery there.  2002 was too soon to imagine the day when every kid would not just own a smart phone—as Peter plays games on his phone to kill time while waiting for the Lizard to emerge in the sewer—but that more than a few teens would also be savvy enough to jailbreak them, invent their own apps, and create original graphic art, digital music, and code, alone in their rooms.  The basement chemistry sets of the early 1960s have given way to the new tech mythos of Steve Jobs in his garage, not inventing the computer but rather remaking and improving it based on previous iterations of the same ideas that Xerox and IBM used but somehow didn’t really get.  C. 2012 Peter’s genius isn’t that he invents the webbing and webshooter a la 1962, but rather that he recognizes that the technology for them already exists, and he makes them work together.   Only a science major post-millennial could have created a device like this.  We love technology again, but in a remix, mashup, sampling, collage kinda way.

So it’s fitting that, in the Tobey Maguire version, Natural-webbing Spidey fights techno-corporate Green Goblin/Norman Osborn, who relies on the worst of tech R&D: metal mask and body armor, disintegration grenades, and deadly projectiles; in Spider-Man 2, Doctor Octopus recalls the 1940s and 50s Scientist Gone Wrong, becoming a crazed metal-armed cyborg, while again Natural-webbing Spidey has to set him right and destroy the dangerous incursion of technology into the human realm. Lots of other fantasy movies of the early 2000s shared this pro-natural, anti-tech spirit: The Lord of the Rings pits the sylvan elves and pastoral hobbits against Saruman’s metal hammers, metal towers, bio-engineered monsters, and willful destruction of trees.  In those Harry Potter movies, technology is shunted aside entirely, unable to coexist with magic at all.  In Phantom Menace, those stupid Jar Jar-looking aliens use natural weapons… ah, I can’t even continue; I hate that movie so much.[ii]

Yes, the Lizard is a bit of a retread of Doc Ock, in that he’s a scientist whose attempt to do good results in the potential destruction of New York again, his mind altered by a biotech transformation.  But when Dr. Connors emerges transformed into the Lizard, he sheds his lab coat and his humanity, symbolically and visually the worst kind of natural—slimy, scaly, swampy, primitive, lizard-brained.  New Tech Spidey is web savvy (har har) and smart, using his—and Gwen Stacy’s—head to configure a quickie technological solution to New York City’s new alligators-in-the-sewer problem.  OK, technology may have created the problem, but, unlike earlier incarnations of superheroism, technology can also solve it. Call it Web 2.0.

So when the techno-pendulum swings back, expect to see some other new version of the webshooters for the inevitable 2022 reboot.  And when we do, will someone please get Uncle Ben a bullet-proof vest this time?

Or the cynical explanation: you can’t sell organic webshooter toys.

Time: 90 minutes. Over, but this piece is pretty long, and I even spent at least 10 minutes cutting tangents. Plus I managed not to make any Marc Webb (!!!) puns.  It’s also funny that my conclusion—2000s Spider-Tobey is natural and fights techno-bad guys, while 2012 Spider-Garfield is technological  and fights a natural bad guy—came to me in my sleep two nights ago. Call me 24-Hour Man. 


[i] The radioactivity concomitant with the early ‘60s Cold War was replaced by new wishes and fears of genetic modification for the 2000s. But that, Dear Reader, is the subject for another exciting post! Excelsior!

[ii] Irony alert: these seemingly anti-technology movies could not have existed without recent advances in digital technology.


Water and Fire: Metaphors I Blog By

Contrary to Marc Prensky’s popular binary, I don’t see myself as a digital native, or a digital immigrant.  Rather, I am a reluctant, reformed Luddite, washed gasping onto your shining silicon shores of technology because the formerly lush pre-technology terrain has ebbed and eroded beneath my feet.  So I used a laptop as a life-preserver and floated across the digital divide, trying not to drown.  No, I am no digital immigrant, one who came here by choice following the dream of electric sheep and your Statue of Technology’s gleaming beacon, a flickering iPod held aloft.

I am a digital refugee. 

I don’t speak the language. 

I plead digital asylum. 

But now that I’m here, I’ve come to discover that, just as there are activities that thrive in the face to face world—or, worse, “F2f,” the shorthand for what used to be called interacting, talking, or being human—there may also be opportunities that technology creates that are not pale imitations of personal contact or just more expensive versions of previous, now obsolete technologies like paper, paint, or vinyl.  Rather, there may be whole new avenues to travel, channels to explore, waters to drink. 

Two weeks ago I wrote about the things I learned after six months of blogging, focusing on how it felt to get page views and to view how readers viewed me.  And that was interesting and enlightening for me in a kind of techno-sociological way, my time-traveler’s view of my strange new home in the future.  So, on the surface, the least that blogging has helped me see is the way I can now easily and frequently incorporate images, video, and links into posts.  It’s plenty fun and entertaining for me (and, I hope, others), which I do not denigrate.

But it has also helped me to learn more about the creative process, something I was very interested in well before six months ago.  I started this project with the hourman concept—one topic covered in sixty minutes of writing, and, as I’ve said, I’ve mostly stuck with it.  But what I haven’t discussed is what I’ve done with that writing time.  It has occasionally been linear, the way students are forced to write essay exams in school, or the Alice in Wonderland approach: “Begin at the beginning…  and go on till you come to the end: then stop.”  But mostly, while I may spend the hour composing, I spend the day, or sometimes week before, composting, to borrow the metaphor of writer’s writer Natalie Goldberg.  Before I even sit down, and before I start the clock, I already have my topic, my angle, even if it’s vague, and preferably, my way out.  I’ve always believed in the importance of endings—one of the things I try to emphasize to my writing students is that you can’t tack on a conclusion.  Perfunctory, fake conclusions sound like this: “In conclusion, here’s what I just said.”  But now, I take them even more seriously.  Like a good war, a good piece of writing needs to plan its exit strategy before it even begins.  

But I also now build the link and image searches into my writing process as well, so that I’m not simply writing for an hour, then looking for apt and entertaining images or videos, or deciding in the editing and posting process which terms or ideas would benefit from or be bolstered by a missing link.  Instead, I Google as I go (possibly sung to “Whistle While You Work”?), and often enough, something that I see online gets me rethinking what I’m working on right then and there.  Blogging allows for a less hermetically sealed approach to writing: not the frustrated, isolated Artist on a mountaintop, quill and parchment in hand, awaiting divine inspiration—nothing that I’ve written would merit that kind of pretension anyway. But rather, writing online, using online tools, for online readers, has challenged the digital native/immigrant/refugee metaphor’s very foundation.  John Donne knew that no man is an island.  But every link, piece of writing, image, reader, and writer can become part of a vast digital island chain, a sweeping archipelago connected by legions of lightspeed Google ferries.

In addition to challenging the pseudo-Romantic cult of the lone writer, blogging has also challenged my romantic idea of creativity. Too often, we imagine writing can be blocked, as though it were a physical and terrestrial thing.  But if creativity is water, it flows and resists blockage.  Yet water may not be the best metaphor now, since water can indeed be dammed.    And while people do refer to writer’s block when they can’t produce, I don’t think that blockage is really the best metaphor for creativity or lack thereof either.  Nonwriters don’t get blocked; only writers do.  So what writers mean is that their creative process is like agriculture: it is capable of being grown, harvested, and exhausted.  We can overfarm and deplete our imaginary crops or clearcut our creative forests, leaving a fallow period of, we hope, restoration and germination.  We hope the ideas will come back, but we never know.  So when I committed to one blog post per week, I wondered how soon I might, shifting to another familiar metaphor again, burn out.  But instead I’ve come to think of the writer’s ideas as fire.  Yes, Plato, Prometheus, and Jesus beat me to this metaphor, but I think it’s a crucial one: rather than thinking of ideas as blocked vs. flowing, or developing vs. producing, we can think of them as a flame.  When we take from the fire, it does not get any smaller.  With the right conditions—air, kindling—it can perpetuate itself indefinitely, producing and reproducing at any rate.  You can’t put out a fire by taking from it; rather, that’s how you make it grow.  Creativity can operate in this way, too.  It does not need to burn out at all.

Yet even the fire metaphor falls short in describing what I’ve learned.  The commitment I’ve made to writing this blog—a commitment that has no obvious benefits, no product to push, no money to make, no political agenda, and no foreseeable purpose at all—is a reminder of the cliché about life being about the journey and not the destination. 

A little trite, though, so let me update it: life is about the journal and not the desperation.   

Time: just under an hour.  And I didn’t have this ending planned at all—it came as I wrote it. So much for what I’ve learned.

 


Five Things I’ve Learned from Blogging

 

I published my first blog entry on December 4, 2011, or a little over six months ago. I felt like I needed a personal outlet for writing, since I spent the majority of my writing time typing comments to students on their writing and our class discussion boards. The only other writing I did was work email and slow-paced academic research and writing, at the rate of about one 25-page-ish essay per semester.  Facebook one-liners weren’t enough, and I felt like I had Things to Say.

But when and how could I do it? I decided to set the one-hour rule to keep the blog from taking over my time.  I haven’t always stuck with it—in fact, more than half of these entries went at least a little over an hour, and that’s not counting some of the time (more on that next time).  But since then, I’ve written 32 entries, or about one per week, most of which were at least 1100 words, on a wider variety of subjects than I’d planned.  I even liked some of what I wrote.

As of now, I have about 20,000 page views: about 15,000 through my WordPress site, and another 5,000 or so that I’ve gotten from cross-posting everything on Open Salon.  I didn’t have any idea how many views I’d get when I began, but I dare say that 20,000 is way more than I imagined for six months. It’s no Charlie Bit My Finger or Harry Potter Puppet Pals or the singing Gummy Bear, with their hundreds of millions of views, but then again I made people read.

Ignore this picture. It’s just search engine bait.

For this entry, then, I want to share some of what I learned about blogging, the internet, and the numbers behind the scenes.

1) Facebook works. I’ve had almost 2,000 views from Facebook. In truth, 2,000 is closer to the number I imagined I’d have by now—that is, from friends and friends of friends, not strangers.

2) Yet I got most of my views from strangers, through search engines.  I had not been thinking about search engines, yet they provided over 9,000 referrals.

3) Most of these views were from Google Images. The vast majority, at about 8,000. The funny thing is, I only originally included images because I could. It would be fun, like using a toy, to find and include images and, shortly after, captions, which turned out to be one of my favorite parts of blogging.  The images were what separated the blog posts from writing in a black marbled composition notebook, as I did during my teens and early twenties.

4) But it’s not like a journal, because people can see you.  I was shocked that my piece about Metal Evolution was even noticed by—let alone linked to—Banger Films’ social media. That day gave me my highest number of single-day views, 511.

I was even more surprised when last month, the singer and bassist from The Arrows, the group who originally wrote I Love Rock n Roll and whom I compared unfavorably with Joan Jett, read the post and wrote me an email! Here it is exactly as it appeared, including the weird margins:

jk-

 I found your personal attack on me amusing,

(in your Jett – tongue in sphincter sycophant piece)

 especially after looking at your photo.

  

Since your attack on me was personal

I will respond accordingly.

 

It doesn’t matter what you think.

When you look in the mirror you

still have to see that face of yours.

 

Fact. I inspired Joan Jett in 1976 when she

saw me perform the song on TV and that’s

far more important to me than impressing

you, who will never be anything or do anything

of import except criticize people who have

accomplished far more than you ever will.

 

Good luck,

Alan M.

I was not going to respond, because I could not think of a reason to.  But then I asked myself, what would be more interesting, responding or not responding? And that became my reason:

Alan, if I may,

I’m just flattered that you read and responded to the piece. It was absolutely not meant personally. I never considered the possibility that anyone I wrote about would ever see it.  I have nothing but respect for someone who has written such a great and lasting rock & roll song.

Best,

Jesse  

I have not heard back, but then again I didn’t expect to become pen pals. I still stand by what I wrote and am still shocked to have gotten a message.  Elvis, also criticized in the same post, still has no comment.

5) Yes, people can see me. But I can see people, too.  OK, not really. But in addition to seeing how people found the blog—again, usually via a specific search engine—I can also see people’s search engine terms.  The ones with the most views correspond directly to the likely image search—Where the wild things are (over 700 views) and a lot of permutations of Peter Pan (peter pan, piter pan, peter fan, peter pan disney, peter pan cartoon, peter pen, peter pan characters, pan peter, and more).

Hey, if it worked the first time…

 It’s nice to see that at least a few people probably found exactly what they were looking for in one of my posts: searchers for “conventions of time travel movies,” “death cartoon on regular show,” “protozombies,” “finn and link,” “symbolism in Mad Men,” “is don delillo alive or dead” and “hunger games hunger artist” were probably surprised that someone actually wrote about something like these topics.  And a dozen or so people were actually looking for this blog (hourman blog, the hourman blog, jesse kavadlo, jessekavadlo wordpress)!

But a few people probably did not find what they were looking for—even though ALL these searches registered more than one view, so they must have found or liked something. Here are a few other search terms that somehow led to views:

80’s metal chicks pin-ups  (must have been very disappointed), kava addiction (taken here because of my last name?), i’d rather enter the hunger games than go to school on Mondays (?), a normal person’s reaction to sparkly vampires/jack sparrow (??), you mad i do what i want loki t shirt (???), krampus sex (I don’t want to know), miss piggy in bondage (you thought krampus sex was bad).

And lots more.

 

Vixen. Too little, too late for that guy looking for 80s heavy metal chicks, but here it is.

Since WordPress added the feature late last February, I have also been overwhelmed by seeing each view’s country of origin.  Not only have Metal Evolution and the mean guy from The Arrows read my writing, but so have people in 128 countries, including Gibraltar, Mongolia, Korea, and 225 views from the Netherlands.  I’m huge in the Netherlands!

Thanks!

I nether saw that coming six months ago. Thanks to everyone who’s been reading.  I hope the non-bloggers have learned something, and bloggers may recognize some of what makes blogging so interesting.

 Next post: what I’ve learned about writing and the creative process.

Time: one hour. I set out to write a Ten Things list but ran out of time at five. Typical.


Live Music; or, the Song in the Age of Digital Reproduction, an Essay in Eight Tracks

 

Track 1: This is me, around 1991. I still had long hair in my dreams for years after I cut it.

Track 2: For eight years, ages 15 to 23, all I could imagine was music, being a Famous Rock Star. It’s hard to say how many hours a day or days a week I practiced, because it was never work.  Even then, I loved that English uses the word “play” for an instrument, because that’s what I felt I was doing. But it was as much as I could manage: a few hours a day, not including at least six hours a week of band practice, not including at least two shows a month, not including going to other bands’ gigs twice a week.  I held down a job (record store) and earned easy A’s in school, but I lived music.

Track 3: And then, suddenly, I didn’t.  I spent the next decade learning to be a reader, writer, teacher, husband, and father.  For years, I didn’t even have a guitar. No one knew who I used to be, who, in some sense, I really was.  Music was the secret identity I left behind.  It was too hard to be everything.  Like the mopey tween calendar montage in Twilight: Breaking Dawn, or the mopey tween sun rising and falling montage in Beastly (I need to lay off the mopey tween monsters), time passed.

And as time was passing, something interesting happened, almost behind my back: music went digital.

Track 4: I am no vinyl purist. I’ve always preferred electric to acoustic. Unlike the fans who booed electric Dylan in 1965, if my favorite heavy rock band showed up with acoustic guitars, I’d boo them. (I’m looking at you, Nirvana.) Thank God the unplugged fad of the 90s is over. I ain’t gonna work on Maggie’s Farm no more, either.

Yet I can see why the folkies didn’t feel that electric music was authentic. The electric guitar puts more steps between the player’s fingers and the listener’s ear.  Not just the vibrations of the string, but the pickup, the signal, the wire, the amplification, and the distortion—sweet, dirty, deliberate distortion—of the signal. The electric sound of the guitar’s amplification is then further captured electronically by microphones, processed even further into the analogue of reel to reel tape, then mastered onto vinyl.  So many steps in the process of producing and reproducing the sound, each step, for the purist, one further away from the original.  Not the reel but the real.

But going electric and going digital are not the same. Something about listening to all music in MP3 format seems different, the final step that remasters once more, finally and irrevocably converting the analogue sound into binary computer code, Dylan’s plaintive wail (is there any other kind?) and guitarist Mike Bloomfield’s rich squeals into a cold series of ones and zeros, compressed, then uncompressed.  Look, overall, I love the iPod, love having 6332 songs made portable, love the slightly junky, slightly tinny, slightly robotic tone, love the intrusive insertion of the earbuds jacked directly into your brain, rather than warmly, maternally enveloping  your ears like the admittedly superior hi-fi earmuffs of yesteryear. (Yes, I know you can still get them. No, I never see anyone wearing them.) But I don’t mistake what I’m hearing.  Not music exactly, but an excellent simulation: “I’m not the song, but I play one on an iPod.”

Track 5: Walter Benjamin, from “The Work of Art in the Age of Mechanical Reproduction” (1936): “That which withers in the age of mechanical reproduction is the aura of the work of art.…  By making many reproductions it [the technique of reproduction] substitutes a plurality of copies for a unique existence.”   The part that messes with people today is that Benjamin, a Marxist when the word still meant something, saw this AS A GOOD THING. The destruction of the aura could only benefit the masses.  With the artwork’s aura destroyed, the work’s hegemonic power, not artistic power, its elevated class and economic status, would disappear, since the same picture would be available to all.  Technology, and ultimately “the capitalistic mode of production,” could “create conditions which would make it possible to abolish capitalism itself.”   Yet that is not what has happened to art in the time since Benjamin wrote his essay.  Instead, the more frequently a work of art is reproduced, the more expensive and more coveted the original becomes.  Look at yesterday’s New York Times article on the subject of rich, famous art—including Munch’s Scream, mentioned in last week’s entry, now likely to “fetch” (Times’ word choice)  $150-200 MILLION.  That’s some puppy.  But music is an altogether different animal. It wasn’t records or tapes that finally destroyed music’s aura, but digital reproduction.  Music, in every sense of the term, now is free.  

 Bad joke. Sorry.

SIDE B

Track 6: Jean Baudrillard, from Simulacra and Simulation:

“Such would be the successive phases of the image:

it is the reflection of a profound reality;

[me: i.e.,  acoustic guitar string]

it masks and denatures a profound reality;

[electric guitar string –> pickup –> amplifier]

it masks the absence of a profound reality;

[electric guitar string  –> pickup –> amplifier –> analogue recording]

it has no relation to any reality whatsoever; it is its own pure simulacrum.”

[electric guitar string –> pickup –> amplifier –> analogue recording –> converted to digital recording]

Track 7: But I started to listen to music again, and play.  A few years of noodling, riffing, realizing that the hours of play had hardened into neural muscle memory and that there was no remediation needed.  My first real foray back into playing came when I bought a new amplifier last year, a Fender G DEC 3. Not to get all ad-speak with Walter Benjamin in the room, but it’s a clever idea: build MP3 backing tracks right into the amp and loop them to simulate playing with musicians. 

As an actual amplifier by itself it doesn’t sound that great.  In fact, it sounds exactly like a digital simulation of an electric guitar amplifier. But with the simulated tracks, the simulated sound is perfect. And as recorded by my digital camera, and uploaded onto my laptop, and linked to the world wide intermesh, and fed through your speakers, who can tell?

Electric guitar string –> pickup –> digital amplifier –> digital recording –> my laptop –> internet –> your laptop

  But because it’s digital, we could reproduce it a thousand times, a million more times, and it would sound just like the original.  Benjamin missed his prediction for art, but foresaw the future of music.

Track 8: Then, not long after I got the amp, I started playing again, for real, with actual people.  And it’s not like playing with simulated tracks at all.  I could hardly eat before or after each rehearsal, and when we were done I left wracked with stomach pain. I thought it was the stress of singing after a long hiatus, the churn of old pipes and machinery, or even nerves.

But later, I realized I recognized and remembered that pain.

It was called excitement.

Same guy, same guitar, one haircut, 21 years later

Jesse Kavadlo

Time: Over again, which is becoming the new norm. Eighty minutes, not including making the amp video just for this occasion. Time to go back in time to 60 minutes.


Game Over: When Bad Things Happen to Good Videogame Characters

Death by a thousand pixels

Two nights ago, I noticed that my boys, ages 10 and 13, looked—there is no other word for it—depressed.  Two weeks ago, I wrote about their obsession with/addiction to Legend of Zelda: Skyward Sword, including this: “for all the seeming fantasy, what the game—most games?—embodies are the very same strictures surrounding American school and work life.  Playing the game must be fun, too, I guess, but the real joy seems to be advancing to the next level—only to work toward surpassing that one, ad infinitum.”  But they didn’t look happy now.  My younger son should have been especially happy, because my older son had helped him beat a tough part, much to my chagrin—I’ve told them repeatedly that they should not play each other’s turns or games, since the playing, not the winning, was the point.  You wouldn’t ask someone to eat your ice cream for you.  They persisted anyway.

But now, they weren’t down because they had lost.

They were down because they won. It turns out that they beat the game. 

And with that victory, a kind of defeat: my doctorate of philosophy calls for a diagnosis of Existential Crisis, one that usually doesn’t set in for another few years, the nagging, gnawing, corrosive question that sets in at adolescence and, in some cases, never ceases: Is That All There Is?

It turns out that once you get to the last level, beat the last villain (in video game parlance, “Boss,” which seems weirdly Marxist to me), and rescue Zelda, the credits roll (Dear Fellow Old People: video games have credits), and play simply starts over at the beginning again. 

I asked them: what did you think would happen?  The point of the game was, as always, to kill monsters, beat bosses, acquire money (“Rupees,” which seems weirdly Asian Subcontinent), and move one level closer to finding Zelda.  It couldn’t go on forever, could it?  Did they think victory would reveal a secret code for a secret club or secret game? That a crisp $20 bill would pop out of the Wii? No, but—and here I paraphrase—they didn’t think that winning the game would feel so much like losing it.  Not just emotionally—really, all that happens after you win is that you go back to where you started, same as when you lose.

For all the scholars who suggest that video games are texts ripe for analysis, or that they even surpass more conventional narratives like stories thanks to their interactivity and player control, the end of the video game seems very different to me from the ending of a story.  As Walter Benjamin says in “The Storyteller,” readers intuitively understand all of life through the end of the story, which represents a kind of death, or through the actual death of a character:

The nature of the character in a novel cannot be presented any better than is done in this statement, which says that the “meaning” of his life is revealed only in his death. But the reader of a novel actually does look for human beings from whom he derives the “meaning of life.” Therefore he must, no matter what, know in advance that he will share their experience of death: if need be their figurative death—the end of the novel—but preferably their actual one. How do the characters make him understand that death is already waiting for them—a very definite death and at a very definite place? That is the question which feeds the reader’s consuming interest in the events of the novel.

In other words, as human beings we can never understand the full significance of our own lives, because we must live them, from our perspective, and can’t reflect on our own ending, because we’re, ya know, dead.  But we can contemplate the full life, objectively, of a fictional character, because the beginning and end of the story delineate the full beginning and end of their existence.  And so through fiction—the figurative deaths that are stories and the more real but still fictional deaths of characters—we may understand something big—Death!—that, by its very nature, eludes our grasp, and therefore we may take comfort. As Benjamin concludes, “What draws the reader to the novel is the hope of warming his shivering life with a death he reads about.” It’s uplifting.  Really.  So we think that we’re sad when our favorite characters die or our favorite stories end, but we also, on another level, feel good, or, if you’re Aristotle, experience catharsis, a purging of the bad emotions, once you’re through.

Or, as Frank Kermode understood it, narrative endings are not only dress rehearsals for death, but they are inextricably linked to our apocalyptic sensibilities: “Fictions,” Kermode says, “whose ends are consonant with origins satisfy our needs.”  The conventions of story itself dictate a beginning and an ending; for every “Once upon a time,” a “Happily ever after.” He goes on to suggest that “one has to think of an ordered series of events which ends, not in a great New Year, but in a final Sabbath.”  Or a Black Sabbath, if you’re not feeling particularly rapturous.  Kermode relates the endings of all stories to the endings of all things: narrative endings as death, but also death as a narrative ending, “the End is a fact of life and a fact of the imagination.”

But video games seem not to provide Benjamin’s comfort, Aristotle’s catharsis, or Kermode’s closure at all.   There is no Once Upon a Time or Happily Ever After, only the grim, relentless Middle—just like our own real lives.  As I wrote in the other blog, main character Link looks and seems a lot like Peter Pan. But it’s not just the pointy ears and pointy weapons, the green clothes, or the shock of hair.  Like all video game characters, and like Peter Pan, Link is, for all intents and purposes, immortal and eternally youthful.  You could make the same case, I guess, for all fictional characters—that they revert to being alive and young when you start the book or movie again.  But that’s symbolic.  Thanks to endless “lives”—the word gamers use—and concomitant reincarnation (a word no one uses) with each reset or replay, Link lives, and dies, again and again and again.  As a father, I find no sentence weighs heavier on my heart than when one of the boys tells me, when their game time is over, that “I’ll just play until I die.”  He’d like that, I suppose.  The shift to first person—“I” die, not “Link dies” or even “my game ends”—makes clear that the games are about defying death, but they also focus relentlessly, discordantly, on death itself.

You thought you had it rough?

But if Link cannot ever die, if there is no final level—since the thing resets ad infinitum—no sense of an ending, then it feels like there is also no point.  The Onion, as always, gets it hilariously right: “Video-Game Character Wondering Why Heartless God Always Chooses ‘Continue’”:  “ORANGEBURG, SC–Solid Snake, tactical-espionage expert and star of PlayStation’s ‘Metal Gear Solid,’ questioned the nature of the universe Monday when, moments after his 11th death in two hours, a cruel God forced him to ‘Continue’ his earthly toil and suffering.”  In the end, “God,” of course, is revealed to be “Orangeburg 11-year-old Brandon MacElwee,” who “offered no comment on His greater plan for Snake, saying He was ‘too busy trying to get to the part with the knife-throwing Russian girl.’” 

But players realize that they are not gods, or God, and that the never-ending levels and never-ending deaths in video games provide a different, cautionary lesson than those in stories: the ironic moral that there is more to life than acquiring points and money, more to existence than merely getting to the next level.  And I said this to the boys, concluding that “this is why I don’t let you play the hard parts for each other.  All you’re doing is speeding up the end, and it’s the playing  itself that’s supposed to be the fun part.” 

With that, my ten-year-old looked at me, eyes bright and wide, and said, “I understand now.”

Time: It looked like I was gonna finish in 50 minutes, but then I decided I wanted to find the Benjamin and Kermode quotes that you probably didn’t read anyway, which took me overtime to 75 minutes.  I’ll finish faster the next time I play.
