Horror and Necropolitics: An Overview of Two Arbitrarily Chosen Films

Before I get started on this, I need to explain a bit about how this piece of criticism came about, to establish my rhetorical situation.

About five years ago at Wal-mart, on a total whim, I purchased a multi-DVD set of 50 “classic” horror movies, wherein classic means “a few famous things in the public domain and a bunch of stuff that costs nothing or next to nothing to license.”  There’s some good stuff in there – like the original Night of the Living Dead, Nosferatu, The Phantom of the Opera, and Carnival of Souls – but for the most part it’s filled with B-movie shlock like The Killer Shrews or Creature from the Haunted Sea.  At the time my plan was to watch every single one of these movies in alphabetical order and write reviews for this blog, but the plan never came to fruition because it was conceived through boredom rather than any actual drive.

However, I decided one good possibility for my Patreon would be to return to these films and, using a random number picker, select two arbitrary movies and attempt to write a comparative analysis of them.  That’s what happened this month, serving up the films The Vampire Bat (1933) and Bloodlust (1961).  The films themselves are totally different in content; the former is a post-Dracula vampire picture while the latter is an awkward ripoff of “The Most Dangerous Game” and, I’d argue, a primitive slasher film ancestor. Unexpectedly, however, they both have in common a central concern: who gets to decide who dies?

Critical theorist Achille Mbembe defines necropolitics as a set of political operations, the “ultimate expression of sovereignty” that resides “in the power and the capacity to dictate who may live and who must die.”  So, for example, the power of the state to order the execution of criminals, or, from a critical animal studies perspective, the assumed right of humans to enact wholesale slaughter via factory farms, and things of this nature.  Of particular interest for Mbembe is the explicitly authoritarian valence of necropolitics, the version of it that arises in societies ordered around “the generalized instrumentalization of human existence and the material destruction of human bodies and populations.”  (Think here, of course, of Nazi Germany, but also the many colonial enterprises worldwide and their projects of enslavement and exploitation of certain populations.)

This is heady stuff, but for now we can consider the barest bones of Mbembe’s conception in order to think about what these two not necessarily very good horror films can tell us about the orientation of popular, white American necropolitics over the span of some thirty years.  We’ll begin at the beginning, or at least in 1933, and The Vampire Bat.

Elevator pitch: a small German town is overcome with panic after a rash of mysterious murders in which victims were left exsanguinated with small puncture wounds on their necks.  The city fathers insist that a vampiric curse has returned to plague the town, pointing to local history and folklore to prove it, while police inspector Karl Brettschneider believes there’s a more rational explanation.

The town’s suspicion turns toward a man with intellectual disabilities named Herman, who has no home but wanders the streets and survives on alms.  When he overhears a local connecting the rising bat population in town with the vampire episodes, Herman insists that the bats are his “friends” and “soft like cats”.  As the local then insists, re: Herman, he “prowls the streets all night, just like an animal” and “never works and never bathes, and yet appears well fed always.”  While Herman himself is not played in a particularly savory light (unlike more recent, Oscar-baity depictions of disability in film), he’s also clearly not harmful, and the objection that he looks too healthy for a homeless person is patently absurd: he’s cared for by the community, and indeed, the first (witnessed) death in the film is of an old woman who fed him and employed him in minor jobs.  Yet Herman’s proximity to this charitable dead woman only bolsters the town’s suspicions.

The Vampire Bat is doing something very interesting with vampire mythology here.  Lugosi’s Dracula was only two years old at this point, fresh in the public memory, and the idea of suave, composed, and aristocratic vampires was really starting to gel in the popular consciousness.  Of course, these ideas derived from Bram Stoker’s novel, which uses the figure of the Count to embody industrial Britain’s fears regarding a decaying but not entirely dead Continental aristocracy.  However, if you dig back far enough into vampire folklore, you find that as the superstition emerges in the Balkans and Prussia it is particular to the peasant class of those regions.  That is to say, our earliest recorded “vampires” in history were not aristocrats but peasants who supposedly had died but came back to life and looked unusually healthy.  Signs of a possible vampire, indeed, were also generally signs of good health: shining eyes and a ruddy complexion (aside, of course, from appearing after a reported death and drinking blood).

To reduce a complex historical phenomenon to a simple read for the purposes of argument, it’s easy to see how a myth that pathologizes healthy looking peasants benefits, more or less, the aristocratic power structure dependent upon those peasants by providing a mechanism for the laboring class to be suspicious and resentful of itself rather than its masters.  What’s fascinating about The Vampire Bat, then, is that post-Dracula the filmmakers are revivifying an older version of vampire folklore, as the suspicions of Herman’s unusual good health indicate.

The finale of the film presents a final, bizarre turn of the screw, as a lynch mob descends on Herman, who jumps to his death from a precipice.  It is assumed the vampire problem is solved, and yet – just after Herman’s death a final murder has occurred.  The revelation of the final act is that Brettschneider’s acquaintance and the local doctor, Otto von Niemann, presented thus far as more or less an ally, is actually the murderer.  Sort of.  He’s also, inexplicably, a hypnotist, and he has been mind-controlling his servant to kidnap local people in the night, drag them to his laboratory, and drain them of their blood – using the similarity to a vampire’s MO to divert suspicion from human agents.

But why does Von Niemann need all this blood?  Because, as he explains, in his experiments he has “created life” and the organism he has somehow conjured is incomplete and not self-sustaining: it needs blood to survive.  The specifics of this are glossed over and the whole turn of events is incredibly weird since, when it is revealed, the “life” Von Niemann has created appears to be a large immobile rock in an aquarium.

How this thing constitutes “life” in any sense of the term is certainly questionable.  And yet, though I might be giving the film too much credit, I think that’s part of the point: Von Niemann justifies his actions by saying he must protect and sustain the new life he has created, and yet how is this thing, this horrible immobile clod, considered exceptional or sacrosanct “life” when compared with the human life of Herman, his caretakers, or the doctor’s other victims?  The vampire, in the end, is metaphorical: Von Niemann’s Promethean technocratic delusions are parasitical upon the community he inhabits.  His role as a doctor doubles his role as a necropolitician: he is there to heal the sick, but in fact grooms some of his patients for inevitable, instrumentalized slaughter for the sake of his weird barnacle child.

Bloodlust comes a number of years later, and yet there are some striking similarities to consider.  As already mentioned, it’s a ripoff of the classic short story “The Most Dangerous Game” – a group of four “teenagers” (they all look considerably older) are on a boating trip and, because the captain of their rented boat gets blackout drunk, they decide to canoe to a nearby island, which they think is deserted.  However, it turns out the island is host to Dr. Balleau, a wealthy eccentric, his wife, another drunk man they keep around for some reason, and a crew of servants/bodyguards in Venetian gondolier cosplay.

It turns out Dr. Balleau, whose home is decorated with stuffed and mounted game animals, has decided he needs to hunt something more challenging (humans, of course) and the two dudes of our group of hero teens are his next quarry.  (Since his wife and the drunk dude are having an affair, he has them killed, and kindly informs our leading ladies he will merely keep them as sex slaves.)  The teens (fine, we’ll call them that) are led into Balleau’s secret trophy room, where he has taxidermied his wife and her lover, as well as two other men (one of whom, I will note as an aside, is the only person of color to appear in either of these films).

Both of the hunts prior to his wife and her lover, Balleau explains, were inmates at a nearby island prison who were smuggled out by a certain boat captain – indeed, the teens’ captain, who has tried to sever ties but, it is suggested, grapples with what he’s done by drinking heavily.  Anyway, Balleau’s men somehow extract him from his boat and now he’s here, but he’s a huge asshole to the kids and won’t help them, instead striking out on his own (and getting killed).

As I mentioned before, one of the most “acceptable” sites of necropolitical action is the execution of prisoners by the state, and this is a function that Balleau has taken upon himself.  As he tells our heroes, one of his previous hunts was a repeat rapist, and so deserved to die – the irony here, considering his plans for the two central women in the film, is apparently lost on him.  He has installed himself as his island’s god and hence inhabits what theorists of necropolitics and biopower call sovereignty’s “state of exception” – he can, in short, dish out punishment, but under no circumstances is he subject to it.  This gives the lie to his own description of the game, wherein he insists “my life will also be subject to how the hunt goes…”  Indeed, Balleau’s occupation of the island, giving him ultimate control over a geographical space, enacts precisely the sort of colonial necropolitics Mbembe theorizes.

But there’s a backstory here.  Balleau explains that he was once a “scholar” and indeed, he worked at a museum, preparing and designing exhibits.  And then, he says, came “the war,” where he was enlisted as a sniper.  He was disgusted to learn that he took “pleasure” in killing, and as he explains in one of the funniest, overdone lines of the film, the pleasure “became a passion, which became a lust – a lust for blood!”  Bravo.

The second half of the movie involves a lot of running around and is, to be entirely frank, boring.  At one point Balleau leaves his closest henchman for dead, and then loses track of the teens.  However, they ambush him later that night, having hidden in his secret trophy room in the darkened exhibit he had reserved for their corpses.  This is the point at which Balleau truly does become “subject to how the hunt goes,” revealing the uncanny side of the state of exception: being exempt from all punishment means you are also exempt from all protection.  The henchman left for dead earlier now returns unexpectedly and murders Balleau, pressing him onto the mounting spikes of the teens’ would-be exhibit before dropping dead of his own wounds.

A quick read of this whole thing: there’s a suggestion here of Balleau carrying forward or embodying the horrifying trauma of World War II and its related atrocities, bringing these “teens” into confrontation with a mode of existence they, in their historical innocence, have been spared.  Achille Mbembe’s theorization of necropolitics and necropower, taking as it does the modern context of globalized warfare, ends with his idea of “death-worlds, new and unique forms of social existence in which vast populations are subjected to conditions of life conferring upon them the status of living dead” (think here of slavery in the US, or apartheid in its South African and Israeli forms, where whole populations are rendered pseudohuman in the eyes of state authority).  Despite some anachronism, I would argue that Balleau operates as a vector for precisely such a world: he holds within him the bloody storm of war and unleashes it upon these innocents.

And yet the teens – as we call them – are not, in the end, overly scarred by their encounter.  They are called to enact relatively little violence, and indeed spend most of their time avoiding it.  The final act of victory is not theirs but that of Balleau’s forsaken servant.  And here we have another unexpected alignment with our earlier film The Vampire Bat.

When Von Niemann is killed at the end of The Vampire Bat it is not the police inspector who does the deed, nor one of the handful of supporting characters: it is his own servant, the man he has been mind-controlling into bringing him victims.  Both here and in Bloodlust there is an easy gloss of how evil is doomed to eat itself, that it is basically unsustainable and its own instruments will destroy it.  That’s all well and good, but it’s perhaps not totally accurate.

Both films present as their villain someone who unwarrantedly assumes a necropolitical mantle – Von Niemann’s mad scientism and Balleau’s postwar bloodlust – but also suggest that these actions are self-effacing, rather than outgrowths of a basic power structure endemic to society.  Neither the teens of Bloodlust nor the police inspector of The Vampire Bat must dirty their hands with necropolitics – by deciding and acting to kill their antagonists – because the basic fantasy that underpins both narratives is that we simply don’t have to, we are not a part of this system, and this system will deconstruct itself.

Indeed, both films argue that the servants of these systems will, in their final and noble moments, destroy them, losing their chains and their lives in one fell swoop and erasing the monstrous potential of their futurity.  But what both narratives attempt to disavow – and yet, in some ways, must acknowledge if only by curious exclusion – is how the four white teens on an island vacation and the police inspector are themselves always already beneficiaries of a society in which scientific nihilism and postwar trauma have space to grow and fester, feeding off our lives even as our lives are fed by them.

This post is funded by readers like you through Patreon.  If you like what you read, want to see me write more, and want to get a chance to choose what I write about, please consider pledging.

Between the Haunted and the Weird: The Horrific Ontology of Videogames

Oxenfree is a 2016 game by Night School Studio, a story-focused point-and-click adventure and pseudo-throwback similar to something like Kentucky Route Zero (that is to say: mechanically and tonally it mimics adventure games of the days of yore but for the most part jettisons obtuse puzzling). It concerns a group of teens who go to an island in the Pacific Northwest for a night of unsupervised drinking and fun, and we all know how that sort of thing turns out for fictional teens. At the time of its release it was compared to the similar but much bigger project, Until Dawn, which also deals with the “teens in a remote area encounter bad stuff” subgenre of horror, but this is misleading. Whereas Until Dawn‘s primary reference points are the slasher films from which this typical premise is derived, Oxenfree‘s thematic antecedent is most clearly the 2001 film Donnie Darko, a deeply existential teen time travel thriller.

Generically, then, Oxenfree is poised between science fiction and horror in a way that I think meaningfully impacts how it conveys its narrative through the medium of the videogame itself. From this point forward, I’m going to discuss specific elements and details of Oxenfree‘s story, so if you haven’t played it and care about that sort of thing being spoiled, consider yourself warned.

Oxenfree is a ghost story, of a sort.  Edwards Island, the location of the game, is a lonely tourist trap and former military base where groundbreaking research on radio and communication was carried out during and after the Second World War.  Our protagonist and player character, Alex, travels to the island with her friend Ren, her new stepbrother Jonas, Ren’s crush Nona, and Clarissa, the bitter ex-girlfriend of Alex’s deceased older brother Michael.  This entire situation is quite understandably tense and awkward to begin with, but of course, it gets worse.

The island is notorious among local youth for the anomalies that can be heard over the radio from certain locations — things that range from numbers stations to what seems to be sourceless electronic voice phenomena.  While exploring a cave by the beach, Alex accidentally contacts something — manifesting primarily as hovering, flashing triangles and angry static — that separates the group and unleashes a lot of weird bullshit on the island in the form of uncanny recurrences and timeloops (that, from the player’s perspective, are indicated by the screen’s distortion a la a badly tracking analog tape).

So on the one hand yes, Oxenfree is a ghost story — the thing Alex has contacted turns out to be the collective consciousness of a submarine crew that was sunk by friendly fire off the island’s coast after the war.  But it is also a softly science fiction-inflected time travel story — the crew are called “ghosts” but in-game exposition suggests that they did not so much “die” as get shunted out of our “dimension” (ie, the normal space-time continuum) by the accidental detonation of the experimental nuclear reactor on their submarine.  Unmoored from the most basic laws of physics and temporality, the crew of the submarine have lost all notion of individual identity and claim to have watched the entire history of the world play out to its “demise” multiple times in multiple ways, and now long for nothing more than to find their way back into linear time by possessing Alex and her friends and living the existence they feel they have been denied.

For decades the crew has been contained in their dimensional warp, but Alex lets them out with the radio she brought to hear the island’s anomalies — “you tuned into our signal” they tell her.  And I want to think for a moment about the significance of the use of the radio here, and in particular what the game accomplishes by way of placing midcentury radio technology front and center in its supernatural shenanigans.

Media theorist and philosopher Eugene Thacker has outlined a taxonomy of what he calls “dead media,” “haunted media,” and “weird media.”  Dead media, he explains, are media where “the object is no longer in use, but the form of the object remains active” (“Dark Media — An Abbreviated Typology” 129).  The example he gives here is the Victorian-era magic lantern, a device which projected still images onto the surfaces of walls and was a common attraction in certain theaters.  We no longer use magic lanterns, but the basic operative principle still exists in the form of modern projectors.

“Haunted media,” meanwhile, arise when a technology “is still in use, but in a non-normative way.”  Thacker’s primary example here is “the complex interplay between the photographic camera and spirit photography in the late nineteenth century” (129).  Specifically, haunted media are noted in their “disjunction … between a contemporary artifact and its connection to adjacent fields such as religion and spirituality” (129), becoming almost darkly divine in their properties.

What haunted media do allow for, in imaginary and narrative terms, is the communication between two distinct ontological realms, this world and that one, the supernatural and the natural.  However, the other potential Thacker outlines is what he calls “weird media,” in which the “human sensorium can be augmented, transformed, or in some instances, ‘see’ more than a human subject is prepared to see” by way of some media object (132).  One example here is H.P. Lovecraft’s story “From Beyond,” where a scientist perfects a device that allows human beings to see the various horrible creatures that exist alongside us, but outside our realm of sense perception, and which also (of course) drives people mad.

Unlike haunted media, which open up a portal between that world and this one, in weird media “mediation only results in an absolute impasse, in the strange non-knowledge of the impossibility of mediation, in the way that all communication collapses” (133).  In other words, weird media show us something, but something fundamentally flawed in its communicative result: we see something that just doesn’t make sense, something that is there and yet refuses to cohere into anything like purpose or meaning, and the result (as witnessed by Lovecraft’s characteristically fated protagonist) is the concomitant dissolution of all meaning.

The point I would make, first of all, is that these types of media are not necessarily distinct.  For instance, “spirit photography” existed more or less simultaneously with the beginnings of photography, with trick images appearing basically right out of the gate, rather than waiting for the medium to “die.”  In other words, a medium does not have to be dead, or close to dead, to be haunted; often they are born that way.  However, a medium’s proximity to death does seem to make it useful for stories of dark media — think here of the videotape in Ringu/The Ring, which appeared relatively close to the end of the lifespan of the VHS.

At the same time, the distinction between a haunted and a weird medium is not always terribly clear.  Thacker divides them based on a selection of narratives and, basically, how those narratives play out: is the end result communication or madness?  These distinctions, however, cannot always be made — and Oxenfree is exemplary in this regard.

Radio is not an entirely dead technology, of course, but it is certainly outmoded in the way the game presents it — weighted with the context of its development during the war, an idiosyncratic feature of the island and its particular history, etc.  At first we might say that the radio in Oxenfree is haunted, as it does what Thacker says haunted media do: it opens a portal, it brings this world and that world together, and so on.  And yet communicating with the other side is not easy, and for much of the game it’s not clear what Alex and her friends are dealing with or what it wants.

Furthermore, at various points in the game Alex becomes stuck in time loops, and must synchronize the music tracks playing on a series of ghostly Magnetophons in order to return to her proper temporality.  Just as the dead (?) submarine crew live on as garbled voices on the radio, so too are the lives of Alex and her friends mysteriously tied to the functioning of old military-issue tape players.  That is to say, they are themselves mediated by the island’s weird technology, sometimes skipping back into the past (where Alex can make decisions regarding her deceased brother that, it seems, are different than the ones she might have made before) or forward into the future where they witness deaths and suicides that never actually manifest in the straightforward plot of the game.

So while these media are a conduit for the dead past, they are also conduits for the present and a kind of undead future, possible futures, and possible pasts.  Any glance at a forum or subreddit dedicated to the game will show you it is filled with theorycrafters attempting to parse out the game’s timelines into something stable and coherent, something that can be charted in a sensible order that all adds up to a “point.”

This project is troubled by a few things about the game.  First, there are multiple endings, none of which are presented as particularly good or bad (and hence “true” or “untrue,” judgments games have so historically tied together).  Alex’s relationships with her friends may strengthen or degrade, one of them may be sacrificed to the ghostly crew in order to placate them, her brother Michael may even be brought back to life through her interference with the timeline.  The game doesn’t pass judgment on you for any of these endings in the trite way we’ve come to expect of the medium: sacrificing Clarissa is not ideal, for instance, but given the absolute bugfuck nature of what’s going on it plays out as a kind of tragic necessity.  Similarly, bringing Michael back to life doesn’t result in some condemning “don’t play with the forces of causality!” message, it just… kind of happens.  At worst it rings hollow narratively just because we’re so used to seeing the condemnation of this sort of thing in other stories.  And similarly, if everyone survives and remains friends, well…

The game ends with Alex narrating “what happened next” for everyone like any good teen movie.  As I said, outside of being erased from existence, none of the end results for anyone are particularly “bad,” some are just sadder than others.  But in the final few seconds of her narration, the screen distorts again, and Alex resumes talking about how, though she’s not looking forward to going to Edwards Island, it may be a fun night.

No matter what you do, the game begins again.  Except, of course, if it doesn’t… completing the game unlocks a so-called “New Game+” option, where you are treated to a bonus opening scene of Ren, Jonas, and Alex hanging out waiting for their ride.  Alex uses her radio during this scene and receives a message from herself, warning her not to go to the island; if you choose to listen to her warning, the gang stays in for the night and the game ends, its entire plot summarily averted.

Now here’s the thing: at no point in the game you play can you make Alex deliver the message she receives in this bonus scene.  EDIT:  Zaratustra on twitter pointed out that if you complete the New Game+ as if it were a normal game, ignoring the warning, you actually do get the choice to deliver this message to a past Alex — that is, you can save an Alex you have not played from looping through everything.  You save someone, but not yourself.  You render everything you have just done meaningless (because it will never have happened) but also direly important (because it had to happen in order for it not to happen).

And this is how the game, to get to my point finally, collapses the haunted into the weird, because it’s not clear what is communicating here, what is being communicated, or why.  The game recedes indefinitely into itself in a way that is not left for us to explore.  The addition of time travel (or, perhaps, the movement between distinct timelines, much like the submarine crew blasted outside of all continuity) means that what sometimes (in Thacker’s terms) operates as haunted media (communication between two ontological orders) also sometimes devolves into weird media (the transference of madness-inducing nonsense, a kind of excess of information that makes coherence impossible).

In the end, there is a sense in Oxenfree that things are overmediated, too complexly bound up in each other, done and redone and undone, until all meaningful difference is lost in a sea of noise like the analog static the game deceptively renders on my digital monitor.  For at its most basic level, Oxenfree is a videogame that is making itself known to us as a videogame, as a site of weird media, or overmediation.

As I said, some media are haunted at their inception.  In Oxenfree this is especially true, encrusted as it is with the signifiers of analog media it has supposedly surpassed and rendered “dead” (and yet, what is my wi-fi connection but a sort of afterlife of the radio technologies developed by the island’s engineers?).  But more to the point, Oxenfree is suggesting that games as a medium are both haunted and weird, constantly warping between these two poles as they connect disparate orders of communication or devolve into madness-inducing nonsense.

I have written before about how haunting can serve as a vocabulary for how players experience gameplay.  Gameplay is always already underwritten by expectations mediated to the player by prior games, and by prior playthroughs of the same game.  In its turn to the weird, Oxenfree makes this point quite literal: at various points in the story, Alex is confronted by a ghostly version of herself in a mirror.  It speaks to her, giving her advice that seemingly makes no sense (for instance, telling her to advise Michael to break up with Clarissa, despite the fact that Michael is already dead).

This is the weird: communication that runs into the limit of intelligibility.  However, as the game progresses, it becomes clear Alex’s reflection is giving her advice about specific moments that take place later in the game.  In the climax of the game, Alex finds herself “on the other side” with the dead submarine crew, and in a series of vignettes is transported to shadow versions of various locations from the game where she provides advice to herself  — now on the other side of the mirror.  Communication between the natural and supernatural, between one timeline and another: what was weird becomes haunting.

But this is what is truly remarkable: you do not have to listen to the advice your reflection gives you, nor do you later have to give advice to your reflection that jibes with (or departs from) your own actions in the game.  It is up to the player to decide how trustworthy their reflection is, and in the end, to decide how they might have done things the same or differently.

In fact, what happens is this: the game searches your friends list (through Steam or whatever service) to find people you know who have already played Oxenfree.  When you see Alex’s reflection early in the game, this person’s username appears above it in bright green text in a visual evocative of an MMO.  The dialog choices made at the end of the game by your friend (in my case, an Alex who was hilariously named “Chopper Dave”) are presented to you, and at the end of the game, your dialog choices are sent along to the next person in your social circle to play the game (so if you ever see an Alex named “Richard Plantagenet” — hi).

What Oxenfree quite literally enacts here is the haunting of gameplay: your experience of it always already bears the uncanny impression of a prior playthrough that was not yours, an attempt to communicate or give advice about how you should play the game.  But this communique is fraught by all manner of weird problems: first, you have no idea what is happening, and second, you might not listen.  Thus the haunting of gameplay again collapses into weird gameplay: not communication between or across playthroughs but the potential simultaneous existence of mutually exclusive in-game “realities” connected by their very refusal to resemble one another.

Oxenfree, then, is an apt demonstration of the horrific ontology of videogames.  Not only does Alex’s endless looping through the various endings suggest the idea of replay; the game itself metatextually and mechanically links these ideas, forcing us into an uncomfortable conceptual space that narratively challenges the ways by which we defend everything from the importance of individual identity to the very possibility of meaning-making.

Are we — the mass of players — meant to stand in for the lost crew who hope to find something like “life” in possessing these kids?  And what does it mean that the game in practice so intransigently deflects what the ghosts say they want: stability, continuity, identity, linear growth?  “Oxenfree” is, after all, a cry to end a game, to signal to the players that the game has finished.  But in Oxenfree no such ending is forthcoming, and we are left to confront how one can make meaning and find happiness in a weird, haunted, overmediated world.

This post is funded by readers like you through Patreon.  If you like what you read, want to see me write more, and want to get a chance to choose what I write about, please consider pledging.

On Borges, “Shakespeare’s Memory”

The following was originally written as part of a brainstorming session for an article I coauthored with Matthew Harrison for an edited collection on Shakespearean “users” — of academic and nonacademic varieties.  Our final product eventually drifted from the text below in its analysis, but the claims and insights with which we began were nevertheless informative.  I’ve reproduced this opening salvo because I like it and want to keep it around.

Jorge Luis Borges’s 1983 short story “Shakespeare’s Memory” – Borges’s final short story, as it happens – is the narrative of a German literature professor named Hermann Sörgel who, during a conference in London, comes into possession of the memory of William Shakespeare.  It is passed along to him by another academic, who received it from a dying man while he worked as a physician in a field hospital during World War I.

Borges’s narrator asks for clarification, and the man, a South African named Daniel Thorpe, responds: “What I possess … are still two memories—my own personal memory and the memory of that Shakespeare that I partially am. Or rather, two memories possess me.  There is a place where they merge, somehow” (Collected Fictions, Kindle edition).  Of course, the boon is accepted.

Despite his unusual situation, Thorpe is not a particularly distinguished scholar – in fact, he admits his gift has produced work that garnered only mediocre reception – and Sörgel finds “that his opinions were as academic and conventional as my own.”  Yet sure enough, as time passes, Sörgel discovers himself muttering bits of unknown Chaucer, pronouncing familiar words in an unfamiliar cadence, and dreaming of the faces of men he half-remembers as Chapman, Jonson, and a nameless neighbor, “a person who does not figure in the biographies but whom Shakespeare often saw.”

Sörgel’s situation is in some sense a literary critic’s dream.  He has achieved ultimate access to the “real” Shakespeare, a kind of “first-person” Shakespeare that creeps on slowly but is nevertheless felt as immediate, effacing the normal reconstructive and mediating practices of reading, archival research, and scholarly speculation (cf. Bolter and Grusin). Eventually, he tells the reader, “the dead man’s memory had come to animate me fully,” and he describes his pleasure at the various small details of Shakespeare’s work he came to understand.

Of course, things soon enough take a turn for the unpleasant.  Sörgel contemplates writing a biography of Shakespeare with his knowledge, but realizes that having Shakespeare’s memory does not make him any better of an (auto)biographer, and he is ill-suited for the task.  He also, it seems, becomes desensitized to the banality afforded by the memory, and eventually decides that a biography would be pointless: “Chance, or fate, dealt Shakespeare those trivial terrible things that all men know; it was his gift to be able to transmute them into fables, into characters that were much more alive than the gray man who dreamed them, into verses which will never be abandoned, into verbal music.”

It might be our first instinct to read this admission as an expression of Borges’s own formalism or aestheticism, to allow our memories of Borges’s views on art to explain this peculiar turn of the narrative to us: biographical context falls short of the pure power of poesy’s “verbal music.”  Shakespeare, a “gray man,” knew the universals of human experience and was able to write them into fables more interesting than life itself.  And surely such a reading is warranted, but there may be something else at work if we consider the other point at which Sörgel’s gift proves a curse.

In time, Sörgel begins to forget who and where and when he is: “I noted with some nervousness that I was gradually forgetting the language of my parents. Since personal identity is based on memory, I feared for my sanity.”  Indeed, his memory is not separate from Shakespeare’s, but the two intermingle, leading to increasing moments of confusion and panic: “One morning I became lost in a welter of great shapes forged in iron, wood, and glass. Shrieks and deafening noises assailed and confused me. It took me some time (it seemed an infinity) to recognize the engines and cars of the Bremen railway station.”

Shakespeare becomes corrosive, eating away at Sörgel’s sense of self, and in the process not only is Sörgel almost lost, but so is his appreciation of Shakespeare.  The curse is only lifted when Sörgel, dialing random numbers on the telephone, passes the memory on to a stranger who accepts the boon, as he had done before. But Sörgel discovers that he is not wholly cured.  He leaves the study of Shakespeare first for Blake and then for Bach, but in a short postscript dated 1924, he adds that “at dawn I sometimes know that the person dreaming is that other man. Every so often in the evening I am unsettled by small, fleeting memories that are perhaps authentic.”

Like the haunted videotape in the horror film The Ring, Shakespeare’s memory is viral: infective, parasitical, and only relieving the sufferer when they pass it along to another host.  Of course, the terror of Borges’s story is more subdued than that of a horror film, more philosophically and existentially oriented, but I think it might do us well to consider what the story illuminates apart from the obvious reading of Borges’s own avowed aesthetic theories.

Bruno Latour, in his critique of what he calls “the Modern Constitution,” remarks that the “moderns have a peculiar propensity for understanding time that passes as if it really were abolishing the past behind it” – he calls this “calendar time,” which “situate[s] events with respect to a regulated series of dates” (We Have Never Been Modern 68).  Latour’s moderns think “they have definitively broken with their past,” but this experience of temporality ignores the way “the past remains, and even returns” (Latour 69).   This is what Linda Charnes has called “the non-linear ‘events’ of affective time,” which are “events which seek, and sometimes find, their representational truth only in the non-narrativity of bodies” (“We Were Never Early Modern,” in Hamlet’s Heirs, Kindle edition).  Charnes argues that the corpus of Shakespeare – textual primarily, but also the imagined body of the Bard himself – provides one arena that Western culture makes into such a site of “significant intensity,” letting us “attempt to locate ourselves as historical subjects” inhabiting a world marked by the passage of “meaningful time.”

For Latour, the return of the past is viewed by the moderns as an incomprehensible terror of “archaism,” a backsliding that, though it reverses time’s arrow, works to maintain the idea that temporality is purely linear (69).  This terror is precisely what the postmodernist Borges’s story figures: by effacing the differences between past and present, Sörgel’s assumption of Shakespeare’s memory threatens both his and Shakespeare’s historically embedded subjectivities, abolishing totally the passage of “meaningful time” in favor of a “significant intensity” of pure existential panic.

Such a line of thought abuts Jameson’s critique of postmodernism’s tendency toward pastiche, or the historicist point of view that we can only really make sense of the past when we remember it is the past and hold it at arm’s length.  But again, the problem for Borges’s narrator is not so much that he fails to historicize, but that the historicist impulse fails him: tapping into the unmediated past destroys the structures of meaning and feeling that allow the others around him, without such access, to produce meaningful experiences out of the past and out of literature.

What Borges’s story helps reveal, then, is that all literary scholarship is in some way founded upon what my friend Matthew Harrison has called affective anachronism, an impulse to “feel backward” (to adapt Heather Love’s term from another context).  Borges does not simply say that an immediate knowledge or experience of historical context robs literature of its power, but rather that it produces a distinctly different – and, as Thorpe’s and Sörgel’s situations as perpetually mediocre scholars show, not necessarily academically fecund – pleasure in the text.  It is in fact the process of feeling backward itself that constitutes viable scholarship.

The academy, it turns out, is less interested in the immediate knowledge that Shakespeare more often thought of the “moon” as “Diana” than one might at first think; in other words, the uses Shakespeare affords scholars are in fact quite distinct from what actually accessing the “real” Shakespeare might mean.  Immediately “knowing” the past robs it of its generative power as a site of both narrative and affective production.

Borges’s story suggests that finding oneself in Shakespeare (or Shakespeare in oneself) is profoundly numbing, but does this mean that an academic approach to Shakespeare is a sort of narcissism, one where we’d rather not find Shakespeare, but only our own ideas?  Or turning (forgive me) to Lacan, if our work as scholars is inherently narcissistic, is it defensible to say that academic Shakespeare is a kind of méconnaissance that simultaneously constitutes an image of him and yet fails to capture what we feel must be the “real thing,” a trompe-l’œil where something escapes, and that something is what makes Shax meaningful?  There’s an ambivalence here, in which we want Shakespeare to bolster our ego (provide us with our examples, illustrations, proof) while also resisting us (because such resistance affords the sense that our work emerges from a set of differential matrices that gives it singularity and significance).

Kitty Horrorshow’s Pontefract and Shakespeare as author-medium

Pontefract is a 2012 Twine Gothic horror game by Kitty Horrorshow.  In this blog post I will talk about the game generally, but my specific aim is to tentatively theorize how Horrorshow’s game makes use of Shakespearean allusion and what affordances it buys her as a creator, with the overall goal of opening up questions of what this might mean for us (me and my cohort) as Shakespearean and early modern scholars.

In Pontefract, the player takes on the role of an unnamed character, perhaps a knight, in a Gothic fantasyscape.  You work your way through several rooms of a semi-abandoned castle, populated only by apparently undead humans.  The game primarily works like this: you enter a room.  The room is described, sometimes with occasional observable details (for instance, when entering the kitchen, instead of directly confronting the cook you can check out what she’s boiling in her cauldrons).  If there is an NPC in this room, they will ignore you, instead carrying out routines (praying, cooking, being eaten by a floating horse’s head) that bespeak either their undead qualities (i.e., they are zombies, not fully human, and only carry out certain deeply wired routines) or their artificiality (they are, in the most literal sense, videogame NPCs, written only to carry out certain limited, repetitive behaviors).

You can choose to interact with these characters, at which point you are presented with two options.  The first is always “friendly” — you either attempt to get the NPC’s attention, or help them if they seem to be in trouble.  The second is always hostile, and involves drawing your sword to kill the NPC.  For three NPCs you meet — a priest, a stablehand, and a cook — choosing the friendly option will result in your character’s death.
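This two-option structure maps directly onto Twine’s passage-and-link format.  A minimal sketch in Twee notation (the passage names and text here are my own hypothetical reconstruction, not Horrorshow’s actual source) might look like:

```
:: Kitchen
The cook stirs her cauldrons, paying you no mind.

[[Try to get her attention->Friendly End]]
[[Draw your sword->Slay the Cook]]

:: Friendly End
The cook turns, and you do not survive what you see.

:: Slay the Cook
She falls without a sound, leaving behind a rusted key.

[[Return to the great hall->Great Hall]]
```

Each `::` header names a passage, and each `[[text->Target]]` link renders as a clickable choice — the same hypertextual apparatus I discuss below in connection with Genette.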

Progression in the game involves killing these NPCs.  When slain, they leave behind keys that unlock the door to the castle dungeon.  You know you want to do this — apart from the fact that a locked door in a videogame always implies the goal is to open it — due to an encounter with the fourth NPC in this section of the game, the so-called “Pale King,” who sits eyeless and presumably also undead in the castle’s throne room.

This is the only NPC with whom you have no options for interaction.  Instead, when you meet him, he speaks “into your thoughts [with] a hundred clamorous voices”:


Somehow, you understand that the ‘terror’ of which the pale king speaks is locked away within the castle dungeon.

You take knee before the king and vow to rid him of that which grieves him so, before standing and turning to descend the stairs back to the great hall.

The line spoken by the Pale King is from Shakespeare’s Richard II, very close to the end of the play, and is curious enough in and of itself.  Henry Bolingbroke has recently deposed and imprisoned the rightful king, Richard II, and named himself Henry IV; in Act V, scene 3, Henry uncovers a plot against him by some nobles loyal to Richard and has most of the conspirators put to death.  In the next scene (V.4), a nobleman named Exton enters with his servant.  The scene is brief, so I will reproduce it here in full for you to see just how odd it is:

EXTON. Didst thou not mark the king, what words he spake,
‘Have I no friend will rid me of this living fear?’
Was it not so?

SERVANT. These were his very words.

EXTON. ‘Have I no friend?’ quoth he: he spake it twice,
And urged it twice together, did he not?

SERVANT. He did.

EXTON. And speaking it, he wistly look’d on me,
And who should say, ‘I would thou wert the man
That would divorce this terror from my heart;’
Meaning the king at Pomfret. Come, let’s go:
I am the king’s friend, and will rid his foe.

So Horrorshow’s Pale King quotes Henry IV, but only as he himself is quoted by Exton.  The scene to which Exton refers, in which the king speaks these lines, is not one we ourselves are allowed to see: the previous scene, where Henry uncovers the plot against him, contains nothing close to the statements that Exton attributes to him.  In fact, going strictly by the text, Exton hasn’t even appeared prior to this point in the play.

This scene seems pointedly to highlight the degree to which the ambitious Exton is willfully misinterpreting the situation, if not in what Henry is referring to, then at least in the assumption that Henry is personally addressing the order to him: “And speaking it, he wistly look’d on me, / And who should say, ‘I would thou wert the man'[.]”  (Compare Horrorshow’s: “Somehow, you understand that the ‘terror’ of which the pale king speaks is locked away within the castle dungeon.”)

Indeed, the play ends with Exton presenting Henry with Richard’s corpse; Henry, horrified at what has been carried out in his name, disavows Exton and the act committed for his benefit (though, of course, he does benefit).

In Horrorshow’s game the command is given directly and unambiguously, placing us in the shoes of a character who is and is not Exton.  It should come as no surprise to a player familiar with Shakespeare that when you venture down into the dungeon what you find is a weakened, miserable figure “you” immediately recognize as the “rightful king.”

Again you are presented with a choice: to peacefully beg forgiveness from the rightful king, or to kill him.  As before, the peaceful option proves ineffectual, but this time not because it kills you.  Rather:

You attempt to kneel before the rightful king, ready to apologize for your wrongful deeds and vow yourself to his cause, but your body resists you. The castle shudders and the walls begin to wail, and your head is filled with the lurching, ragged language of the stones.


At this point you again have the same choice, and the only way to move forward is to kill the king.  The game ends immediately after: you die as the castle collapses around you, but almost immediately you find yourself once again in the woods outside the castle gates, preparing to enter.  The implication, perhaps, is that you are no different than the creatures that trace their endless, undying routines within the castle walls: as a player, you are finally robbed of the agency the game has dangled in front of you at every turn with its false choices, and you are at last subsumed into the machinery of the Gothic landscape.

Appropriately enough, Horrorshow’s hypertext game seems to adapt and extend Gérard Genette’s pre-Internet idea of hypertextuality as “any relationship uniting a text B (which I shall call the hypertext) to an earlier text A (I shall, of course, call it the hypotext), upon which it is grafted in a manner that is not that of commentary” (Palimpsests: Literature in the Second Degree 5).  Rather than a simple allusiveness, or even a dense and methodical rewriting (e.g., as between Homer’s Odyssey and Joyce’s Ulysses), Horrorshow’s references to Shakespeare are more like the hypertextual apparatus of Twine itself: links that send us outside the text, or into another text, or a different part of the same text, but which do not do so to make a claim about Shakespeare or Richard II.  Rather, both texts become hypertexts, existing in tandem or parallel, creating a space for thematic echoes and reader (re)orientation.

Exton makes a choice; we do not.  Exton must interpret what he will do; we must interpret what we have done, if we have done anything. It is Exton who allows the play its end, and despite his abjection, the consequences of his actions haunt the rest of Henry IV’s reign.  Our actions have, perhaps, no lasting effect in the larger context of the game’s endlessly looping plot, as we are simultaneously trapped within and enabled by the haunted house that is the game’s architecture.  Apart from Shakespeare, then, I would say Horrorshow’s game is commenting on the heroic power fantasy of videogames and the exhausted narratives of aggressive but ultimately impotent bloodshed they often foster.

As a matter of fact, Horrorshow’s original post about the game makes no mention of Shakespeare at all, and so it’s possible many who played through it did not note the allusions, having no foreknowledge of them.  The game is deeply allusive, but the allusions only “activate” for a player quite attuned to Shakespeare’s play — and nevertheless, the allusiveness is not present in any way that would seem to lessen the enjoyment of a player who didn’t know Shakespeare but who was very familiar with the Diablo game franchise or with text adventures, or who simply wanted to poke around a haunted castle.

Overall, the game draws deeply from Shakespeare while also meticulously managing the impact of its Shakespearean connections through a variety of tactics, including letting its allusiveness go unspoken, choosing its allusions obscurely, or interweaving its allusions with formal misdirection.  Indeed, the “living fear” Exton says Henry decries is interpreted as the deposed king imprisoned at Pomfret — Shakespeare’s name for Pontefract, the actual castle where the historical Richard II was held captive until his execution.  Thus the game’s title is itself an allusion that displaces Shakespeare as a central, authoritative voice of historical record, underscoring the gap in terminology between our understanding of history and his.

Furthermore, Richard II is not a play that looms large in the popular consciousness, or at least, not large enough for Shakespearean capital to immediately pay off in a gaming environment as it does, say, when the text at hand is Hamlet or Hamlet or Hamlet or Hamlet or Hamlet.  Indeed, the lines from Richard II in Pontefract are not the most memorable of Shakespeare’s lines; they’re not even the most famous lines from Richard II.  Nevertheless, Horrorshow puts her obscure citations to work.

After beheading the rightful king, the castle appears to collapse and you hear the severed head whisper to you the game’s second direct lift from Shakespeare: “Grief boundeth where it falls.”  This is not, as it happens, anything spoken by Richard, but rather a comment made by the Duchess of Gloucester near the very beginning of the play (I.2) when she is urging John of Gaunt to stand up for her husband (whose death she believes Richard sponsored), and implicitly foretelling the whiplash of political instability that will come to shadow the reign of Henry IV.

In Pontefract the player is primed for this line differently, as you descend to the dungeon and the game tells you,

The castle whispers to you.

Dost thou at ev’ry hail draw out thy sword?

From whither comes this eagerness to slay?

Thy lust for blood and anguish sees thee curs’t

These three lines of blank verse generically meld with the Shakespearean quotations, though they are not themselves Shakespeare (as far as I can tell, they are original).  Thus, any player not explicitly looking for Shakespearean allusions might be inclined to read the actual quotations from Shakespeare — if they seemed somehow stylistically distinct from the game’s narrative voice — as of a piece with this verse.  The final word in the quote above is a hyperlink, which takes us to a closing line:

ttO suffERRr EverR thISSs accuRRSSedd dDAyy

The styling of the text here — breaking with typographical convention to suggest the words are being spoken/thought in a hiss, or by an inhuman voice — recurs not only in her original post about the game (“P0ntteEFFraccctTTt”) but in the game’s code, where Horrorshow has named several passages after direct quotes from Shakespeare’s play in the same style:

Click through for a larger image. Highlighted areas show where passages in Twine have been named with Shakespearean quotes.  This is only a section of these instances.

It was not until the game was re-collated on a directory page that an author’s note — “inspired by” Richard II — made the Shakespeare connection clear, providing the reader with an introductory signpost for the allusions.  I don’t mean to imply that Horrorshow is somehow “coming clean” about her allusions, but rather that the broad and subtle nature of the game’s allusiveness indicates a way of approaching Shakespeare that makes productive use of his corpus while insisting it is not the only corpus that matters.

Horrorshow’s Shakespeare is not an unimpeachable paragon of literature and humanity; he is the writer of Richard II as well as Hamlet, and also the author of dozens of less than memorable lines, dozens of less than memorable images.  Neither is Horrorshow’s Shakespeare an academic Shakespeare, a layered site where the machinations of cultural poetics are put on display if we perform an anatomy with the right critical tool.

However, there is indeed something here of the Foucauldian author-function.  As Marjorie Garber has argued regarding the great dearth of personal and biographical information we have on Shakespeare, it is possibly exactly this dearth that makes Shakespeare such a literary powerhouse: “Freed from the trammels of a knowable ‘authorial intention,’ the author paradoxically gains power rather than losing it, assuming a different kind of authority that renders him in effect his own ghost” (Shakespeare’s Ghost Writers 15).

Garber argues it is precisely Shakespeare’s ghostly nature that allows him to “possess” writers as distinct as Marx, Freud, and Derrida, whose use of his texts as examples for their theories means those theories forever thereafter exhibit the marks of a Shakespearean ghost-writing process.  But I do not think we can say the same about Horrorshow’s game: her allusiveness is never to Shakespeare-as-such, not like, for instance, the way Freud “uses” Hamlet to explain his thesis of repression.

I would like to suggest, then, that Horrorshow and Shakespeare work collaboratively.  What I mean is Shakespeare becomes not so much an author-function but an author-medium.  By “medium” here I mean something akin to what Marshall McLuhan means when he says “All media are active metaphors in their power to translate experience in new forms” (Understanding Media 85).  This is similar to the way in which Garber argues Shakespeare ghost-writes Freud, Marx, and Derrida — there are things these writers wish to articulate, and Shakespeare provides the vocabulary for doing so.

But it is always Shakespeare’s vocabulary.  The authors work to preserve a whole and bounded idea of “Shakespeare” outside their own texts.  Horrorshow’s Shakespeare, however, becomes an active but ephemeral metaphor for the experience of authorship and creation.  Is Shakespeare ghost-writing Pontefract, or is Horrorshow ghost-writing Shakespeare?

Her textual use of Shakespeare blurs the boundaries between her in 2012 and him in 1595.  His blank verse appears alongside hers; shreds and patches of his words appear in the very underlying structure of the game, rewritten in Horrorshow’s own typographical idiolect, meaning nothing in situ, hidden from the player, but serving as the connective tissue between the blocks of the story.

In the end, the game is not “based on” Richard II, nor an adaptation of it, but “inspired by” it.  Horrorshow makes use of Shakespeare as one part of the arsenal available to her as a creator and — perhaps, disclosing now that my interpretation of Pontefract is as precarious as any one might offer — to express her interests and concerns regarding games and the stories of power and responsibility they can dramatize for us.

Digital Humanities and the Digital Classroom

The following is the text of a brief talk I was invited to deliver as part of the opening graduate student roundtable at the Indiana University Interdisciplinary Graduate Conference on March 26, 2015.  The conference theme was Breaking Futures: Imaginative (Re)visions of Time, while the roundtable theme was “Digital Humanities in Practice.”  I was joined by Lydia Wilkes, Mary Borgo, Whitney Sperrazza, and Erika Jenns, whose talks provided grounding for a rich dialogue about the many overlapping “digital” futures of the humanities, both in the classroom and in research.  It was a wonderful experience; if you want to see more from the conference, trawl the hashtag #IUIC15 on Twitter to see the archive of live-tweets.


“Are you available for in-person office hours?” is a question I receive, in various forms, at least once a week.

For the past ten months or so I’ve been working with Lydia Wilkes and Justin Hodgson to build and implement an online version of English W-131, the intro to composition course most of the graduate students here in the English department teach or will teach at some point.  This semester has seen the three of us piloting the course, each personalizing it based on the framework we built collaboratively.  It’s my first time teaching online and, as such, has given me a reason to stop and reflect on what it means to practice digital humanities in the classroom.  For me, the issue of the digital humanities necessarily emerges in the space of the online humanities classroom, since teaching online raises questions not only about the technologies we use to facilitate education in face-to-face interactions, but about how those technologies do or can reconfigure that facilitation across greater spatial and temporal boundaries.

I’m not sure if before this semester I would have called myself a “digital humanist.”  Frankly, I’m still not sure that’s a label that I’d embrace.  Part of this is because – to put it in the pithy and cynical way I developed when I was an undergrad – what will happen with the digital humanities is exactly what happened to the cellular phone: just as the latter became simply a phone, so too, I think, technological and computational creep will eventually become par for the course for doing any sort of work in the humanities.

Despite my suspicion that something about this still holds true, I now recognize that my too-cool-for-this-English-major-senior-capstone bon mot enacts a form of what Mark Sample last year called “facile thinking” about the digital humanities.  Though he uses this phrase to refer to the strawman arguments of many DH alarmists and skeptics, I think it could also characterize the tacit way in which I rendered myself and the field unto the DH geist.

“[F]acile thinking strives to eliminate complexity,” Sample writes in his blog post on the subject, “both the complexity of different points of view and the complexity of inconvenient facts.”  By contrast, he says, the digital humanities and writing on them needs to evince more “difficult thinking,” a mixture of “evidentiary-based reasoning” and acknowledgment of divergent perspectives that adds up to what he calls a “rational empathy.”  In other words, by consigning myself to an inevitable digital ascent and assimilation, I primed myself to overlook the oddities and complications encountered in this transition.  For my students especially, the emotional and material stakes of education are far weightier than those of smartphones.


So, then, back to my opening, which by this point you may have forgotten: “Are you available for in-person office hours?”

I commute into Bloomington irregularly.  In this way, teaching online has been something of a relief for me, so my office hours are usually also online.  However, because occasionally I do have to be in Bloomington, and because the students in these pilot courses are all on campus, sometimes my office hours are in-person.  What I discovered, however, is that my students want to meet me in person far more frequently than, first of all, they actually can, and second of all, than I have ever experienced in my time teaching in a face-to-face classroom.

I assumed students who were okay with taking an online course that met once a week via videoconference would be okay with having office hours in a similar format.  One has to imagine, at least, that they feel comfortable enough with technology to take the plunge on the online course, anyway.  What I discovered, however, is that digital office hours are the most unpopular type of office hours I have ever had.  In fact, the only times students have met me in digital office hours are when I have explained to them that I wasn’t going to be on campus any time soon.

Indeed, another thing I have discovered is that the students in my online course are far more anxious about technology in general.  If an assignment or module posts with a typo or misdirected link, within an hour I’ll receive at least three emails – usually sounding mildly panicked – asking me for clarification and guidance.  When students take online quizzes and browser issues or an accidental page reload wipes or otherwise malforms their work, I receive lamentations explaining what happened, hoping I’ll be merciful.  The stakes in these instances are relatively small – a pietà over, at most, two or three points in a class scored out of 1000 – but the students’ frustration with the system is often palpable.  The obvious thing that has happened is that the technology has become more central in the students’ experience.  Rather than supplement my in-class lectures, the LMS is now the primary way of completing work.  When the tool fails, the student’s immediate fear is that, from my perspective as an instructor, this is also their failure.  These classroom technologies become more conspicuous as things that separate the students from the class and what I suspect they understand as the “real” me.

To provide evidence for this last assertion: the desire for in-person office hours is often framed by my students as a need to find out what “you” really want.  This is familiar rhetoric: I’ve heard it before in meatspace classes.  But I’ve heard it more frequently, and with a stronger valence of confusion, with this online course.  One student told me she wanted to know about what she called “your ideals,” and explicitly stated she felt like the online nature of the course had kept her from finding out what I wanted on our assignments.  Again, this is not a complaint unique to online coursework, but I think it’s important that in this scenario, technology can and does take the fall.


In the preface to his 1659 translation of the Czech pedagogue John Comenius’s Orbis sensualium pictus, one of the first illustrated textbooks, English humanist Charles Hoole explains how the innovation of adding pictures to the book, alongside parallel vernacular and Latin captions, will allow students to pick up Latin much more easily and quickly than ever before.  The reason for this, he argues, is that the sensual quality of the illustration and a preexisting knowledge of vernacular English allow the student to ground the Latin in a personal, experiential reality inaccessible when one is simply laying out grammatical rules.  This is incredibly important for Hoole, as he writes it is “the very Basis of our Profession, to search into the way of Childrens taking hold by little and little of what we teach them, that so we may apply our selves to their reach” (sig b1v).

What strikes me is Hoole’s commitment to the needs and limits of his students, based on a generalized sense of their day-to-day experiences.  The basis of our profession, he says, is to “apply ourselves to their reach” – to meet them halfway, and then move further along together.  I am reminded, actually, of Lisa Spiro’s argument that what defines the digital humanities is not necessarily the computational analysis of texts, but rather “collaboration, openness, and experimentation” as it is afforded by new technologies (“This Is Why We Fight”).  I am not arguing that the digital humanities will allow us to rediscover some forgotten or lost element of humanistic education.  But I would like to suggest that in his bid to defend the utility of the picture book, Hoole is engaging in precisely the “difficult thinking” Sample advocates, though his humanities are analog: he considers the perspectives and needs of his students and then does his best to search out technologies that will help him meet those needs, developing what Sample calls “rational empathy.”  Difficult thinking about DH, at least for me, has likewise foregrounded the importance of the interactions I have with my students as they are maintained and facilitated by our classroom technologies, and how this often seems to put my students at what they feel is a disadvantage.  For Hoole, studying what he calls the “representations” in the picture book is an intuitive activity, in that it is more or less the same as seeing or imagining the things themselves.  The technologies at work in my online teaching, however, seem to throw into question precisely the gaps between what my students see or read, what I write on our wiki pages, and what they hear me say in our videoconferences.


I plan on disseminating a survey to my students before the end of the semester, in which I’ll ask some particularly pointed questions about their experience in the class, and try to establish a more evidentiary basis for what is right now a hunch.  What I suspect happened is something that supports the old platitude: you don’t really know what you have until it’s gone.  That is, certainly my students had expectations for what an online course would be and how it would function.  Maybe some of them even relished the idea of never having to see me face to face.  Maybe some of them thought it would be easier than a normal course, precisely because it was technologically mediated – we must keep in mind that our students may be as prone to facile thinking about the digital as we are.  But on the other hand, I recognize that I myself am an intuitive and familiar piece of classroom technology that seems to have malfunctioned: from a student’s perspective, the online instructor is like a volume that is always checked out of the library, and can only be read in 15-page chunks on Google Books.  As I continue to the end of this semester, then, I know I must work in new ways to identify my students’ reach and apply myself to it, and to keep in mind the difficult thinking we all must do – students and instructors alike – in the weeks and years to come.

Conspiracy and ‘False Activity’ for the Gamers

I do not have the time, desire, or stomach to completely recapitulate for you the queasy mess that is the ongoing clusterfuck called “GamerGate.”  Here’s an overview.  Additionally, I will defer to two of the finest critical voices on games I’ve encountered, Liz Ryerson and Daniel Joseph, who between them explain quite well the dynamics of the whole thing.

On Twitter, Jason Hawreliak observed that while the hubbub seems to have died down significantly since Zoe Quinn laid out some harsh justice, the fact remains that many die-hards still populate the hashtag, harassing devs and writers (including, still, Quinn herself), and generally hoping to either weather the storm of their disgrace or somehow effect a resurgence in the misplaced anger that fueled this particular hate machine to begin with.  While you’re at it, read Zoe on Cracked about her experiences.

As Jason noted, one thing this means is that the conspiracies born amid the earlier stages of the debacle have become increasingly elaborate and abstruse.  This makes sense, as I say in my reply to him: conspiracy theories aren’t made to be disproved; they’re made to be revised and reincorporated into an overarching mythology of conspiracies, providing the thinker with any number of ways to “explain” particular facets of the world.

John Brindle observed how the logic of the conspiracies, the searching, sorting, and winnowing of evidence, has seemed to dovetail almost effortlessly with the logic of playing a videogame.

As I say in my response to Jason’s tweet, I tend to conceive of this conspiracy-weaving through a psychoanalytic lens, and in particular through the idea of “false activity,” which I fork from Zizek.  Conspiracies are a method of constantly delaying “action” because there is always more to the situation than at first seemed apparent: we cannot do anything yet, because we haven’t sounded the depth of our imagined rabbit hole.  And this is particularly important since, in pursuing the bugbears of conspiracy theories (unscrupulous women game developers, or fluoride in your house’s tap water) you ignore more pressing, institutional issues: the fact that mainstream ‘games journalism’ has always been figuratively in bed with AAA developers, or that your civil liberties are being daily eroded by militarized police and an oligarchic government without any help at all from mind control agents in your kitchen sink.

False activity is a necessary corollary to “interpassivity,” an idea which Zizek himself forks from philosopher Robert Pfaller.  To contrast with the more acknowledged idea of “interactivity,” interpassivity is when objects begin to do things for us, in our place, rather than at our behest (this latter condition being the ideal of ‘interactivity’).  Zizek’s go-to example is the laugh track in a sitcom: the show itself laughs at its own jokes, so we don’t have to, and thus some of the heavy burden of paying strict attention is alleviated.  We are in fact allowed to “unwind” or relax.

For Zizek, then, “false activity” is the point at which the subject (sometimes willfully) misrecognizes an interpassive relationship for an interactive one, and vigorously attempts to treat it like one, but in so doing really prevents any action from taking place:

people not only act in order to change something, they can also act in order to prevent something from happening, so that nothing will change. Therein resides the typical strategy of the obsessional neurotic: he is frantically active in order to prevent the real thing from happening.

Here we arrive at my overall point: that “GamerGaters,” or whatever we want to call them, are precisely in this position.  As a group of mostly cisgender, heteronormative, white men, they are threatened by the rise of socially conscious game developers, writers, queer folks, and women and PoC in gaming, which forces them into a position where they feel “passive” in the arena of life where they have most often felt “active” (recall, here, points made by Ryerson and Joseph).

The result is the generally false activity of #GamerGate, of spinning wheels and ginning up controversy in the hopes that, by doing all this, absolutely nothing will happen, absolutely nothing will change.

This structure of activity is, I would further allege, one derived from (or at least strongly reinforced by) videogames themselves.  As I’ve increasingly thought and argued in my work on games, they are often profoundly tedious despite being marketed as endless fun.  As some of the voices of #GamerGate cry, reviewers often score the “fun” of a game subjectively, but rather than understanding that enjoyment is obstinately subjective, the Gaters call for more objectivity, as if the marketing copy of games (endless fun for you, forever!) could in fact be true.

Here we see the interpassive face of a medium whose primary selling point is its claim to interactivity.  To choose a rather unfavorable analogy, it seems that games have partly worked by indoctrination.  “I told you I was fun,” the game says, “the commercials said I was fun! So you definitely must be having fun!” And for the thousandth time your space marine is shot in the head by a shrieking 14-year-old on the other side of a continent.

So the “gamer” response is not to call for better games (that is, to interact with the medium and its industry, as many indie developers and writers are in fact doing — interaction with others at all has fallen under suspicion of ‘corruption’ for GGers) but rather to demand the materialization of a sublime-impossible artifact of videogame advertising: not so the industry can change, but so it can finally be what they thought it was all along, a boys-only playground.  Upon fulfillment of these conditions, these people, for all their vociferous cries for action, can in fact remain the passive consumers they have always been, and always wanted to be.

“Tear it outta the sky!”: Stuplimity, Affect, and Games

In the fall of 2008 I was a sophomore in college.  I had a friend who reliably purchased hot new AAA videogames, and it was our custom after dinner to retire to his room and play something or other for a few hours, rotating play responsibilities while the rest of us chatted, made remarks about the game, discussed classes and campus life, and so on.

This friend purchased Star Wars: The Force Unleashed.  At a certain point in the game, you are tasked with pulling a Star Destroyer out of the sky with the Force.  As with so many other parts of the game, it was a lengthy quick-time event that made an elaborately choreographed scene marginally interactable.  Here’s a video:

What was important about my friend’s game was that it glitched.  At the final stage of the event (about 3:30 in the video) the player avatar locked into place, the icons indicating the player needed to use the analog sticks appeared, and a crackling disembodied voice commanded him to “Pull it outta the sky!”

And then nothing else happened for probably more than an hour.

The game didn’t freeze, the music didn’t stop, my friend could still move the analog sticks and influence the movement of things on screen, and every few minutes the game would remind him, as if he had somehow wandered off or forgotten, to “Pull it outta the sky!”

My friend, a tenacious game-player if there ever was one, kept at it.  We watched as he became increasingly agitated, leaving him to stew in silence as our conversation drifted away from him and the television in the center of the dorm room.  On the screen was something that I imagine we might only ever see again if Samuel Beckett somehow got a job writing one of the new Star Wars films: a snarling Jedi caught in cinematic stasis, a waggling Star Destroyer suspended indefinitely in front of him while the brass blared heroically all around: —We must pull it out of the sky!  —Oh, but we couldn’t.  —But if we did?  —Could we?

I don’t know how long it took us to suggest to our friend that maybe it was a glitch, and to reload from a prior save, but this event became sufficiently notorious in our social group as to constitute its own in-joke, a tendency to shout a misremembered “Tear it outta the sky!” at one another during moments when we were feeling frustrated, irritated, or overwhelmed.

This has all been a roundabout introduction to the issue of affect and games, and in particular the ways in which videogames often seem to confound the epic and exhilarating with the banal and irritating.  This precise confusion has been described by Sianne Ngai in her book Ugly Feelings, under the neologism of “the stuplime” — a bizarre crossroads of the unpleasant, thick, and “stupid” with the vast and terrifying wonder of the Kantian sublime, where the human mind is supposed to successfully recognize its own inability to grasp the totality of, say, a mountain or a storm, and then takes comfort in its own self-conscious boundedness.

Contrasted to that, as Ngai explains it, the stuplime is

…a bringing together of what “dulls” and what “irritates” or agitates; of sharp, sudden excitation and prolonged desensitization, exhaustion, or fatigue. While the Kantian sublime stages a competition between opposing affects, in which one eventually supersedes and replaces the other … stuplimity is a tension that holds opposing affects together. … Stuplimity reveals the limits of our ability to comprehend a vastly extended form as a totality … yet not through an encounter with the infinite but with finite bits and scraps of material in repetition. (271)

I obviously would like to suggest that stuplimity has something to offer the study of videogames.  I am not the first person to do so; in an insightful article on “the digital sublime,” Eugénie Shinkle invokes Ngai to describe the affect of gameplay as one of potential  stuplimity, of boring and repetitive tasks punctuated by moments of heightened attention, energy, and exhilaration: “…this suggests that we situate videogames in the context of the general waning of affect that is said to characterize postmodern experience” (6).

It is Ngai’s contention that stuplimity is a specific and symptomatic affect of our contemporary late-capitalist world, which is why it warrants the neologism.  Indeed, if she is correct in this, and if I am correct in my hunch that stuplimity describes game-experiences with an uncanny accuracy, then the fit might be because the videogame is the late-capitalist aesthetic object par excellence.

Shinkle’s search is, as I indicated, for the elusive “digital sublime,” and in the end she asserts that stuplimity will not get us there, because in games “the two affects [ie, astonishment and boredom] are not collapsed into one another but continue to exist, in tension, as discrete categories” (10-11).

Shinkle contrasts the stuplime with Csikszentmihalyi’s idea of “flow,” which accounts for the large expanses of repetitive or abstruse gameplay we may endure without actually becoming irritated.  A good example here might be how many people, at a certain time in their lives (particularly the 90s) were able to buckle down and grind through thousands of random encounters in Japanese RPGs with nary a grumble.

But Shinkle does not simply argue for flow over stuplimity, pointing out that both of them rely on an implicit notion that “the technology itself – software and interface – disappears into functionality” (8).  She then turns to the issue of a game’s “failure event,” which breaks flow.  This, then, is why games (for Shinkle) do not “collapse” the affects of astonishment and boredom, because a game that is “working” will result in an experience of flow.  What Shinkle calls the digital sublime, then, is always ultimately an accident, a moment when the game as artifact retreats from the player indefinitely:

In failure events, both the game and the technologically-enabled posthuman self cease to exist as such. Instead, the subject is confronted with a mute technological artifact – a featureless surface that bears no decipherable relationship to the unimaginably complex workings that it conceals. Contemporary digital technology lacks the capacity for representation that allowed nineteenth-century artifacts to function as sources of awe in and of themselves. As objects, contemporary digital technologies are destined for obsolescence, their production driven less by a wish to celebrate human ingenuity than by the late capitalist imperatives of novelty and innovation. (9-10)

Shinkle’s digital sublime relies not on the hopeless muddling of boredom and astonishment, but rather irruptive moments when digital artifacts at first cast us off and, contrasted with Kantian natural phenomena like storms and mountains, we recognize them as “banal” consumer products, things made for us but which exist in some inscrutable and frankly-not-very-exciting way beyond us.  (In this sense I think Shinkle’s idea resonates to some extent with Tim Morton’s idea of the hyperobject, especially as it describes the styrofoam trash that stuffs our landfills and will outlast us all.)  As Shinkle summarizes, “In the contemporary digital sublime, the experience of the limitless potential of human ingenuity is lodged within artifacts whose material existence is fleeting and insignificant” (11).

And yet I think, in the search for the sublime, Shinkle brushes past far too quickly the potential insights of the stuplime gaming experience.  I find myself returning to the moment when my friend could not tear the Star Destroyer from the sky.  On the one hand this is precisely the failure event Shinkle discusses, the moment when the game seemed to clam up and resist my friend’s attempts to act on it or with it, and we recognize it as banal, overhyped, mass-produced Star Wars merchandise, indistinguishable from any copy in any other Xbox anywhere else.

But on the other hand, the object was not at all “mute” — the icons were there telling my friend to use the analog sticks, the game itself kept urging him to “pull it out of the sky,” the Star Destroyer bobbed like a cork in the sea, and yet despite all of this happening, nothing actually happened.

Or rather, nothing happened in terms of progression through the game.  Outside the game, my friend became increasingly and obviously angry; my other friends and I became increasingly bored and increasingly uncomfortable about our friend playing the game; in the end the experience was so affectively strong that it left its mark on our group dialect, and many years later, brought me to write this blog post.

Not only that, but games can be affectively deadening, irritating, and uncomfortable even when they work correctly.  This seems to have become more pronounced lately in games writing, especially regarding AAA titles.  Consider Leigh Alexander’s excellent critique of BioShock Infinite, which describes her dismay at encountering a gamespace that is technically excellent, artistically ambitious, and yet at the same time unsatisfying and hollow:

I’m in the land of the Vox. Some shantytown. A man stands on a crate, preaching about the misfortunes of the working class. I want to snap a picture of the juxtaposition between the way I always want to listen to him and the way I am always waving a gun in his face, and so I put the controller down and held my phone up to the screen. As I am picking the controller back up, my finger slips, and I shoot him by accident.

What Alexander describes here is not a failure state.  It is the way the game is supposed to run — you are supposed to be able to shoot that street preacher.  Someone somewhere in the game’s development thought, “The player may want to shoot people on the street.  We won’t force it, but we’ll allow it to be a possibility.”  But for Alexander — who by this point is quite disenchanted with the game, anyway — it is merely one more absurd setpiece of murder in her episodic journey through a game consisting almost entirely of instances where you brutally murder strangers on the street.

We might also look at Paul Tassi’s review of Call of Duty: Ghosts.  He calls the game “modern military shooter fatigue incarnate,” and the flat affect of his opening line succinctly encapsulates the stuplimity of videogames: “I’m in space. I’m shooting a machine gun, in space.”

His follow-up: “I don’t know what else is left to do at this point.”

One might fairly object that these are unfavorable reviews.  They are, of course, rhetorically positioned to figure their objects as stuplime; in a “good” game, or at least a game the reviewer likes, there is less attention devoted to issues like this because the player either buys into the game’s absurdity in a sincere way or the gameplay (which, almost definitionally, will be somehow repetitive) produces the “flow” necessary for “proper” enjoyment.

To further crowd an already populated essay, let me point out the phenomenon of the cynical video review — I think most specifically of Ben “Yahtzee” Croshaw’s Zero Punctuation, or James Rolfe’s Angry Video Game Nerd.  Croshaw’s persona as a reviewer is predicated on him being sardonically unimpressed with most games he plays, delivering his comments in a brisk but generally detached tone.  The Angry Video Game Nerd’s format, while focusing on vintage or classic games, relies on being similarly unimpressed, though Rolfe’s delivery is considerably less manic.

In both cases the reviewers repeatedly present their positions of darkly humorous cynicism as being a viable viewpoint to take in regard to videogames.  They form a counterpoint to the “hype machine” of the enthusiast press and, I suspect especially in Croshaw’s case, earn a sense of “authenticity” in their opinions among viewers.  Both seem to be chronically on the verge of echoing Tassi’s weary observation: “I don’t know what else is left to do at this point.”

What is left to do?  The answer is actually quite simple: review another game!

What if the tired cynicism of the video reviewers, like the tone of floating distress that invades written reviews of bad games, illuminates something fundamental about the aesthetic experience of videogames?

Ngai again:

Inducing a series of fatigues or minor exhaustions, rather than a single, major blow to the imagination, stuplimity paradoxically forces the reader to go on in spite of its equal enticement to readers [to] give up … pushing us to reformulate new tactics for reading. (272)

Is this not how games function?  Are not all games just a little bit boring, chains of “minor exhaustions,” challenges and puzzles and unfamiliar mechanics, requiring us to chip and click and press and shoot our way forward again and again and again?  In reviews where games are figured as boring or bad, are we not simply seeing highlighted and disparaged the very mechanics that, in another configuration in another game, might become a part of the “flow” of gameplay, perhaps unpleasant or imperfect but “natural”-feeling enough to keep us from giving up?  Might not any “bad” game mechanic, if pulled into the proper assemblage, or experienced by a certain player, come off as rather tolerable, if not outright “good”?  In short, what if all games are basically stuplime?

I have already brought up the example of the JRPG as a game whose tedium might be subsumed by the trance-like state of flow.  I was one of those players who experienced their share of very grindy JRPGs in the 90s, and I hold fond memories of all of them — but I question whether this was the result of flow, or rather the result of a selective memory and a selective fondness.

I certainly don’t think I have the patience to play through Final Fantasy VIII again, despite the fact that it’s one of my favorite games.  I remember, in fact, being bored and irritated by extensive bouts of grinding in it and just about all the JRPGs I played.  I continued with these games so long as it seemed possible to make the next non-grindy section of the game more palatable; I continued to play JRPGs so long as I had a taste for melodramatic stories about teens with nebulously environmentalist or anti-fascist messages looking sad and/or beautiful while staving off cosmic catastrophe.

This is all to say, pace Shinkle, that astonishment and boredom do in fact collapse into each other during gameplay.  There are certainly instances where one is winnowed from the other: moments when boredom overcomes all and the player quits, or moments of heartfelt wonder and astonishment.  But I would argue that, for the most part, games are experienced precisely in the middle of these two extremes.  Games are filled with “gray” time — unremarkable time, filler, which we may or may not recognize as such and may or may not care about, given a variety of external factors.  The game does not cede to a pure functionality, but rather the player’s affect and attention exist in tension with what the game asks or requires the player to do.  (Consider the varied responses to David O’Reilly’s Mountain [scroll down to the website logos for reviews] for some rather lucid expressions of these tensions.)

My friend did not restart Star Wars: The Force Unleashed immediately because he genuinely could not tell he had entered a failure state.  It bore none of the more egregious marks of a glitched game, and it had occurred in the already vastly narrowed playspace of a quick-time event, without any of the hallmarks of that event being failed.  We waited so long to restart because we could not “comprehend a vastly extended form as a totality” — ie, because we were habituated to games as complex systems designed to present challenges, to urge us to “stop” or “give up the fight,” while hiding the fact that progress was indeed possible, implicitly demanding we try anyway, that we attempt a new tactic, keep attempting the QTE, or (eventually) reload a save.

Is there, then, a fundamental way in which games teach their players to embrace a certain type of stuplimity: “Do this.  Keep doing this.  Now do that. Oh, you messed up — try again.  Yes, again.  Do it again.  You may not like it, but the cool stuff’s ahead — I promise”?  And perhaps the player sees it — or thinks she sees it: that cool stuff, that Thing, the payoff, the promise of affective astonishment hovering just ahead, bobbing helplessly in the air, waiting to be pulled down to her with just the right combination of button presses.

Blogging the Quals: Oops

Oops! I guess I’m still blogging the quals, even though I forgot to blog them at all for the past several weeks!  I was too absorbed in reading, and too stressed out by my upcoming move.  But in good news: I finished reading last week!  Woo!

I kept all my notes in a Twine document.  Here’s what it looks like:


WOW. Okay.

Right now I’m busy drafting my exam questions, and am scheduled to take the exam itself on September 24th.  Excellent.  I’ll leave you now with another picture, a long quote from a source, and a brief reflection.


In the introduction to Subject and Object in Renaissance Culture, Margreta de Grazia, Maureen Quilligan, and Peter Stallybrass note that in vanitas paintings, as in most of the vanitas tradition, objects are collected, lumped, and represented precisely to underscore their transience in relation to the absent subject:

By their title (vanitas vanitatum, Eccles. I.2) and by the symbolic encoding of things represented (signs of transience and mortality), they exhort subjects to renounce objects.  But can such a sequestering hold?  We have reproduced N.L. Peschier’s unusual vanitas painting [above] precisely because the subject finds its way back into the picture, at the top of the pile of objects, in the upper-right hand corner, head tilted like the skull beneath it.  Even in more typical versions, the omnipresent skull itself serves as a reminder of the common materiality of subjects and objects. (1)

All seems well and good here.  The authors  point out the ironic effect of paintings like these: that they themselves incite what they disavow, by becoming “collectibles” for  educated elites, or later on, museums, thus further suggesting an inextricability of subject and object in particular as an effect of the artistic process.  In fact, we might be tempted to say the subject is not even “absent” since, as any good Foucauldian reading tells us, the subject is constructed virtually by the painting, a medium for the gaze that gives the object its meaning.  Hold that thought, though.

I am curious about the claim that “the subject finds its way back into the picture.”  In the hard copy of the book I read, the painting was reproduced in black and white, and hence harder to suss out, but the image I inserted above makes it abundantly clear that the “subject” that seems to appear in the upper-right corner is not a human subject at all, but a statue: another piece of artwork, bronze or perhaps terra cotta, whose pose mimics the stony human skull below it.  Directly horizontal to this statue, we discover another “human” reappearance, a sketch posted on the wall (perhaps a Peschier self-portrait?).  Neither of these figures meets our gaze; they turn away, elsewhere, to spaces outside the frame: to places we cannot ever, will not ever see.

So I will go one step further than simply saying this painting becomes what it renounces: I want to say that it embraces it.  It embraces its own objecthood.  The things in the painting (subjects, in one sense of the word, a sense that cannily denies the necessity of the human) exist beyond us; the painting itself will exist beyond its painter, its collector, the school group that sees it in the museum.  The viewer virtually constructed by this painting is one who meets no sympathetic eye.  Rather than urging us to disavow art, Peschier’s painting suggests the ways in which art disavows us.


Blogging the quals


You may remember that at some point I mentioned I am in grad school.  Well, I am now in a position where I am preparing to take my PhD qualifying exams which, in case you’re not already an English grad student or PhD yourself, means I’m going to spend this summer reading something like 150 books.

These will be diverse, though of course largely oriented around my period (Shakespeare and early modern drama) and my theoretical concerns (performance, Renaissance humanism, intellectual history, and contemporary “posthumanism” as it might be broadly construed).  I don’t have any particular interest in saying I’m one type of scholar or another, but I am highly inclined toward what medievalist Eileen A. Joy has called “weird reading.”  Let’s take a look:

Any given moment in a literary work (all the way down to specific words and even parts of words, and all the way up to the work as a whole), like any object or thing, is “fatally torn” between its deeper reality and its “accidents, relations, and qualities: a set of tensions that makes everything in the universe possible, including space and time,” and literary criticism might re-purpose itself as the mapping of these (often in- and non-human) tensions and rifts, as well as of the excess of meanings that might pour out of these crevasses, or wormholes. We’ll call this reading for the weird, which is fitting when you consider that the word ‘weird’ (traditionally linked to ‘wyrd,’ or ‘fate’) is related to the Old English weorðan [‘to become’], rooted in Indo-European *wer– [‘to turn, bend’]. This will entail being open to incoherence as well, as one possible route toward a non-routinized un-disciplinarity that privileges unknowing over mastery of knowledge. The idea here would be to unground texts from their conventional, human-centered contexts, just as we would unground ourselves, getting lost in order to flee what is (at times) the deadening status quo of literary-historical studies at present, aiming for the carnivalesque over the accounting office.

I agree with the general sentiment here.  Joy says in a footnote that she does not mean to jettison historicist criticism entirely, and indeed, I find my current work an attempt to revive some of the stranger, less disciplined qualities of history-making that the Foucauldian turn of New Historicism deanimated.

In order to pass the time this fall by doing something other than simply reading and worrying about my exams, I am going to do my best to post weekly or bi-weekly updates here listing what I’ve read since my last post and, perhaps, some scattered thoughts, impressions, or quotes.  (In this sense I’m taking my cues from when I was in a similar situation as a senior undergrad.)

So let this be the inaugural post in my “Blogging the quals” series.  I’ll list below the eclectic mix of what I’ve read so far this semester, to give you an idea of what’s to come in full force later on.

Lyric poetry (selections)

Lanyer, Aemilia (Salve Deus Rex Judaeorum)
Sidney, Philip (Astrophil and Stella)
Spenser, Edmund (The Shepheardes Calender, Amoretti, Epithalamion)
Wroth, Mary (Pamphilia to Amphilanthus)
Wyatt, Thomas (Sonnets)

Drama

Ford, John – ‘Tis Pity She’s a Whore
Shakespeare, William – Antony and Cleopatra
— Love’s Labour’s Lost
Webster, John – The Duchess of Malfi

Period/Field Criticism

Charnes, Linda – Notorious Identity: Materializing the Subject in Shakespeare
MacKay, Ellen – Persecution, Plague, and Fire: Fugitive Histories of the Stage in Early Modern England

Theory

Bogost, Ian – Alien Phenomenology, or What It’s Like to Be a Thing
Latour, Bruno – We Have Never Been Modern
Zizek, Slavoj – The Sublime Object of Ideology

Spelunky, Replayability, and Performance on FPS

A few months back I wrote a speculative post on how performance theory can help us understand the idea of “replay value” in videogames.  Shortly thereafter, Steve Wilcox, an editor at the scholarly games-writing site First Person Scholar, pinged me to ask if I’d like to work on something of a similar theme for the site.  I did, and so have turned out a little piece on those themes, in the context of my play of the game Spelunky.  You can go read it now!

My thanks for proof-reading the article go out to my pals Spam, Victor, Dan, and Alex Pieschel.  Speaking of Alex, he and Zolani Stewart recently started a little joint called Arcade Review, which provides some cool videogames crit that you might wanna check out.

I’d also like to thank Dr. Gerald Voorhees for his reply to my piece, which provides me with some avenues of research that would otherwise be unknown to me (coming at games studies, as I do, rather catty-corner).

As luck would have it, Brendan Keogh published a critique of videogames criticism a month or two back that I in large part agree with, and which makes, in a different way, many of the points my FPS piece led me toward, so that can also go into your recommended reading.  Also keep in mind the comments by Ian Bogost and Daniel Joseph, as well as this thorough reflection and roundup by Zoya Street.  What Keogh is gesturing toward seems to me very similar to what Peggy Phelan advocates in her theory: an idea of “performative writing” that attempts to capture through “thick description” (as Keogh at one point calls it) the embodied experience of performance/gameplay.

Relatedly, in a few weeks (?) I’ll hopefully have another post here on the replayability issue, because there was an entire half of the FPS article that I imagined but, due to word limits and being sensible, did not write.  There’s nothing fundamentally new; rather, I want to focus more on what it means to “performatively write” about a videogame (for my money this piece by Leigh Alexander on Bioshock Infinite is a great example, in case you’re tired of me).  But I also want to demonstrate a different kind of re/play experience, one I find myself repeating for reasons similar to yet ultimately distinct from those in my account of Spelunky.  I tend to work and think in terms of illustrative contrast, so this is helpful for me.  But it will also demonstrate, I think, replayability in its more bizarre, personal, idiosyncratic, irrational mode.  This relates to something Voorhees brings up, a possible desire to escape the buying-the-next-game cycle, though as he concedes, this isn’t something we can necessarily (so to speak) bank on.

Thanks for reading!