The Comedy of Computation
Or, How I Learned to Stop Worrying and Love Obsolescence
Benjamin Mangrum

THE ONE AT THE BEGINNING

DURING APPLE’S FIRST PUBLIC DEMONSTRATION of the Macintosh in January 1984, Steve Jobs lifted the computer out of a bag to exhibit its lightness and portability. He demonstrated the machine’s user interface and some of its visual capabilities and then said, “We’ve done a lot of talking about Macintosh recently. But today, for the first time ever, I’d like to let Macintosh speak for itself.” In a synthetic voice, the computer spoke: “Hello, I’m Macintosh. It sure is great to get out of that bag.” The audience laughed, and then the computer took a dig at one of Apple’s chief rivals: “I’d like to share with you a maxim I thought of the first time I met an IBM mainframe. NEVER TRUST A COMPUTER YOU CAN’T LIFT.”

Apple had incorporated speech-synthesis software known as MacinTalk that allowed the computer to voice a script that also appeared on its monitor. The computer’s “first time ever” spoken communication was a series of dry jokes. These jokes, while prosaic and not very funny, surprised casual observers in 1984. Few expected the machine to speak, much less to joke at the expense of a competitor. The episode illustrates how corporations have used comedy throughout the commercial history of computing to present the computer as a sociable technology. Humor enabled the audience to view the Macintosh as an ordinary object, not some inhuman or overly complicated tool. It was as if Apple’s marketing team thought the computer first needed to become comic before consumers would regard it as personal.1

The Macintosh’s jokes were a surprise to those who thought that comedy and computation had nothing to do with one another. In some ways, this was not an unreasonable assumption. Comedy, after all, names a spontaneous reaction like laughter but also dramatic forms like romantic comedies and sitcoms. Comedy has other meanings, too: madcap performances, absurdist humor, and banal nonsense. It is often the province of jokers and stooges. The term computation, in contrast, conjures up mathematics, logic, and cold rationality. It trades in binaries and inflexible processes. Especially when automated through a computer—my focus in this book—computation may seem to be the antithesis of comedy.

Yet this book shows how comedy and computation in fact have a long-standing, if somewhat fractious, relationship. Like Felix Ungar and Oscar Madison in Neil Simon’s The Odd Couple (1965), comedy and computation are an incongruous pairing—a coming-together of opposites that iterates across the social history of technology. This coming-together often responds to widely held perceptions about the hazards of computing. For example, Apple’s 1984 marketing campaign for the Macintosh was tailored to its coincidence with the dystopian year depicted in George Orwell’s Nineteen Eighty-Four (1949). Orwell’s novel imagines a totalitarian regime of surveillance through advanced technology. Popular films like Stanley Kubrick’s 2001: A Space Odyssey (1968) and George Lucas’s THX 1138 (1971) had similarly depicted computers as menacing machines that sacrifice human well-being while coldly executing their programming. Rather than dismissing these associations, the marketing around the first Macintosh took advantage of them. The company’s now-famous Super Bowl commercial invoked Orwell’s novel directly, as well as other popular dystopian imagery, but then it insisted that Apple’s technology would herald a different kind of future, one characterized by freedom, creativity, and decentralization.

The Macintosh’s dry humor during its first public demonstration tapped into this countercultural vision of spontaneity and social well-being. Rather than portraying the machine as a scientific instrument or a serious business technology, the jokes suggested that nonspecialists could understand and work with the computer. Comedy became a sign of the machine’s ordinariness.2 By performing its facility with comedy, the Macintosh appeared to be compatible with the everyday wants and needs of ordinary people. Comedy thus shifted the cultural meaning of the machine, transforming it from a tool of technocratic elites into an individual user’s amusing conversation partner.

The transformation of the computer through comedy is a recurring theme in the pages that follow. I will show that one of the most common mechanisms for this transformation is the imagery of coupling—that is, forming some sort of attachment with or through computers. Coupling is, of course, one of the primary images of social harmony in romantic comedies, but many other kinds of comedy also trade on the symbolism of coupling: for example, a union with the divine or the divine’s adjunct is the principal image in Christian comedies like Dante’s Commedia (c. 1309–20). We can also think about television strategies like laugh tracks or an actor’s significant glance at the camera as a kind of coupling, a joining together with the audience through shared norms that cue a comic response. Coupling, in these instances, is figuratively and affectively social.3

One of my aims is to analyze the kinds of sociality imagined when writers, filmmakers, software engineers, and technologists couple comedy with computation. When these producers of culture and technology bring comedy together with computing, they present images of social life in an age of advanced technology. As I have already suggested, these images often involve the transformation of the computer through comic plot structures, conventions, attitudes, and modes of thinking. Yet computational technologies have produced wide-ranging transformations of their own—a phenomenon that I describe as becoming computational—and one of this book’s central claims is that comedy has provided a generic form for the experience of these sweeping transformations.

Let me offer another example from a 1958 article in Forbes magazine examining the uneven and often-awkward assimilation of computing machines within corporate life in the United States. The Forbes reporter describes this assimilation by drawing an analogy to a then-popular romantic comedy: “In his Broadway comedy success, The Desk Set, playwright William Marchant spent 130 minutes poking fun at the mutual problems that come up when electronic brains and human beings try to accommodate one another. Judging from Marchant’s situations, neither thinking humans nor thinking machines are going to have a very easy time of it.”4 This reference to The Desk Set illustrates the common midcentury sentiment that people and computers are somehow mismatched; that their union within corporate life is an incompatible coupling; and that, as a result, each will need to “accommodate” the other. Rather than depicting a “contest” between men and women, as Susanne Langer describes the “rhythm” of comedy, or between older and younger generations, as Northrop Frye contends, the Forbes reporter imagines a screwball comedy between “thinking humans” and “thinking machines.”5 The members of this odd couple need to change for the sake of one another. Comedy thus provides a culturally legible form for making sense of the conflicted experience of computing technology.

If Apple’s use of comedy to make the computer seem personable illustrates one rationale for the coming-together of comedy and computation, the Forbes article attests to a different set of pressures forging this union. Starting with the Census Bureau’s use of a computer called UNIVAC in 1951 and accelerating with the delivery of IBM’s 700 series of computers in 1955, many members of the professional-managerial class found themselves in an ongoing relationship with computational technology.6 In the first decades of the public life of the computer, accountants worried they would lose their jobs; researchers feared they would be replaced by machines; and managers wondered if computers would supplant their expertise in coordinating business functions. The “computer revolution” often felt like a coup against white-collar work.7 Yet the Forbes article illustrates how, amid this epoch of social and professional crisis, various forms of comedy regularly provided resources for imagining how humans and computers might “accommodate one another.”

While Apple’s marketing of its first Macintosh flirts with a kind of techno-utopianism in which ordinary people find that advanced technology is compatible with their interests, the Forbes article presents the relationship between people and computing in the image of comedic conflict. As the reporter puts it, neither “thinking humans nor thinking machines are going to have a very easy time of it.” The journalist’s most immediate point is that labor strife and manufacturing costs might prevent corporations from assimilating computers more widely. But the journalist is also making a lighthearted joke that people, after coming into proximity to computers, might wish to decouple their professional lives from the technology. The article’s understanding of comedy allows for the possibilities of both attachment and negation, a final union or a messy breakup.

The possibility of decoupling from technology can, of course, lead in noncomic directions. In fact, many dystopian narratives take oppositional postures toward computing to signify their criticisms of a society oriented around hyperefficiency and advanced technology. This is precisely the sentiment in a little-known short story by Henry Slesar titled “Examination Day,” first published in Playboy in 1958. Slesar’s story imagines a future in which every twelve-year-old must undergo an IQ test by a computer. Children with high IQs are deemed a threat to the state and eliminated. Large computing systems support the state by overseeing a rational but oppressive social order, which might be challenged if human intelligence were allowed to thrive among the masses. The story thus imagines the dystopian future that could result if the United States were to follow its postwar path of coupling an administrative state with the inhuman logic of advanced technology.

Slesar’s story focuses on the examination of a twelve-year-old boy named Dickie. Dickie’s parents explain to him that, on his examination day, he will drink a peppermint-flavored chemical that ensures he cannot lie. When Dickie arrives for the exam, a “multidialed computing machine” poses questions to the boy, and then the narrative abruptly—almost incidentally—notes the boy’s execution after it is determined that he has a high IQ. Dickie’s parents receive a call from the Government Educational Service notifying them that their son’s “intelligence quotient has exceeded the government regulation.” The Educational Service representative then asks the parents to specify whether they “wish his body interred by the government or . . . a private burial place.” If they choose the former, the representative explains, the “fee for government burial is ten dollars.”8

Slesar’s story takes the child as a kind of moral norm violated by the computer’s cold rationality. When Dickie sits before the computer, “lights appeared on the machine, and a mechanism whirred. A voice said: ‘Complete this sequence. One, four, seven, ten . . . .’”9 In contrast to the curiosity of the child, the computer only poses standardized questions. It is only interested in what, not why. The machine appears incongruous but in a noncomic way: the whirring of the mechanism seems remote, as though occurring somewhere deep in the computer, while the machine’s surface is depicted only through inscrutable lights and a simulated voice. This depiction of the computer contributes to the story’s broader suggestion that automation is the twin of authoritarian politics.

“Examination Day” exemplifies what I take to be one of the two prevailing ways scholars have tended to think about the role of the computer in postwar American culture: the technology serves as a symbol of tyranny and conformity. (This is, of course, the inverse of the second common view, which is that the computer is a tool of productivity and futuristic prosperity.) Slesar’s story illustrates how the former view regularly depends on the figurative status of the child—a symbolic dynamic that appears in other works from the period, such as Kurt Vonnegut’s short story “Harrison Bergeron” (1961) and films like Herman Hoffman’s The Invisible Boy (1957) and Robert Butler’s The Computer Wore Tennis Shoes (1969). Hoffman’s film imagines a supercomputer that manipulates its creator, Dr. Tom Merrinoe (Philip Abbott), by capturing and then threatening the life of his son, Timmie (Richard Eyer). Like Slesar’s story, Hoffman’s film presents the child as an “obligatory token of futurity” in postwar culture.10 The child assures readers and audiences that a hopeful future may be achieved, at least insofar as threats to that future are avoided or suppressed.11 Lee Edelman argues that the future-oriented symbol of the child often has a kind of cultural double in the postwar era: the “future-negating queer.”12 In Alfred Hitchcock’s North by Northwest (1959), for instance, the “tellingly fashion-conscious” henchman Leonard (Martin Landau) poses a threat to “heterosexual love” and the starring “reproductive Couple.”13 According to Edelman, queer figures stand in opposition to an interconnected series of normative social images: the child, the couple, the future, and human compassion more generally.

It is striking how often the computer serves in a role like Leonard’s within the science fiction, suspense, and dystopian films of the twentieth century. HAL in Kubrick’s 2001: A Space Odyssey is an obvious example, with its uncanny voice and decision to kill astronauts for the sake of its programming.14 Slavoj Žižek offers a philosophical version of this image when he describes “the feeling of something unnatural” when seeing children “talking with a computer and obsessed with the game, oblivious of everything around them.”15 Žižek imagines the computer has somehow corrupted the child. Such a perspective presents the computer as both abnormal and antisocial—a perversion of the natural order of things.

My point is that even as queerness often served in the twentieth century as a marker of the antisocial, many contemporaneous representations of computing drew on this and other models of antisociality to depict how the computer might compromise the flourishing of society. Oftentimes, the antisociality attributed to computers becomes uncanny, as in Slesar’s story, but the antisociality assigned to the computer just as often generates various kinds of comedy throughout the twentieth and twenty-first centuries. For instance, in John Hersey’s satirical novel The Child Buyer (1960), a representative of the fictional United Lymphomilloid Corporation attempts to transform a highly intelligent child named Barry Rudd into a “calculating machine.”16 The company hopes to program Barry and other computer-children to solve problems that would allow humanity “to leave the earth,” a research program enigmatically referred to as the “Mystery” (210). Most of the corporation’s plans are vague and nonsensical, yet many adults embrace these plans because of their equally vague and nonsensical devotion to being “pro-business.”

Wissey Jones, the corporation’s representative, explains that the children undergo several phases of conditioning before they become like computing machines. After having their memories wiped and being entirely secluded for weeks, the children are “fed an enormous amount of data that will be needed in finding episodic solutions to certain problems in connection with the Mystery” (207). The children then have “major surgery, which consists of ‘tying off’ all five senses” (208). The “specimens” become incapable of interacting with anyone or anything except data sets. In effect, they are transformed into powerful but hyperfocused thinking machines.

Hersey’s novel uses this ridiculous scenario to satirize postwar American politics, its obsession with economic utility, and the nation’s attitudes toward education. Many of the characters represent absurd versions of these postwar viewpoints through what one reviewer dismissed as “low comedy relief.”17 For example, Jones convinces a group of senators convened to adjudicate Barry’s fate that the transformation of children into computing machines is a “thoroughly patriotic scheme” (35). A senator named Skypack gives voice to some of the most farcical ideas in the novel, particularly when objecting to Barry’s reluctance to be transformed into a machine. Skypack views this reluctance as a form of antisocial behavior, prompting him to remark, “I think he’s a silly, conceited boy. Probably going to be a homo” (249). For Skypack, the refusal to submit willingly to the national interest signifies a form of queerness.

The senator’s anxiety about Barry being “queer” mirrors the novel’s angst about a future in which corporations will “eliminate all conflict from the inner lives of the purchased specimens . . . to ensure their utilization of their innate equipment at maximum efficiency” (107, 204). This strange and unwieldy phrasing exhibits how the corporation’s interests are not in fact aligned with society’s. The novel’s satire invokes the antisocial connotations of queerness only to transfer them to the United Lymphomilloid Corporation’s scheme of converting children into computers. In other words, the senator’s angst about a queer boy doubles as the novel’s angst about an uncanny computational future in which education becomes only another form of data processing. This satire trades on homophobic ideas about the violation of children, displacing that threat from homosexual men onto a corporation that reveres the productivity of computing machines.

Hersey’s comic novel shares with Slesar’s “Examination Day” a sense of the computer as the technology of a future in which human experience has been debased and individuality has been redescribed as socially threatening. In both works of fiction, this process occurs as a simple inversion: the rise of the computer corresponds to something like the loss of the child, as though a society that wants “reliable and matter-of-fact calculating machines” must trade away its Barrys and Dickies as part of its bargain with advanced technology (Hersey 207). These and many other works of postwar American culture worry that a computerized future may enjoy greater productivity but will squander its innocence and curiosity. While children represent an embattled human future, computing machines serve as proxies for a threatening, uncanny dystopia of instrumental thinking. This is one form of a recurring dynamic in which anxieties about the obsolescence of human creativity and professional judgment often figure the computer as a symbol for an antisocial future. Many comic depictions of this future imagine that the rationality of the machine bends first toward tyranny but finally breaks down into absurdity. Others imagine comedy, especially satire, as a kind of humanistic capability that resists the tyranny of a computational future.

This second possibility appears in Greg Benford’s short story “The Scarred Man” (1970). In Benford’s story, multinational corporations have automated the world economy through a vast network of computing machines. The result of the global adoption of the computer is that “three quarters of the [world’s] population” have become unemployed. A character named Nigel recounts how this process created a “white collar squeeze”: “Machines could do all the simple motor function jobs and then they started making simple executive decisions, like arranging routing schedules and production plans and handling most of the complaints with automatic problem-solving circuits. That didn’t leave any room for the ordinary pencil pusher, and they started to wind up in the unemployment lines.”18 Benford’s story imagines that the automation of labor has cascading effects throughout the global economy.19 The characters view the tipping point of this disaster as the moment when computing machines, not white-collar workers, became responsible for “executive decisions.”

Nigel explains to the story’s narrator that the now-unemployed professional class spends its time conspiring against computers. As he says about one group of managers, “Everybody likes to make fun of computers, you know, and they were telling jokes about them, figuring up schemes to make them break down and all that.” The narrator remarks that the reason managers take computers as comic objects is that “everyone is afraid of them.” “Yes,” Nigel responds, “I suppose that’s it. Fear.”20 Many of the story’s characters respond to the threat of obsolescence by alternating between gallows humor and strategic mockery. This turn toward comic derision expresses a professional anxiety: it arises from what Barbara Ehrenreich memorably calls the “fear of falling,” the threat of downward mobility caused by the loss of class status. Ehrenreich argues that “professional and managerial people” specifically fear the obsolescence of their knowledge because their “livelihoods depend on some combination of intellect and drive.”21

I will track the many ways that comedy has offered resources for managing this fear of falling, particularly when computers imperil the type of labor called “knowledge work.”22 This use of the comic harkens back to claims made by eighteenth-century philosophers like Francis Hutcheson and Immanuel Kant, who viewed “aleatory wit and linguistic invention [as] culturally privileged skills.”23 According to twentieth- and twenty-first-century versions of this view, comedy signifies a special domain of knowledge in its own right, one that certain classes of people invoke to keep the threat of obsolescence at bay. Some writers and thinkers examined in this book even take comedy as a philosophical marker of the limits between human and machine intelligence. We will have occasion to interrogate these ideas in subsequent chapters, but for now, I want only to note how anxieties about human obsolescence are sometimes only a proxy for an imperiled sense of class status. It is legitimate to worry about the loss of creativity or the denigration of human judgment that comes with the algorithmization of, well, everything.24 But I will also show how that worry can perform a sleight of hand in which the culturally privileged values of knowledge workers come to stand in for humanistic value itself. From this latter vantage point, obsolescence is only another name for the fear of falling.

Some readers may feel they’ve seen this movie before. After all, as Kathleen Fitzpatrick has noted, similar dynamics regarding class status structured debates about the obsolescence of literature and the “death of print” after the Second World War. Many postwar writers depicted the rise of television and other electronic media as leading to literature’s obsolescence and, by extension, the debasement of modern society. Fitzpatrick shows that this worry was often driven by forms of cultural elitism. Those who lamented literature’s loss of cultural centrality typically conflated that loss with narratives of social decline: “technologies of mechanization have produced concerns about dehumanization; technologies of image production have been greeted with concerns about illusion and ideology; and technologies of interconnection have confronted concerns about the loss of the individual.”25 Such anxieties allowed writers to portray literature as an alternative resource for humanizing the self, clarifying its relation to the social order, and restoring the dignity of the individual. This use of obsolescence, as Fitzpatrick puts it, makes the anxiety “less a material state than a political project.”26

Comedy, too, has often served as a symbolic bulwark against the computerization of work. This particular use of comedy is bound up with normative conceptions of human sociality (typically symbolized, as I’ve already suggested, through innocent children or romantic coupling). This kind of comic response to the experience of becoming computational is entangled with the more general vicissitudes of professional identity amid the cyclical disruptions within postwar institutions for white-collar labor. Anxieties about advanced computing systems often become conflated in postwar culture with the unsettling of social norms about authority and expertise.

For example, in her poem “A Sigh for Cybernetics” (1961), Felicia Lamport satirizes midcentury concerns about the development of powerful “electronic brains.” Lamport cites a warning from the influential mathematician Norbert Wiener, who claims that “computing machines [are] now working faster than their inventors” and “may go out of control and cause widespread destruction.” Lamport responds to this news item with the following verse:

These mechanized giants designed for compliance
Exhibit their open defiance of science
By daily committing such gross misdemeanors
That scientists fear they’ll make mincemeat of Wieners.27

The use of Wiener’s surname sets up a pun on “mincemeat” while also playfully calling attention to the gendered character of the discourse about obsolescence. Despite being “designed for compliance,” computers threaten to make obsolete the scientists whom the poem genders through the metonym “Wieners.” The computers refuse to submit to men who insist on unflinching obedience. The computer again signifies antisocial disruption, although Lamport uses these connotations to deliver lighthearted criticism of the patriarchal norms in science and technology.

The roles assigned to computers in postwar culture often generate forms of comedy that are inextricably linked to anxieties about the coming obsolescence of knowledge work. Obsolescence thus serves as an important proxy for the deeply conflicted attitudes toward computing technology in postwar American culture. Across its cultural history, the computer has signified negative forms of sociality but also the possibility of connecting to others and accommodating ourselves to the changing character of work. Within this multifaceted cultural position, the computer seems to deserve ridicule but also solemn admiration; it shores up normativity but also queers the normal; it threatens disruption but promises ease and efficiency; it supports authoritarianism while undermining professional authority. These clashing views of the computer can yield the kind of uncanny horror we see in stories like Slesar’s “Examination Day.” But the trope of the odd couple, together despite their conflict, can also be very funny—a dynamic often used to figure humans and computers in popular films, such as the robot TARS and Joseph Cooper (Matthew McConaughey) in Christopher Nolan’s Interstellar (2014), Phil (Adam Devine) and his smartphone AI in Jon Lucas and Scott Moore’s Jexi (2019), and the eponymous human and robot in Jim Archer’s Brian and Charles (2022). These films find pleasure in a social world in which machines behave like jealous lovers or petulant partners.

Just as the social expectations placed on computing are often conflicted, so, too, are the forms of comedy shaping the experience of the technology. What I am calling the comedy of computation is not a unified or univocal category of experience. Yet, at the same time, I am arguing that the spectrum of comedy’s norms and conventions gives a generic structure to the phenomena examined in the following pages. I argue that this genericity provides forms for making a computationally mediated social world seem more habitable, even as it also provides tools for criticizing and objecting to that world. This is part of comedy’s power: it can be “transideological,” to borrow Linda Hutcheon’s description of irony.28 Laughter can be a sign of racist superiority; comedies can habituate us to the status quo or affirm normative arrangements of the social; comic plot structures can portray those arrangements as desirable and logical outcomes of human striving. Yet comedy can also challenge, deflate, and ridicule authority. It can reorient us within familiar experience and prompt us to question what we take for granted. Comedy contains multitudes, and most of them hate one another. It is a genre of conflicted experiences, often even when those conflicts resolve in the much-maligned “happy endings” of romantic comedies.

Such versatility justifies another major claim in this book. I describe the comedy of computation as a genre of experience, which is in part a way of saying that comedy has served as a phenomenological model for the experience of becoming computational. I recognize this claim requires some unpacking. First, let me say more about the phrase becoming computational, which often serves as a shorthand in science and technology studies for “the assimilation and coupling of different social forces with computers.”29 This phrase connotes the muddling of abstraction and ordinariness that lies at the heart of my argument. Becoming computational refers to major structural changes that seem to make personal agency irrelevant, but it also captures how some of our most intimate experiences of everyday life have been changed by computing technology.

Consider, for instance, how the ordinary experience of shopping has been transformed by computing. Computational technology at checkout counters gathers data from credit card use.30 Once a clerk or consumer uses the card, the machines and cloud-based software programs execute tasks that no longer involve human agents. The checkout machine relays the location, time of day, items purchased, and additional data to other machines that aggregate the information and then create a “spatial history of consumption” that other software programs utilize.31 These programs may monitor for fraud, evaluate creditworthiness, track consumer behavior, or feed the aggregate data into marketing profiles.32 As ordinary payment methods have become computational, the mundane routines of consumerism have become assimilated within vast systems of data analysis.33 This instance of becoming computational seems fundamentally impersonal, despite the fact that it begins with an individual’s mundane activity.
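The impersonality of this pipeline is easier to see in code. The following Python sketch is purely illustrative: the field names, the tokenized card identifier, and the downstream consumers are all hypothetical, but they show how individual checkout events can be aggregated, without any human agent, into the kind of per-card “spatial history of consumption” that fraud-scoring and marketing systems might then use.

```python
from collections import defaultdict
from dataclasses import dataclass

# All names and fields below are hypothetical, for illustration only.
@dataclass
class CheckoutEvent:
    card_token: str   # tokenized card identifier relayed by the register
    location: str     # where the purchase occurred
    timestamp: str    # when it occurred
    items: list       # what was purchased

def build_spatial_history(events):
    """Aggregate checkout events into a per-card 'spatial history of
    consumption': each card maps to the sequence of purchases it produced."""
    history = defaultdict(list)
    for event in events:
        history[event.card_token].append(
            (event.location, event.timestamp, event.items)
        )
    return history

events = [
    CheckoutEvent("tok-481", "Cambridge, MA", "2024-05-01T09:14", ["coffee"]),
    CheckoutEvent("tok-481", "Boston, MA", "2024-05-01T18:02", ["groceries"]),
]
# Downstream programs (fraud scoring, credit evaluation, marketing
# profiles) consume this aggregated record, not the shopper's intentions.
print(build_spatial_history(events)["tok-481"])
```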

Another version includes how American education has increasingly become computational, not only in its pedagogical methods (a device in every child’s hands) but also in its constitutive imagery.34 The paradigmatic expression of this change is the idea that education ought to inculcate “computational thinking,” an idea that originated in the 1980s but underwent a renaissance in the first decade of the twenty-first century after the computer scientist Jeannette Wing wrote an influential essay arguing for the importance of computational thinking to secondary education.35 Wing describes computational thinking as “a fundamental skill for everyone, not just for computer scientists. To reading, writing, and arithmetic, we should add computational thinking to every child’s analytical ability.” Wing offers several “everyday examples” of this newly urgent set of skills: “When your daughter goes to school in the morning, she puts in her backpack the things she needs for the day; that’s prefetching and caching. When your son loses his mittens, you suggest he retrace his steps; that’s back-tracking. At what point do you stop renting skis and buy yourself a pair?; that’s online algorithms. Which line do you stand in at the supermarket?; that’s performance modeling for multi-server systems.”36 The search for mittens is not like back-tracking but is back-tracking; deciding whether to buy skis is not similar to online algorithms but is online algorithms. The lack of simile implies that computation is not merely a tool for disciplinary science but captures the fundamental character of everyday life.
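One of Wing’s analogies can be made concrete. Her ski-rental question is the textbook case of an online algorithm, a procedure that must make decisions without knowing the future. The sketch below is my own illustration rather than anything from Wing’s essay; it implements the standard break-even strategy: rent until the accumulated fees would match the purchase price, then buy. Whatever the unknown season length turns out to be, total spending never exceeds roughly twice what a clairvoyant skier would have paid.

```python
def ski_rental(season_days, rent=50, buy=500):
    """Break-even strategy for the classic ski-rental problem: rent until
    cumulative rental fees would reach the purchase price, then buy."""
    spent = 0
    for _day in range(season_days):
        if spent + rent < buy:
            spent += rent  # renting is still cheaper than committing
        else:
            spent += buy   # commit: buy the skis, no further cost
            break
    return spent

# A short season costs exactly what pure renting would; a long season
# costs at most rent * 9 + buy = 950 here, under twice the optimal 500.
for season in (3, 9, 10, 40):
    print(season, "days ->", ski_rental(season))
```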

It is possible that Wing’s point is not to redescribe human social behavior as computational but only to reorient how we think about computing and its relation to familiar experience. If she is being metaphorical, her metaphors invite us to see the ordinary in different ways by extending computation beyond the hardware and software of present-day technology. But as she says in a different essay, computational thinking “does not require a machine.”37 Computational thinking anticipates a future in which the skills and technologies of the present will become obsolete, and students will be prepared for it by adopting habits of mind suited to the frontiers of computing. Why place so much emphasis on this or that coding language when, in twenty years, that language, many related technical skills, and their corresponding machinery may no longer be used in computer science? Educators, so the thinking goes, should frame their work as the inculcation of certain intellectual dispositions. Computation describes the character of these dispositions.

While Wing focuses on educational practices, her “everyday examples” illustrate how many of the basic rubrics we use for understanding social experience have become computational. The computer has become more than a tool for everyday use; it has become an image for the ordinary itself. Computational thinking imagines the transformation of social life by learning how to reason like, but also process information through, computing machines. In this way of thinking, computing is not merely a meal ticket for undergraduates in a digital economy; it is a repository of metaphors for imagining the social.

By positioning the algorithms running in the background of grocery purchases alongside Wing’s “everyday examples” of computational thinking, I am trying to establish how the process of becoming computational includes seemingly incommensurate scales of experience: the abstract and the ordinary, the impersonal and the everyday, the inhuman and the intimate, the convenience of mediated transactions and the complexity of data processing. The notion that these disparate scales have been “coupled” with computers further illustrates why comedy has often served as a phenomenological model for these sweeping transformations.38 It’s as though when we come to describe how computing has affected our social lives, we cannot “resist the seduction of an analogy,” as Sigmund Freud once put it.39 The image of two people coming together provides a form for the joining-together of computers and the social world. The idea of coupling contains the possibility of intimacy but also frames failure as a kind of alienation, a fall into disaffection that often arises with the experience of becoming computational. Coupling may not be the only metaphor for figuring a union with technology, but I hope to show in the following pages that it has been a historically pervasive and culturally potent one.40

I have been claiming that comedy provides a generic form for the experience of becoming computational, but I suspect that yoking together genre and experience may strike some readers as no less incongruous than the pairing of comedy and computation. This may be because the term genre often signifies an abstract taxonomy—a bloodless categorization of culture—while experience may bring to mind the individual, singular, or irreducibly particular.41 Joining the two might strike some as implausible or contradictory: Can the singularity of experience be categorized in abstractions without distorting its distinctive qualities? If experience were somehow generic, would such experience effectively be artificial—a kind of compromised way of being in the world?

I am keeping these possibilities available by describing the cultural phenomena in this book as a genre of experience, but the phrase is also a distillation of my view that genres can operate as phenomenological models: they provide structures for thought and feeling; they set up interpretive expectations not only for narratives and media but also for being in the world.42 Genres are by no means the only phenomenological models we use to make sense of ordinary life, but I hope to show in this book that comedy has been an especially important (if not also critically underappreciated) model for the experience of becoming computational.

This approach contributes to the reworking of the concept of genre that many other scholars have undertaken.43 At one time, critics would ascribe a “nucleus” of content to classic genres like tragedy and comedy.44 Jacques Derrida characterized this view as the “law of genre,” but Alastair Fowler and many others long ago dispelled the illusion that genres are fixed or essential categories.45 Most current theories view genre in ways that are historically variable and contextually sensitive. According to this perspective, we do not need to tether comedy to some archetypal structure (e.g., Northrop Frye’s “mythos of spring”) or explain every chuckle as the manifestation of the unconscious (e.g., Freud’s so-called relief theory of humor).46 Genres do not conform to universal explanations. Alexander Leggatt expresses this view with a simple dictum: “There is no such thing as comedy, an abstract historical form; there are only comedies.”47

Such a view expresses what I describe as a nominalist theory of genre. The basic tenet of this view is that genres are social and historical categories of description that lack a core identity. Appealing to a canon of referents cannot yield a stable or consistent set of criteria for determining membership in the genre. There is no timeless reality called Comedy, no essence to which we can appeal when determining what counts and what doesn’t. I share this view, and because of it, I am not interested in policing comedy’s boundaries or positing some central “nucleus” shared by all instances of the comedy of computation. I often make comparative claims across media. I don’t fret about, say, moving from an analysis of romantic comedy to a related point about the construction of humor. To my mind, the nominalist theory of genre allows for this kind of comparative or transverse analysis, because otherwise unrelated speech acts and media phenomena can nonetheless share usages of the comic.48 Genres include various and even conflicting cultural forms.

There is, however, a school of thought that goes a step further than the approach I have been describing. This school—let’s call it strong nominalism—would claim that, because only particular instances of a genre exist, we can only make intelligible claims about a small number of individual artifacts within a narrowly defined historical period. Strong nominalism is skeptical of diachronic comparison and categorically opposed to aesthetic claims not particular to a medium. Fowler expresses this perspective in the following way: “Statements about a genre are statements about the genre at a particular stage—about Zn′ not Z. Concerning a genre of unspecified date, or within very wide chronological limits, correspondingly little can be said.”49 Noël Carroll criticizes a closely related sentiment in film studies, which “supposes that each medium has a unique nature and that with that nature goes an accompanying series of laws.”50 According to this school of thought, the integrity and cogency of any analysis of genre depends on limiting that genre to discrete coordinates of medium, place, and period.51

According to strong nominalists, we would only be able to make cogent claims about, say, a small number of comic films from a certain decade and within a certain studio system. This strong nominalist view is expressed in Fowler’s claim that we can only make statements “about Zn′ not Z.” The prime symbol in this formulation implies that an instance of a genre should be understood like a differentiated function that cannot be coidentical with another instance.52 Since we can’t say anything about “Z,” critics ought to confine themselves to specific coordinates in their analysis.

I agree that genres are not timeless universals, but this fact does not require us to structure genre criticism around overly narrow particularities. Were we to do so, we would soon fall into an infinite regress, needing ever more granular and particularistic divisions between different iterations. My view is that we can instead approach genres as living models for experience and examine homologies across instances of these models. The Comedy of Computation is an extended exemplification of this approach. For instance, the titles of my chapters frame particular homologies through the comic idiom “Have you heard the one about . . . ?”53 The idiom is always an instance of a structure, a repeatable form that varies in content while also allowing for modes of collective identification.54 The television show Friends (1994–2004) draws on the same comic idiom as my chapter titles in naming its episodes: “The One with the Sonogram at the End,” “The One with the Thumb,” “The One with George Stephanopoulos.” In Friends, “The One with . . .” figures each episode as another instance in an ongoing conversation or comic exchange among the cast of characters. The titles invite the audience into these social attachments, as though the serialization of episodes were a formal way of letting the audience in on an inside joke. Only good friends would laugh at “The One with the Thumb,” much less know how to parse the title’s meaning. This use of the idiom creates a sense of intimacy from genericity, as if the reproducibility of generic form were also a token of idiosyncrasy.

The use of the idiom in Friends creates a version of what Lauren Berlant calls an “intimate public,” a paradoxical form of collective identity in which strangers share “emotional knowledge” about “a broadly common historical experience.”55 The idiom provides a generic form for the experience of the series, as though each episode were formulaic, perhaps even mechanical in its repetition of a structure. But the idiom also signifies affinity, the possibility of sharing “emotional knowledge” with strangers. The idiom thus captures a paradox about comedy in the age of technological reproduction: it traffics simultaneously in abstraction and affinity, the impersonal and the intimate, the generic and the idiosyncratic.

Being an instance of a comic idiom can deny or imperil identification—a central concern in my first two chapters. The first examines comic portrayals of robots, automatons, and other automated technologies from the eighteenth century to the present. I show that racialized tropes about labor and social identity are constitutive parts of these comic figures. The second chapter examines the generic as an aesthetic category central to the experience of becoming computational. I use the phrase being generic to refer to (a) social anxieties about the loss of distinctiveness associated with computational media but also (b) the promise of sharing terms of legibility with others—that is, of being classified as or recognizing discourse in terms of a genre. I argue that these competing registers of the generic operate as aesthetic touchstones in computational forms of sociality.

The second chapter’s claims about genericity introduce a problem I take up from a different perspective in the third chapter—namely, how the mediating work of computers and the commodification of experience by many tech corporations interact with what intellectual historians call an ethics of authenticity. Authenticity has a notoriously vague meaning. It can refer to empirical questions about whether an artifact is a forgery. It can also name a cultural and philosophical ideal—something akin to what Polonius advises in Hamlet (c. 1601): “To thine own self be true.” The empirical question and the moral ideal are both operative in the public life of computing technology. A muddled version of both surfaces in debates about the nature of consciousness in science fiction and AI research. Do AI systems have an inner life? Is their consciousness genuine?

The third chapter explores the origins of authenticity as a moral ideal and philosophical problem in the cultural history of computing. I show how the counterculture and youth movements were two important sources for the moral importance attributed to authenticity within the early cultures of computing. But I also look back to the much older development of an ethics of authenticity—a philosophical and cultural discourse that developed across several centuries—and its ongoing weight in cultural attempts to grapple with the disruptions of the postwar economy and modernity more generally. I show how these different sources of authenticity animate many satirical portrayals of the tech industry.

These first three chapters revolve around the racial, economic, and philosophical contradictions that arise as people, corporations, and social forces couple with computational technology. These contradictions also surface in the fourth chapter’s analysis of a staple figure in comedy: the couple. This figure has served as a surprising but consistent image for the public experience of computing. I have already mentioned William Marchant’s The Desk Set, which I analyze in the fourth chapter alongside its 1957 film adaptation, starring Katharine Hepburn and Spencer Tracy. In both versions, a computer called “Emmy” serves in a role analogous to the “obstructing characters” that Northrop Frye examines in his contemporaneous Anatomy of Criticism (1957). The computer begins as an obstruction to happiness and a harbinger of the obsolescence of knowledge workers, but Emmy eventually morphs into a partner in the couple’s union.

The computer’s role in the couple varies across literary and filmic culture, but there have also been some common patterns since the Second World War. To understand these patterns, I examine several popular films, including The Honeymoon Machine (1961) and Weird Science (1985), showing how the trope of coupling with computers revises long-standing ideas about comedy as a model of the good life. As the computer becomes part of the couple—or even as people couple with computers, as in films like Control Alt Delete (2008) and Her (2013)—the resulting union presents an image of a “pragmatically free society.”56 The fourth chapter considers the notions of freedom and flourishing contained within such images.

The final chapter explores how absurdist comparisons between humans and computers function in the cultures of science and technology. One form of this absurdism diminishes human thought and labor through hypothetical comparisons with computing machines, such as: It takes our computer X number of seconds to complete a certain task. If Y number of humans were to complete the same task, it would take this assembled group Z number of days/months/years. These comparisons appear often in press releases from scientific institutions and the tech industry. I argue that this rhetorical framing relies on what Jerry Palmer calls the “logic of the absurd,” and I show how novels like Olof Johannesson’s The Tale of the Big Computer (1968) and Ishmael Reed’s Yellow Back Radio Broke-Down (1969) narrativize this scalar absurdism and imagine its consequences for political organization. I describe this political imaginary as computopia, a realm in which computers accomplish an efficient and rational ordering of social life. Works like Johannesson’s and Reed’s novels imagine computopia as a means for considering whether democracy has become obsolete in an age of computation.
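The rhetorical template behind these comparisons is easy to reproduce. The short calculation below uses entirely invented figures (no actual press release is quoted) to show how the arithmetic manufactures the absurd scale on which this rhetoric trades.

```python
# Entirely hypothetical figures, chosen only to reproduce the template.
machine_ops_per_sec = 1e15   # a petascale computer
human_ops_per_sec = 1.0      # one pencil-and-paper step per second
task_ops = 3.6e18            # total operations the task requires

machine_hours = task_ops / machine_ops_per_sec / 3600
human_years = task_ops / human_ops_per_sec / (3600 * 24 * 365)

print(f"The machine finishes in {machine_hours:.0f} hour(s).")
print(f"One human would need {human_years:,.0f} years.")
print(f"Even a million humans would need {human_years / 1e6:,.0f} years.")
```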

The potential obsolescence of democracy is central to the film referenced in my subtitle: Stanley Kubrick’s Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb (1964). Kubrick’s subtitle was a playful allusion to the proliferation of self-help manuals in an era of atomic warfare. There is no central “I” in the film; rather, it presents various personae characteristic of an organizational and technological moment. The irony of Kubrick’s subtitle thus provokes a kind of existential question: what does it mean to have a self—or even to care about the “I” and its worries—when nation-states have the technological capacity for global destruction? Kubrick’s film takes up this question in subtle and darkly comic ways, but one of its central concerns is to portray the feelings of absurdity that arise for many who occupy official roles within a fully rationalized system of national defense, which paradoxically enables—and, in fact, may even encourage—the entirely irrational destruction of the world. Institutional authorities in the film create the very conditions that undermine the human flourishing they claim to protect.

Dr. Strangelove considers this absurdity through the premise of a complicated military bureaucracy that manages the nuclear arsenal of the United States. A rogue Air Force general named Jack D. Ripper (Sterling Hayden) implements a military order called Plan R, a deterrent strategy in which a lower-level general can order a retaliatory nuclear strike if the standard chain of command has been disrupted. General Ripper initiates Plan R without cause or authorization, sending a fleet of bombers carrying nuclear warheads to attack dozens of targets in the USSR. He then closes all communications channels outside of his base. The orders cannot be rescinded because the planes have switched to a coded communications channel. Despite coordinated attempts by the US and USSR to shoot down the planes and break General Ripper’s code, one plane makes it through Soviet defenses and sets off the USSR’s own global deterrent system, the Doomsday Machine, which is designed to wipe out biological life on the surface of the planet.

Computers serve a key role in this system of mutually assured destruction. As Dr. Strangelove (Peter Sellers) explains to the US president, Merkin Muffley (also played by Sellers), the Doomsday Machine is connected to “a gigantic complex of computers” that activates the weapon under “a specific and clearly defined set of circumstances.” This “complex” places control of the weapon outside human operators, ostensibly to avoid enemy interference or moral reservations from USSR military personnel.57 In other words, even as Plan R relies on bureaucratic rationality to deter Soviet nuclear attacks, the Soviets’ own system of deterrence ensures the same outcome through computing. The safeguards of impersonal decision-making and computerized automation create the very war each was designed to avoid.

Kubrick’s film depicts political systems unsuited to the scale of devastation that can be wrought through their technological capabilities. Plows, writing, and gunpowder are all technologies, but not until atomic warfare was it conceivable for technology to devastate the planet.58 Many argue that we have now created yet another technology capable of global destruction: artificial intelligence (AI).59 In an influential thought experiment, the philosopher Nick Bostrom imagines an AI that tries to achieve an otherwise laudable goal through means that pose an existential threat to humanity. In one version of this thought experiment, an AI has been tasked to solve a particularly difficult mathematical problem. It appropriates the Earth’s resources to build supercomputers to facilitate its calculations, thus devastating the global economy. Or, in a more comic version, an AI designed to manufacture paper clips realizes “it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips.”60 The AI’s programming wants paper clips. Can we guarantee that such technological wants are compatible with human flourishing? Are our institutions capable of managing the existential and societal threats made possible through computing technology?

The following pages consider a spectrum of answers to this line of thought, but there is not a lot of ambiguity in Kubrick’s film. After the Doomsday Machine has been activated, the president and Dr. Strangelove discuss a plan for a few hundred thousand Americans to survive in mine shafts. President Muffley says, “Well, I would hate to have to decide who stays up and who goes down.” Strangelove answers, “Well that would not be necessary, Mr. President. It could easily be accomplished with a computer. And a computer could be set and programmed to accept factors from youth, health, sexual fertility, intelligence, and a cross section of necessary skills.” Strangelove’s arm then rises in a Sieg Heil salute, a gesture that the former Nazi scientist has been suppressing for most of the film.

It is of course ironic that he fails to suppress this salute after proposing a rational plan for solving the unintended consequences of another rational plan. The implication is that Strangelove’s unflinching faith in technocratic objectivity is the mirror image of the fascist’s absolute faith in a single political personality. The computer, in this view, becomes a machine for facilitating what Hannah Arendt famously calls the banality of evil, the ordinary pencil-pushing bureaucracy that facilitates atrocity in the modern era.61 A technology ostensibly promising a better future turns out to betray the very possibility of a future.

I make no prognostications in this book. I am not predicting we will all be exterminated for the sake of paper clips or sorted into mine shafts after a computationally orchestrated apocalypse. Instead, I try to understand the kinds of sociality imagined in these and many other comic images. I also hope to show why we ought to take seriously cultural work that depicts the competing wants and conflicting social imperatives involved in the experience of becoming computational. The coupling of computers with social life not only produces novel forms of attachment—new ways of imagining life together—but also ambivalence, disaffection, uncertainty, and oddly pleasurable forms of being unsocial. These incongruities are not design flaws in the technology, errors that better engineering can address. The genre of experience examined in this book instead reveals that the disjunctions and contradictions that appear throughout the computer’s cultural history are in fact constitutive features of becoming computational. As the classic comic verb accommodate implies, this genre attempts to make habitable the conflicts of this lived experience. Of course, there may soon be breakthroughs in areas like quantum computing that shatter our assumptions about what counts as computational. I will show how the social experience of becoming computational orbits this always-unfinished speculative possibility: that more and better is still to come, that ever-closer intimacies will soon be made available, and that the forms of experience from the past and present will become increasingly and inextricably obsolete.

Notes

1. For a discussion of Apple’s role in the rise of the personal computer, see Laine Nooney, The Apple II Age: How the Computer Became Personal (Chicago: University of Chicago Press, 2023).

2. There is a long history of associating comedy and the ordinary. Aristotle describes comedy in terms of the lowly or common, while Mikhail Bakhtin shows how ordinary people have employed the comic to deflate the pomp and pretense of the ruling classes. To be sure, these theories are not always coherent or consistent. Aristotle claims comedies are about “base people” (Poetics, 1448b24–26), yet one of his few extant examples involves Orestes becoming friends with Aegisthus (1453a35–39). Neither character is an obvious example of baseness. Regarding the deflation of authority, see Mikhail Bakhtin, Rabelais and His World, trans. Hélène Iswolsky (Bloomington: Indiana University Press, 1984), 1–58. In contrast, other critics argue that comedies are a mark of high culture. The novelist George Meredith says that a “society of cultivated men and women is required” for a “great comic poet.” George Meredith, “An Essay on Comedy,” in Comedy, ed. Wylie Sypher (Baltimore: Johns Hopkins University Press, 1956), 3.

3. For an overview of historical associations between comedy and sociality, see Jan Walsh Hokenson, The Idea of Comedy: History, Theory, Critique (Madison, NJ: Fairleigh Dickinson University Press, 2006), 42–63. It is common to read dramatic comedy as a social form, but since Henri Bergson’s work (which I examine in detail in the first chapter), humor has similarly come to be associated with sociality. As Mary Douglas puts it, the “joke form” cannot be understood “in the utterance alone” but only “in the total social situation.” Mary Douglas, Implicit Meanings: Essays in Anthropology (London: Routledge & Kegan Paul, 1975), 93.

4. “One Problem the Computers Can’t Solve,” Forbes, Jan. 1, 1958, 83.

5. Susanne K. Langer, Feeling and Form: A Theory of Art (New York: Scribner’s, 1953), 346; Northrop Frye, Anatomy of Criticism: Four Essays (Princeton, NJ: Princeton University Press, 1957), 163.

6. Martin Campbell-Kelly and William Aspray, Computer: A History of the Information Machine (New York: Basic Books, 1996), 106, 111–13.

7. Historians typically associate the “computer revolution” with the mass production of microprocessors that dramatically lowered costs for purchasing the technology during the mid-1980s. For a summary of this perspective, see Daniel E. Sichel, The Computer Revolution: An Economic Perspective (Washington, DC: Brookings Institution Press, 2001). Others have questioned whether “revolution” is the proper framing for these major changes, arguing that modern civilizations have developed a variety of techniques for managing vast amounts of information prior to the computer. For this perspective, see Douglas S. Robertson, “The Information Revolution,” Communication Research 17, no. 2 (1990): 235–54.

8. Henry Slesar, “Examination Day,” Playboy, Feb. 1958, reprinted in Inside Information: Computers in Fiction, ed. Abbe Mowshowitz (Reading, MA: Addison-Wesley, 1977), 96–97.

9. Slesar, 97.

10. Lee Edelman, No Future: Queer Theory and the Death Drive (Durham, NC: Duke University Press, 2004), 12.

11. Edelman, 12.

12. Edelman, 26.

13. Edelman, 70, 82.

14. HAL has been variously described as “oddly asexual,” “equivocally gendered,” and even “queer.” See Susan White, “Kubrick’s Obscene Shadows,” in Stanley Kubrick’s “2001: A Space Odyssey”: New Essays, ed. Robert Kolker (New York: Oxford University Press, 2006), 165; Stephanie Schwam, ed., The Making of 2001: A Space Odyssey (New York: Modern Library, 2000), 172; and Michel Ciment, Kubrick (New York: Holt, Rinehart, and Winston, 1983), 134.

15. Slavoj Žižek, “From Virtual Reality to the Virtualization of Reality,” in Electronic Culture: Technology and Visual Representation, ed. Tim Druckrey (New York: Aperture, 1996), 294.

16. John Hersey, The Child Buyer (1960; New York: Vintage, 1989), 250. Hereafter cited parenthetically.

17. Stanley Ballinger, “Significant Questions, Inadequate Answers: A Review-Essay on Hersey’s The Child Buyer,” Phi Delta Kappan 42, no. 3 (Dec. 1960): 129.

18. Greg Benford, “The Scarred Man,” Venture Science Fiction Magazine, May 1970, 125.

19. This angst about automation predates the computer. For an overview of this anxiety, which often goes by the name “Luddism,” see Matt Tierney, Dismantlings: Words against Machines in the American Long Seventies (Ithaca, NY: Cornell University Press, 2019), 29–47.

20. Benford, “The Scarred Man,” 125.

21. Barbara Ehrenreich, Fear of Falling: The Inner Life of the Middle Class (New York: HarperPerennial, 1990), 15, 38.

22. As far as I know, the first appearance of the term “knowledge work” is Peter Drucker, The Landmarks of Tomorrow (New York: Harper, 1959), 69. Drucker does not offer a definition, and it is easy to imagine how class status generates the term. After all, the common notion of knowledge workers as people who “think for a living” implies that manual laborers don’t think. For this reason, I use phrases like “professional-managerial class” and “knowledge work” with trepidation, even though I cannot at present find a better alternative vocabulary for the class and labor dynamics at play.

23. Andrew Stott, Comedy (London: Routledge, 2005), 136–37.

24. For a history of universal computation projects, see Jeffrey M. Binder, Language and the Rise of the Algorithm (Chicago: University of Chicago Press, 2022).

25. Kathleen Fitzpatrick, The Anxiety of Obsolescence: The American Novel in the Age of Television (Nashville, TN: Vanderbilt University Press, 2006), 27.

26. Kathleen Fitzpatrick, “Obsolescence,” PMLA 123, no. 3 (May 2008): 718.

27. Felicia Lamport, “A Sigh for Cybernetics,” Harper’s Magazine, Jan. 1961, 57.

28. Linda Hutcheon, Irony’s Edge: The Theory and Politics of Irony (London: Routledge, 1994), 9–34.

29. Adrian Mackenzie, “Undecidability: The History and Time of the Universal Turing Machine,” Configurations 4, no. 3 (Fall 1996): 363. For other uses of the phrase, see N. Katherine Hayles, Postprint: Books and Becoming Computational (New York: Columbia University Press, 2021); Jennifer Gabrys, “Sensors Experiencing Environments, Environments Becoming Computational,” Dialogues in Human Geography 9, no. 1 (2019): 121–24; Adrian Mackenzie, “A Troubled Materiality: Masculinism and Computation,” Discourse 18, no. 3 (Spring 1996): 89–111. N. Katherine Hayles examines other dimensions of becoming computational in How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (Chicago: University of Chicago Press, 1999), a foundational study of how digital information technology has shaped public culture. In Postprint, Hayles takes the digital transformation of book culture as “one aspect of a much larger picture: the becoming computational of humans and, indeed, of the entire planet” (15).

30. Rob Kitchin and Martin Dodge, Code/Space: Software and Everyday Life (Cambridge, MA: MIT Press, 2011), 58.

31. Kitchin and Dodge, 60.

32. Kitchin and Dodge, 60.

33. Jacqueline Wernimont makes a closely related argument about “quantum media” like pedometers, mortality tables, and census records, all of which predate the computer. For Wernimont, these quantum media “prioritize profit, oversight, and control,” figuring some privileged sections of the population as “persons valuable to the state, or after the twentieth century, as valuable to corporations and ‘human knowledge.’ Throughout the same time, nonwhite people have been refigured by quantum media as property, depersonalized data sets to be used as ‘resources’ or liabilities rather than as people.” Jacqueline Wernimont, Numbered Lives: Life and Death in Quantum Media (Cambridge, MA: MIT Press, 2018), 161.

One practical consequence of this transformation is that digital-purchase methods participate in a system of credit that affects everything from an individual’s ability to buy a car to a bank’s willingness to finance a business in a community. For more on the ambiguities of creditworthiness, see Annie McClanahan, “Bad Credit: The Character of Credit Scoring,” Representations 126 (Spring 2014): 31–57. For other ways in which algorithmic aggregations affect the social, see Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: New York University Press, 2018), 64–109; R. Joshua Scannell, “This Is Not Minority Report: Predictive Policing and Population Racism,” in Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life, ed. Ruha Benjamin (Durham, NC: Duke University Press, 2019), 107–29. Yet many scholars and artists also used quantitative technologies to critique structures of racial oppression and offer alternative accounts of social life, as Autumn Womack argues in The Matter of Black Living: The Aesthetic Experiment of Racial Data, 1880–1930 (Chicago: University of Chicago Press, 2022).

34. The policy of “One Laptop per Child” embodies this sentiment. For a history and analysis of this educational policy, see Morgan G. Ames, The Charisma Machine: The Life, Death, and Legacy of One Laptop per Child (Cambridge, MA: MIT Press, 2019).

35. The phrase first appears in Seymour Papert, Mindstorms: Children, Computers, and Powerful Ideas (New York: Basic Books, 1980). A mathematician with advanced training in both philosophy and psychology, Papert laments how many educational uses of computers have failed to “integrate computational thinking into everyday life” (182).

36. Jeannette M. Wing, “Computational Thinking,” Communications of the ACM 49, no. 3 (March 2006): 33–34.

37. Jeannette M. Wing, “Computational Thinking and Thinking about Computing,” Philosophical Transactions of the Royal Society A 366 (2008): 3719.

38. Mackenzie, “Undecidability,” 363.

39. Sigmund Freud, “Constructions in Analysis,” in The Standard Edition of the Complete Psychological Works of Sigmund Freud 23, ed. James Strachey (London: Hogarth, 1964), 268.

40. Other scholars have also noted how the experience of becoming computational affects the intimate and emotional registers of everyday life. Kris Cohen argues that computational media position users between publics of affiliation (intimacy) and populations constructed through massive data processing (abstraction). The user feels part of both relationalities at once, as though they are simultaneously connected and alone. Kris Cohen, Never Alone, Except for Now (Durham, NC: Duke University Press, 2017), 29–40. Rebecca B. Clark shows how the cultural functions assigned to data produce reactions like disgust. These reactions suggest that our relationship to massive data systems is simultaneously abstract and saturated in affective intensity. Rebecca B. Clark, American Graphic: Disgust and Data in Contemporary Literature (Stanford, CA: Stanford University Press, 2023). Wendy Hui Kyong Chun argues that a feeling of the “habitual” characterizes the embeddedness of new media in social life. Users often relate to technology through habit, picking up a smartphone immediately after waking in the morning or using geolocation services to map a relatively familiar route. Wendy Hui Kyong Chun, Updating to Remain the Same: Habitual New Media (Cambridge, MA: MIT Press, 2017), 7. Zara Dinnen similarly uses the phrase “the digital banal” to describe how computational technologies lead to an effacement of “the affective stakes of life determined by algorithms and life at the edge of the earth’s resources.” Zara Dinnen, The Digital Banal: New Media and American Literature and Culture (New York: Columbia University Press, 2021), 2. The design and ubiquity of these technologies lead us to forget their novelty.

41. Part of what I am trying to show in this book is that such a view of experience derives from a long-standing social philosophy that some intellectual historians call an ethics of authenticity. It would be confusing to pursue this point at this juncture of my introduction. While we’re here among the endnotes, though, I would like to make a related point by citing Yves Citton’s argument that the confused social relations created by digital technology dispel “our romantic addiction to a heroic, unrealistic, and self-illusory model of personal agency.” Yves Citton, “Fictional Attachments and Literary Weavings in the Anthropocene,” New Literary History 47, nos. 2 & 3 (Spring & Summer 2016): 311. In other words, the process of becoming computational undercuts a philosophical view of the self as the source of action—a fantasy about which I feel some ambivalence despite Citton’s forceful critique. This confusion or compromising of personal agency will return as a central problem for an ethics of authenticity in the third chapter.

42. My approach is indebted to Raymond Williams’s much-discussed phrase “structures of feeling” in Marxism and Literature (Oxford: Oxford University Press, 1977), 128–35; and Lauren Berlant’s discussion of genre in The Female Complaint: The Unfinished Business of Sentimentality in American Culture (Durham, NC: Duke University Press, 2008). I engage with Berlant’s ideas at greater length below, but they, too, investigate genre as a term that elucidates the character of contemporary experience.

43. I cite many of these scholars in subsequent pages, but I would also like to acknowledge a few other works that have influenced my thinking: David Fishelov, Metaphors of Genre: The Role of Analogies in Genre Theory (University Park: Pennsylvania State University Press, 1993); Wai Chee Dimock, “Genre as World System: Epic and Novel on Four Continents,” Narrative 14, no. 1 (Jan. 2006): 85–101; Noël Carroll, The Philosophy of Motion Pictures (Malden, MA: Blackwell, 2008); John McGowan, Pragmatist Politics: Making the Case for Liberal Democracy (Minneapolis: University of Minnesota Press, 2012); and Kenneth W. Warren, “The Persistence of Genre,” Modern Language Quarterly 81, no. 4 (Dec. 2020): 567–77.

44. For a modern example of this view, see George Steiner, “‘Tragedy,’ Reconsidered,” in Rethinking Tragedy, ed. Rita Felski (Baltimore: Johns Hopkins University Press, 2008), 29–44.

45. Jacques Derrida, “The Law of Genre,” trans. Avital Ronell, Critical Inquiry 7, no. 1 (Autumn 1980): 57. For an excellent appraisal of Derrida’s view, see John Frow, “‘Reproducibles, Rubrics, and Everything You Need’: Genre Theory Today,” PMLA 122, no. 5 (Oct. 2007): 1627–28.

46. Frye, Anatomy of Criticism, 163–85. Sigmund Freud presents his theory in Jokes and Their Relation to the Unconscious (1905).

47. Alexander Leggatt, English Stage Comedy, 1490–1990: Five Centuries of a Genre (London: Routledge, 2002), 1. In classics, a similar point is often made about the generationally different plays performed in the same festival context. As Michael Silk puts it, “Aristophanes’ Old Comedy and Menander’s New are too distinct (their repertoires are too different) to be identified as ‘the same’ genre.” Michael Silk, “The Greek Dramatic Genres: Theoretical Perspectives,” in Greek Comedy and the Discourse of Genres, ed. Emmanuela Bakola, Lucia Prauscello, and Mario Telò (Cambridge: Cambridge University Press, 2013), 24.

48. I use the phrase “genre performative” to describe this view in Benjamin Mangrum, “Tragedy, Realism, Skepticism,” Genre 51, no. 3 (Dec. 2018): 209–36. Since the 1960s, many others have used Ludwig Wittgenstein’s idea of “family resemblances” to articulate closely related theories of genre. See, e.g., Hjalmar Wennerberg, “The Concept of Family Resemblance in Wittgenstein’s Later Philosophy,” Theoria 33 (1967): 107–32; Alastair Fowler, Kinds of Literature: An Introduction to the Theory of Genres and Modes (Cambridge, MA: Harvard University Press, 1982), 40–43; David Fishelov, “Genre Theory and Family Resemblance—Revisited,” Poetics 20 (1991): 123–38; Marah Gubar, “On Not Defining Children’s Literature,” PMLA 126, no. 1 (2011): 209–16; and John Frow, Genre, 2nd ed. (London: Routledge, 2015), 59.

49. Fowler, Kinds of Literature, 47.

50. Noël Carroll, Engaging the Moving Image (New Haven, CT: Yale University Press, 2003), 4.

51. Many other scholars have resisted arguments about medium specificity. See Carroll, Engaging the Moving Image, 1–9; Kamilla Elliott, “Rethinking Formal-Cultural and Textual-Contextual Divides in Adaptation Studies,” Literature/Film Quarterly 42, no. 4 (2014): 576–93; and Justus Nieland, Happiness by Design: Modernism and Media in the Eames Era (Minneapolis: University of Minnesota Press, 2020), 1–38.

52. Fowler’s formulation exemplifies several aspects of my disagreement with strong nominalist approaches to genre, but those disagreements are not central to clarifying my own point, so I will level my criticisms in this endnote. The first is that his use of the prime symbol creates more confusion than it solves. In physics, prime designates variables after an event. In mathematics, the symbol designates a variable related to or derived from some other point. In other words, there cannot be Zn′ without some establishing event or point of reference. Fowler’s metaphor presupposes the integrity of a fixed entity called Comedy out of which subsequent instances derive—the very idea he is rejecting. Perhaps the metaphor would fit with a theory of generic meaning as a kind of network, but the messiness of network theory would be at odds with Fowler’s attempts to make genre criticism more historically bounded. Indeed, if we were to follow Fowler’s recommendation and only make claims about Zn′, we would need to treat particular comedies like self-contained wholes with fixed coordinates. Such an implausible view registers my second disagreement. Strong nominalism narrows and focuses and limits until all we have are fetishized particularities. Restricting criticism to statements about “genre at a particular stage” becomes just another “fixed historical kind,” the older theory of genre that Fowler contrasts with his own approach (Fowler, Kinds of Literature, 37).

53. See, e.g., Simon Critchley, “Did You Hear the One about the Philosopher Writing a Book on Humour?” Richmond Journal of Philosophy 2 (Autumn 2002): 1–6, https://web.archive.org/web/20160329171824/http://www.richmond-philosophy.net/rjp/back_issues/rjp2_critchley.pdf.

54. My thinking here is indebted to Lauren Berlant’s view of genre as “an aesthetic structure of affective expectation, an institution or formation that absorbs all kinds of small variations or modifications while promising that the persons transacting with it will experience the pleasure of encountering what they expected, with details varying the theme” (Berlant, The Female Complaint, 4).

55. Berlant, viii.

56. Frye, Anatomy of Criticism, 169.

57. This “strike-and-response sequence” would become a common trope in postwar science fiction. See David Seed, “The Brave New World of Computing in Post-War American Science Fiction,” in American Mythologies: New Essays on Contemporary Literature, ed. William Blazek and Michael K. Glenday (Liverpool, UK: Liverpool University Press, 2005), 180.

58. For a study of the cultural and political imaginary that follows from this newfound scale of global destruction, see Rey Chow, The Age of the World Target: Self-Referentiality in War, Theory, and Comparative Work (Durham, NC: Duke University Press, 2006).

59. See, e.g., Kevin Roose, “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn,” New York Times, May 30, 2023; and Jacob Stern, “AI Is Like . . . Nuclear Weapons?” The Atlantic, March 26, 2023.

60. Kathleen Miles, “Artificial Intelligence May Doom the Human Race within a Century, Oxford Professor Says,” Huffington Post, Feb. 4, 2015, https://www.huffpost.com/entry/artificial-intelligence-oxford_n_5689858. Bostrom’s original formulation of this thought experiment may be found in Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence,” Cognitive, Emotive, and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence 2, ed. Iva Smit et al. (Ontario: Institute of Advanced Studies in Systems Research and Cybernetics, 2003), 12–17.

61. Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil (New York: Viking, 1963).
