CLOSE READING, COMPUTERS, AND CLOUD ATLAS
Reading literature with the aid of computational techniques is controversial. For some, despite the fact that almost all publishing and book dissemination in the twenty-first century depends on computational technology, digital approaches fetishize the curation of textual archives and are neoliberal in their pursuit of Silicon Valley–esque software-tool production.1 For others, digitally amplifying reading-labor power might fulfill the dreams of systematization advanced by early twentieth-century Russian formalism. For proponents this yields new, “distant” ways in which we can consider textual pattern-making.2 For detractors there remain worthwhile questions about the quantifying processes of the digital humanities: should the humanities in reality always be qualitative in their approaches?3 At the same time, though, the idea that the humanities hold a monopoly on aesthetics and its study is debatable. Mathematics, statistics, and computation certainly have a beauty and an intuition behind them, and they have also given us formulae, such as the “golden ratio,” that add to our understanding of the intersections of aesthetics, nature, and perception.4
Despite the hostility from some quarters of literary criticism to computational methods, however, English studies has long been accustomed to using quantitative evidence in its reasoning; quantitative approaches are actually nothing new in the humanities. For just one example, consider that Dartmouth College offered a course entitled “Literary Analysis by Computer” as far back as 1969.5 As Nicholas Dames has pointed out, Vernon Lee proposed a “statistical experiment”—a quantitative analysis—on literature in her 1923 The Handling of Words, itself prompted by a letter to The Times (London) from Emil Reich several years earlier.6 Quantifications, repetition, and frequency are core components within the study of aesthetics, from Virgil’s Aeneid to the present day.7 While counting words is, alone, neither enough to denote linguistic significance nor sufficient to tell us much about literary sensibility, as some critics have forcefully argued, we are far more acclimatized to contextualized quantitative evidence than we might initially admit.8 Certainly, if the use of computers to study literature contains within it a quantifying urge, it is not an urge that has been foisted on us solely by computers.
Most scholars using computational methods in literary studies implicitly conceive of their practice as akin to a telescope. “We have,” it is pronounced, “these new tools, these telescope-like things that allow us to see many more texts than was possible before, just like the telescope allowed Galileo to see many more stars.”9 The methods are claimed to permit us, at a distance, to ingest, process, and perhaps understand texts within grand perspectives.10 Literary history, we are told, can be seen unfolding over vast time periods, and we simply do not have the time in our lives to read that many novels.11 This grand perspective is a noble goal, and scholars such as Stephen Ramsay and Ted Underwood (among many others too numerous to mention) have pointed both to the problems that such methods are supposed to assist with solving and the broad-scale study of, say, genre that becomes possible under such paradigms.12 In other words, in such methods the computer becomes a tool that can “read” on our behalf. This is not “reading” as humans perform it. It is instead a mode under which we delegate repetitive labor to the machine and then expend our interpretative efforts on the resultant quantitative dataset. It is an environment in which we can “think along” with machines.13 For, as Lisa Gitelman and others have rightly told us, there is no such thing as “raw data,” and hermeneutics remain core.14 Such methods are like a telescope in another sense, though: while we can see further, we lose the resolution of close focus and must interpret the results. For some, such as Wai Chee Dimock, “the loss of the detail” in such activities “is almost always unwarranted” and can lead us only to an “overcommitment to general laws, to global postulates operating at some remove from the phenomenal world of particular texts.”15
These computational practices must be situated within a universal, but often unspoken, bounding of mortality. Indeed, the reason for their development is that death cuts short every totalizing attempt to read everything. This is usually framed in the gentler terms of there being “too much to read within a human lifespan” and has led to various articulations of “critical not-reading,” as Amy Hungerford’s feminist take on “not reading David Foster Wallace” would have it.16 For Hungerford, life is too short to read the (admittedly enormous) literary output of a man whose personal life seems saturated in misogyny.
Distant reading, then—and its related forms of cultural analytics, algorithmic criticism, various modeling techniques, and “writing machines”—is concerned with reductive but nonetheless labor-saving methods that use the untiring repeatability of computational tasks to garner statistically informed deductions about novels or other works that one has not read.17 Predictably, this horrifies many who work in literary studies departments. But it is part of an acknowledgment of the fact that, for many years now, more contemporary fiction has been published every year than it is possible for a single person to read in a lifetime. (In 2015, according to Bowker data, almost three million new books were printed in English alone, of which 220,000 were novels. A good estimate for the number of days in a human lifespan is twenty-six thousand [approximately seventy-one years], using the World Health Organization’s figures as of 2015, so one would need to read an average of ten novels per day, every day from age ten onward, to read all English fiction published in 2015.)18 Again, reading avoidance is nothing new: “not reading,” writes Lisa Marie Rhody, “is the dirty open secret of all literary critics.”19
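The arithmetic behind this claim is easy to verify. The following is a minimal illustrative sketch, using only the figures already cited above (Bowker’s 2015 count of 220,000 novels; the WHO-derived lifespan of roughly twenty-six thousand days) and the text’s own assumption of reading from age ten onward:

```python
# Back-of-envelope check of the "ten novels per day" claim.
# Figures are those cited in the text (Bowker, 2015; WHO, 2015);
# the start-from-age-ten assumption is also the text's own.
novels_published_2015 = 220_000   # new English-language novels, 2015
life_expectancy_years = 71        # roughly twenty-six thousand days
reading_starts_at_age = 10

reading_days = (life_expectancy_years - reading_starts_at_age) * 365
novels_per_day = novels_published_2015 / reading_days

print(f"{reading_days:,} reading days -> {novels_per_day:.1f} novels per day")
# -> 22,265 reading days -> 9.9 novels per day
```

The result, just under ten novels per day, bears out the estimate in the text.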
In one sense, then, telescopic distant reading is an antinecrotic practice, one that staves off the limiting effects of death. But it is also an antireading practice that substitutes for direct, human engagement with literature—at least, that is, once the methods and models have been developed.20 It should nonetheless not be overlooked that, as Richard Jean So notes, the benefit of an “iterative [digital literary-modeling] process is that it pivots between distant and close reading. One can only understand error in a model by analyzing closely the specific texts that induce error; close reading here is inseparable from recursively improving one’s model.”21 In this respect, distant and close reading practices perhaps diverge less than detractors sometimes imagine. That said, and put otherwise, there remains a death-avoidance-to-reading-avoidance trade-off implicit beneath most broad-scale digital literary work. These techniques of scaling the wall of the “great unread” of literary history give us more labor power (an artificial life extension) at the expense of a sort of alienation from the literary text as traditionally conceived by literary studies (“not reading”).22 Perhaps, though, this underlying mortal limit is why so many critiques of digital humanities have framed it in terms of the “death” of traditional disciplinary practices.
CLOSE READING—WITH COMPUTERS
The processes of iteration, repetition, and quantitative analysis that are made possible by computational methods have an analogy not just in the telescope but also in another optical instrument: the microscope.23 While both of these tools yield powers of amplification, it is the level of the minute, the unseen, that can be brought to vision beneath the microscope—a kind of newly angled hybrid text, as Geoffrey Rockwell has it, refocused under fresh optics.24 For though Barbara Herrnstein Smith has objected to comparing traditional close reading to a microscope, there are textual elements within novels that are too minute in scope for readers to detect without computational assistance.25 What can the computer see, in its repetitive and unwavering attention, that was less (or even in-) visible to me as a human reader? What evidence might we gather for our understanding of texts at the close level through similar methods? Might such an effort rebalance the necroreading ratio and bring us back to the text?
Close reading, however, has come under fire in certain digital humanities circles.26 For instance, it has been claimed that “if you want to look beyond the canon, close reading will not do it”; instead, what is sought is a “formalism without close reading.”27 In the new world of knowledge that such figures desire, knowledge “cannot mean the very close reading of very few texts,” even while the definition of “distant reading” includes units that are “much larger” but also, crucially, “much smaller” than the novel.28 Close reading has become, for a group of critical scholars, a form of theology that invests too heavily in the sacrosanct nature of a few texts, a fact that is not surprising given the historical links between, and cothinking about, literary and religious canons.29 Shawna Ross provides an astute recapitulation of the various prominent digital humanities figures who have thought of computational techniques as opposed to close reading.30 Lev Manovich, for instance, posits that “database and narrative are natural enemies,” implying that “each claims an exclusive right to make meaning out of the world.”31 As another example, data that have been through machine learning processes are, for Rafael Alvarado and Paul Humphreys, possessed of “a representational opacity” that requires a second-order interpretative paradigm to be grafted on top, moving us ever further away from close attention to the object itself.32 Finally, Matthew Wilkens also sees digital methods—albeit referring to specific types of geographic information systems (GIS)—as existing in tension with textual attention. If we deploy these methods, he claims, “we’ll almost certainly become worse close readers.”33
What does it mean, though, to be a good, bad, better, or worse close reader? What, for that matter, is “close reading”? As Peter Middleton notes, the phrase “close reading” refers to “a heterogeneous and largely unorganized set of practices and assumptions.”34 Indeed, just as “different versions of distant reading” are not really a “singular project,” in Andrew Goldstone’s words, there is no singular method that constitutes close reading.35 Nevertheless, to many in the field of literary studies this question of what we mean by “close reading” might seem so obvious as to need no answer. We are used, in the present moment, to paying close attention to the language of writers and to using the fruits of this practice to make arguments. As Jonathan Culler puts it, “the practice of close reading, of examining closely the language of a literary work or a section of it, has been something we take for granted, as a sine qua non of literary study.”36 This was not always so. In Jessica Pressman’s recent assessment, mirrored by others, close reading only “became a central activity of literary criticism” in the “modernist” period.37 That said, although the discipline of “English language and literature” is relatively young, being founded in 1828 at University College London, it can feel surprising, from our contemporary vantage point, that it took until the modernist period for close reading to develop.38
Nonetheless, the Arnoldian conception of literary studies and belles lettres, or even the discipline’s forebears in literary history and philology, gave way in the early twentieth century to the formalist New Criticism, pioneered by I. A. Richards. In The Principles of Literary Criticism (1924), before his influential Practical Criticism (1929), Richards introduced the notion that “unpredictable and miraculous differences” might come about “in the total responses” to a text from “slight changes in the arrangement of stimuli,” and he noted that these are, therefore, worthy of study.39 In another work, How to Read a Page (1942), Richards contrasts the biographer with the reader, the latter of whom is “not concerned with what as historical fact was going on in the author’s mind when he penned the sentence, but with what the words . . . may mean.”40 Of note in Richards’s turn to language and away from the authorial persona is the assertion that such an approach would allow the reader to go “deeper.”
The spatial relationship between the metaphors of closeness and deepness, of proximity and profundity, in reading practices has never been entirely clear but has certainly been a subject of debate. As Nancy Armstrong and Warren Montag note, even the canonical figures of the digital field “won’t let us construe the distance implied by distant reading in opposition to the closeness and polysemy of literary language.”41 Indeed, most post-1965 approaches to literature that posit a textual politics conceive implicitly of works of literature as ideological by-products of their time through a specific type of “knowledge effect.” In a basic Marxist framework this claim to social binding is that the superstructure of art is conditioned by the economic base and, to a lesser extent, vice versa. But it is the Althusserian epistemology, as set out in Reading Capital (1965), that most strongly underpins contemporary ideas of “critical reading” or “literary critique” based on “close” and “deep” reading.42 By examining textual presuppositions, it becomes possible, Louis Althusser claims, to see what a text cannot say as a condition of its ideological positioning within its own time. 
In this way, and although only an explicit articulation of a set of practices that had been building for some time, “symptomatic reading” was born—a mode of reading that conceives of texts as ideological artifacts with spoken and unspoken components—“sights and oversights”—that can be read critically and reflexively.43 That is, texts exhibit symptoms—usually contradictions or conceptual difficulties—of the unspoken ideological environment in which they were written; these symptoms are the “absence of a concept behind a word,” and they became the excavation site of most critical, nonsociological methodologies in literary studies.44 As these two metaphors of space put it—a concept behind a word and a site of buried interpretative treasure to be dug up—symptomatic, critical reading poses a text-behind-the-text, a presupposition of “the existence of two texts” with a “different text present as a necessary absence in the first.”45 This epistemology, in other words, is one in which the effect of producing knowledge is conditioned by structures of ideology and empiricism, which can be detected below the surface of any writing—that is, at depth.46 Such a reading method is core to critique, since it allows for the claim that texts might betray themselves and speak at depth in ways that are contrary to their surface readings.
Yet the seams of deep, close, symptomatic reading have begun to fray. Almost thirty years ago, Stewart Palmer asked what it might mean to perform “a critique of these critiques,” and almost two decades later, Cathy N. Davidson and David Theo Goldberg suggested that it was time that we “critiqued the mantra of critique.”47 Five years after that, Stephen Best and Sharon Marcus pointed out that the politics of political reading are somewhat tenuous. For although it has “become common for literary scholars” in symptomatic traditions, they write, “to equate their work with political activism, the disasters and triumphs of the last decade have shown that literary criticism alone is not sufficient to effect change.”48 Likewise, N. Katherine Hayles has more recently noted that “after more than two decades of symptomatic reading . . . many scholars are not finding it a productive practice, perhaps because (like many deconstructive readings) its results have begun to seem formulaic.”49
At the logical extreme of this growing suspicion of critique sits Rita Felski’s 2015 tract The Limits of Critique, although this work has not received a universally warm welcome.50 Felski’s book places Althusserian symptomatic reading under the primacy of Paul Ricoeur’s phrase, the “hermeneutics of suspicion”—another term that implies a detective-like aspect in which hidden, deep, and unsuspected layers of truth are to be made manifest. This phrase is most commonly but erroneously traced to Ricoeur’s work on Freud, Marx, and Nietzsche, first published in French in 1965, the same year as Reading Capital (but misdated by Felski to 1950).51 Felski correctly acknowledges, however, that the phrase does not come from the Freud and Philosophy book, noting that “Ricoeur came up with the term at a later date while reflecting on the trajectory of his own work.”52 More specifically, as traced by Alison Scott-Baumann, the first use of this terminology is in Ricoeur’s preface to Don Ihde’s Hermeneutic Phenomenology: The Philosophy of Paul Ricoeur, in 1971.53 The phrase also subsequently appears in “Biblical Hermeneutics” (1975) and The Rule of Metaphor (1975/1977). By 1982, however, Ricoeur had abandoned the term, referring to “what [he] called in the past ‘the hermeneutics of suspicion.’”54 There is, notes Scott-Baumann, “more activity outside Ricoeur’s texts on the use of this term than within his texts.”55
Despite the fact, then, that he really refers to a “school of suspicion” at one stage—rather than a “hermeneutics of suspicion” for any concerted period (and, in fact, abandons the phrase)—what is most apt about these Ricoeurian words, for Felski, is that the “phrase throws fresh light on a diverse range of practices that are often grouped under the rubric of critique: symptomatic reading, ideology critique, Foucauldian historicism, various techniques of scanning texts for signs of transgression or resistance.”56 It is an attitude to close reading that combines “vigilance, detachment, and wariness (suspicion) with identifiable conventions of commentary (hermeneutics)—allowing us to see that critique is as much a matter of affect and rhetoric as of philosophy or politics.”57 The phrase “hermeneutics of suspicion” is a profitable description, Felski suggests, more as “a stimulus to thought” about contemporary close-reading practices than as a fully historicized phase within Ricoeur’s phenomenology.58
Amid Eve Kosofsky Sedgwick’s “reparative reading,” Best and Marcus’s “surface reading,” Althusserian “symptomatic” approaches, and Ricoeur’s “hermeneutics/schools of suspicion,” it is clear that the phrase “close reading” carries with it a variety of orientations to depth and surface that are independent of closeness to language. As Culler puts it, however, even though “close reading need not involve detailed interpretation of literary passages (though there is plenty of that around in close reading, especially when the texts in question are difficult to understand),” it is about “attention to how meaning is produced or conveyed, to what sorts of literary and rhetorical strategies and techniques are deployed to achieve what the reader takes to be the effects of the work or passage.”59 Close reading seeks, in most cases, to press linguistic detail in the services of literary argument and interpretation.
Given the popular academic impression of digital and quantitative approaches to literature as concerned, then, with distance and scale, some readers might be surprised to hear that this question of close textual analysis has occurred to many others in the digital space, although it remains a less common way of operating. I do not in any way propose it as a novelty even while I aim here to invite a broader audience to the table.60 It is not quite true, as Dimock puts it, that “unlike close reading, distant reading is meant to track [only] large-scale developments; it is not meant to capture the fine print.”61 For instance, as far back as 1987, John Burrows examined the novels of Jane Austen, in detail, through experimental quantitative methods.62 In 1990 Eviatar Zerubavel graphed the “percentage of emotional content” in French poetry among poets born from 1790 until 1909 in twenty-year intervals, and Mark Olsen outlined the potential transformations that such studies could have (bridging the gap between the very specific/close in poetry and the broader/more general history).63 Catherine Nicholson even traces the tension between the specific and close and the broad and general in reading practices as far back as 1598.64 On the one hand, the esteemed journal Literary and Linguistic Computing (recently renamed Digital Scholarship in the Humanities) has featured, over the past three years, at least two papers that examine single texts in detail. On the other hand, the Journal of Digital Humanities has had none since 2015.65 Other scholars, such as Miyuki Yamada, Yuichi Murai, and Ichiro Kumagai, have further focused on the visualization of linguistic features of single texts, in their case Charles Dickens’s A Christmas Carol (1843).66 The London-based artist Stefanie Posavec also undertook a detailed visualization exercise with Jack Kerouac’s On the Road (1957) in 2008, and the Novel Views project examined Les Misérables.67 The list of close-yet-distant reading practices goes on. Notably, when such methods are used, they are usually framed as “textual analysis,” which most often bears only a slight relationship to textual scholarship or traditional literary hermeneutics.
There is also a movement that seeks to read digital or electronic literature closely (which is not the same as close reading literature, digitally). For instance, Pressman has recently turned to the ways in which various contemporary works of e-literature remodel modernist texts into documents that provide “immanent critiques of their technocultural context.”68 Others, such as Hayles, have brought new-media approaches to the study of contemporary novels, such as Mark Z. Danielewski’s House of Leaves (2000), over the course of several pieces, noting the emulation of digital technological artifacts within such works.69 Zara Dinnen has recently shown how digital technologies have seeped into much contemporary fiction in a way that appears so normalized as to be almost banal, ways that make us feel as though digital media are nonmediating forms.70 Such methods do not necessarily use digital or quantitative approaches to study conventional works of print literature but instead use conventional humanistic techniques to analyze works that take advantage of the digital medium or digital technologies and their representations.
Yet those works that do use digital methods to close read are for the most part distinctly digital humanities pieces or visualization art forms in their own right. This is not to denigrate, as some do, this area of digital humanities practice.71 It is to point out that it can be difficult to reintegrate the two disciplinary spaces. This comes in part from the fact that, as Alan Liu has noted, those who value close reading have often sneered at activities in the virtual space, branding them the antithesis to their practices: “browsing, chatting, and affiliated modes of Net usage” are scorned as supposed “casual, quick act[s] of half-attention” and “easy consumption.”72 Furthermore, as David Hoover put it in 2007, there has been a consistent “marginalization of textual analysis and other text-centered approaches” that has pushed microlevel digital analyses of texts out of the mainstream.73 Stephen Ramsay also laments that the “digital revolution, for all its wonders, has not penetrated the core activity of literary studies.”74 Ted Underwood points out that people have not really “tuned in very much yet” to such approaches, “beyond superficial controversies about close reading—distant reading.”75 Indeed, Andrew Jewell and Brian L. Pytlik Zillig write that they are aware “of only a handful of scholars who use text analysis in their literary criticism.”76 Perhaps, they write mournfully, someday “scholars will publish wide-ranging articles on broad themes and close readings using textual analysis.”77 I agree, for the results that can be obtained would, I contend, often be of interest to literary scholars; however, the disciplinary structures of the digital humanities make it difficult to reintegrate such work with mainstream literary criticism. 
Although I do not intend to rehash the many debates about digital humanities’ bounded autonomy, what I aim to achieve in this book is a series of close-reading exercises that use computational techniques but that, in so doing, alienate neither the reader from the text nor the findings from mainstream literary criticism.78
In other words, through computational reading and following the calls of Alan Liu and Tanya E. Clement, this book goes “back to the text.”79 Here I move away from identity thinking between texts and historical trends in order to recover the specific and the unique. For, as Theodor W. Adorno writes on a point to which I will return, “objects do not go into their concepts without leaving a remainder.”80 The same is true for literature, and this book seeks to recover for computational methods those textual remainders, those overspills that may be anomalies in terms of broad-scale history but that lend literary works their singularity.81 This book uses digital methods where they are helpful and appropriate for close textual attention but abandons such approaches when they become overly forced; as Alison Booth has recently put it, I aim to avoid everything looking like a technological nail just because I have a digital hammer.82
In particular, this book interrogates one specific text that I have chosen as an exemplar for the methods deployed herein: the popular, award-winning, and genre-bending contemporary novel by David Mitchell, Cloud Atlas (2004).83 Mitchell is the author of eight novels at the time of this writing (one of which, From Me Flows What You Call Time, will not be published until the year 2114 as part of the Future Library project).84 But his third novel, Cloud Atlas—which Kristian Shaw labels the second in a global trilogy including Ghostwritten (1999) and The Bone Clocks (2014)—deals with a vast and (aptly) telescopic history.85 This novel is divided into six generically distinct registers with a pyramid-style cascade toward the future in which each section breaks halfway only to move to the next chapter, providing an innovative formal mechanism. Certainly, others have also recognized Mitchell’s text as a historico-generic work. Casey Shoop and Dermot Ryan, for instance, locate the novel within the space of “Big History,” a mode that aims to survey the whole of human time.86 Fredric Jameson, likewise, refers to the generic divisions of the novel in terms of a massive-scale imagined elevator that stops on “disparate floors on its way to the far future.”87 This is a text concerned with the ideas of time compression and the finitude of humanity in relation to the scale of history of which I have already spoken; Rose Harris-Birtill has even cited Mitchell’s own use of the metaphor of the telescope to describe his fictional macroverse.88 Cloud Atlas is, in a sense, a novel that performs a distant reading of world history and its future projection.
Many critics of the novel have remarked on its linguistic play and on Mitchell’s seemingly protean ability to shift between generic moods at will.89 In this sense Mitchell’s novel contains multitudes; it is a case study for genre-shifting and multiple styles within a single novel even as it is templated and based on “recurring pattern[s].”90 Critics have also noted the novel’s incursion into digital space, with its imitations of new-media ecologies that John Shanahan has called the text’s “digital transcendentalism.”91 It is, then, the way in which Cloud Atlas mediates a colossal philosophical historiography through minute and detailed attention to linguistic morphology within a new-media frame that attracted me to adopt the novel for a study of what might be possible for digital close reading. Cloud Atlas seems to effect the very compression of reading-labor time that is desired from computational approaches to big literary history through its language games and condensed world-historical progression.
For those unfamiliar with this novel it is worth a brief detour to explain its narrative progression. Cloud Atlas is composed of six distinct chapters: “The Pacific Journal of Adam Ewing,” “Letters from Zedelghem,” “Half-Lives: The First Luisa Rey Mystery,” “The Ghastly Ordeal of Timothy Cavendish,” “An Orison of Sonmi~451,” and “Sloosha’s Crossin’ an’ Ev’rythin’ After.” The text jumps, sometimes midsentence, from one chapter to the next, only to resume that narrative strand at the corresponding opposite end of the novel. Historical time moves forward from the 1850s through the twentieth century and into two final speculative future time periods, before cascading back down through time to the 1850s. This crossover between narrative and historical time is shown in figure 1.
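The pyramid structure just described can be sketched concretely. In the following minimal model, the chapter titles are taken from the text, while the “first half”/“second half” labels are my own shorthand; only the central chapter runs uninterrupted, and every other chapter is split around it:

```python
# The six chapters of Cloud Atlas, in historical (and first-half
# narrative) order, from the 1850s to the postapocalyptic future.
chapters = [
    "The Pacific Journal of Adam Ewing",
    "Letters from Zedelghem",
    "Half-Lives: The First Luisa Rey Mystery",
    "The Ghastly Ordeal of Timothy Cavendish",
    "An Orison of Sonmi~451",
    "Sloosha's Crossin' an' Ev'rythin' After",
]

# First halves ascend through historical time; the central chapter runs
# whole; the second halves then cascade back down toward the 1850s.
first_halves = [f"{title} (first half)" for title in chapters[:-1]]
second_halves = [f"{title} (second half)" for title in reversed(chapters[:-1])]
reading_order = first_halves + [chapters[-1]] + second_halves

for section in reading_order:
    print(section)
```

The resulting eleven sections form a palindrome around “Sloosha’s Crossin’,” which is the shape that figure 1 depicts as the crossover between narrative and historical time.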
While this narrative structure is, on its own, quirky (although Mitchell rejects the “experimental” label), the linguistic profile of each chapter is also supposedly mimetic of the time period in which the section is set.92 Hence, when a chapter is set in the 1850s, the narrative voice is altered to the language of the time.93 When writing of a postapocalyptic future, Mitchell’s language is degraded, regressing to a phonocentric transcriptive model. This language shift is akin to what Brian McHale has called “genre poaching” within a “mediated historiography”—an appropriation of linguistico-cultural mimesis for each time period (though a text set in the future can “appropriate” the literary style of that future only speculatively).94
It is also clear that Mitchell had particular literary sources in mind for each chapter when he wrote the novel, pulling these out in interviews and even embedding some clues within the text itself (see table 1).95 Because Mitchell’s chapters have certain literary lineages, it might be fairer to class this text less as a mediated historiography, although it certainly is that, than as a novel that, through its cultural appropriation, is about genre. This is a mode that I have elsewhere called “taxonomographic metafiction”: “fiction about fiction that deals with the study/construction of genre/taxonomy.”96
This genre-construction, though, also has a unique applicability for the work to which I turn in Chapter 2, on reading genre computationally (I also address in that chapter the thorny question of what we actually mean by genre). For, in its heterogeneity, but nonetheless single authorship, Cloud Atlas gives us a way to ask what happens when authorship-attribution techniques are applied in adversarial settings against generic divergence. This novel further allows us to ask questions of historical fiction, linguistico-mimetic accuracy, and realist detail, as I do in Chapter 3. It is also a text with a curious publishing history that invites comment on textual variance and (the lack of) textual scholarship more broadly in the field of contemporary fiction. In its plurality Cloud Atlas is a fantastic playground in which to test a range of answers to many questions.
Of course, Cloud Atlas is hardly the only novel to adopt such an approach; John Mullan points to Michael Cunningham’s The Hours (1998) and Iain Pears’s The Dream of Scipio (2002) as precursors, even while he admits that he “cannot think of another novel that is so formally divided up between genres” as Cloud Atlas.97 Dan Simmons’s Hyperion (1989) also immediately springs to mind alongside the generic hybridity of Jonathan Lethem in novels such as Gun, with Occasional Music (1994). One could argue further that Jennifer Egan’s A Visit from the Goon Squad (2010) moves toward this plurality, no matter how difficult it may be to classify that text as a “novel” as opposed to a “short-story cycle.” Even a novel such as Roberto Bolaño’s monumental 2666 (2004) distinctly changes in linguistic register among its constituent parts. Audre Lorde’s Zami: A New Spelling of My Name (1982) and Josephine Tey’s The Daughter of Time (1951) perform similar historico-generic leaps. In absolute terms Alan Moore’s Jerusalem (2016) probably goes furthest toward a similar mode of generic hybridity, with its excessive overloading of voices.98 Yet Mitchell’s novel is emblematic precisely for its measured and neatly divided multigeneric mode; it is the example par excellence of such genre shifting. The novel is not one of genre crossing, in which new hyphenated genre identities are born, but one of multigenericity.
There are, however, important political challenges in writing a single-author (or, in fact, single-novel) study. In my undertaking to focus a set of computational practices back on the text, there is also a refocusing on a white, British, male, middle-class, heterosexual author.99 While Mitchell’s oeuvre certainly does not shy away from exploring issues of postcoloniality, racism, class and labor divisions, disability, gender, sexuality, and other identity forms, I am conscious of the ways in which this study closes in on one particular strand of authorial entity that has already been overprivileged in critical history. I am also acutely aware of the potential difficulties of writing about changes in genre fiction and its practice within a work of remarkably self-conscious literary fiction. For Cloud Atlas does not pass itself off as a work of genre fiction; it has high aspirations, albeit without snobbishness, gesturing toward postmodernism (explicitly mentioned in the novel) and other high-cultural reference points (such as Arnold Schoenberg). In addition to these challenges of authorial identity there is a distinct privileging in much of the critical scholarship of a “literary” work over “genre fiction” in which I neither believe nor wish to be invested. The novel, in fact, even mocks itself (to some extent) on this front in Adam Ewing’s closing lines that interpellate the reader into a specific assumed social position: “You & I, the moneyed, the privileged, the fortunate.”100 Yet, for the formalist reasons of genre outlined above, it is Cloud Atlas to which I have turned, a work whose multigenericity acts as a near-perfect and unparalleled arena for the formalist trials to which I subject the text.
Using a literary-computational microscope in the contemporary world involves a great deal of work. Indeed, just to study this one novel was much more work than any other literary-critical project I have ever undertaken. I began this project as a unified endeavor shortly after discovering the substantial textual variants between the editions that are detailed in Chapter 1. I quickly realized, though, that if I wanted to conduct further work on the novel, I would require a digital, plain-text version of the book.101
How to obtain this? The majority of literary works on which others conduct computational research are out of copyright and so can be freely circulated online. Many are on the excellent Project Gutenberg site. In my case I had an Amazon Kindle version of Mitchell’s contemporary, in-copyright novel (complete with Digital Rights Management [DRM] protection) and a Sceptre paperback edition. The digital protections on the Kindle text, however, make the format unsuitable for the types of textual experiment with which I wished to engage. I needed a version that was unencumbered. The seemingly obvious solution was to remove these DRM protections. At around the same time, I began supervising a graduate student, Erik Ketzan, who also happens to be an accredited legal professional. In an informal capacity Erik brought a very specific legal problem to my attention.
In the UK, as of 2017, there is a provision in law that implements EU Directive 2001/29/EC.102 Under the UK's implementation of this dry directive, it is a criminal offense to break the DRM on digital files. In other words, it is illegal for me, even for personal or research purposes, to remove the DRM from an Amazon Kindle file. Neither author nor publisher nor any other rightsholder could, therefore, grant me permission to remove the DRM and so absolve me of a criminal offense (which would contravene the research ethics procedures at my university). That said, the directive is supposed to contain protections allowing personal use of, or research on, such texts. Indeed, the directive states:
Notwithstanding the legal protection provided for in paragraph 1, in the absence of voluntary measures taken by rightsholders, including agreements between rightsholders and other parties concerned, Member States shall take appropriate measures to ensure that rightsholders make available to the beneficiary of an exception or limitation provided for in national law in accordance with Article 5(2)(a), (2)(c), (2)(d), (2)(e), (3)(a), (3)(b) or (3)(e) the means of benefiting from that exception or limitation, to the extent necessary to benefit from that exception or limitation and where that beneficiary has legal access to the protected work or subject-matter concerned.
In the UK this is implemented in section 296ZE of the Copyright, Designs and Patents Act 1988, which provides a way to contest situations wherein a rightsholder’s Technological Protection Measures prevent an authorized exempted use. This involves a twofold process: (1) asking the publisher voluntarily to provide a copy that can be used in such a way; and (2) asking the secretary of state to issue directions that would yield a means of benefiting from the exemption on Kindle-format books for noncommercial academic research purposes. This process is very time-consuming and typically has little chance of providing the desired exemption.103
Because I wanted to get the work under way, rather than risking an unsuccessful and lengthy governmental appeal process, I had two options: (1) I could scan the text from the paperback, then run an Optical Character Recognition (OCR) process on the text, and then finally reread the novel and correct any errors alongside my digital version; or (2) I could manually retype the text from the Kindle or paperback editions. I eventually settled on the latter option, since I wanted to work on the Kindle variant of the work, knowing, as I do, that the UK paperback is substantially different but that the Kindle edition mirrors the US paperback. As a result, I spent many days retyping and then thoroughly checking a digital copy of Cloud Atlas. This was both a tiring and tiresome endeavor, and I hope that, at some point in an enlightened future, digital versions of in-copyright texts might be available to purchase in forms that will allow computational research to be conducted on them, as we have seen with recent moves in the HathiTrust archive. For now, though, suffice it to say that it remains an incredibly labor-intensive process even to get to the point where one has a research object on which to work.
This is why I refer to the techniques conducted herein as a microscope rather than as any kind of “distant reading” that might save me the work of actual reading. Using computational methods to study a single contemporary text that is under copyright has saved me no reading labor at all. Indeed, in retyping the novel, I have read the text more closely than I have read any other novel. Yet, without the computational methods, I still could not see, as per my somewhat ironic epigraphic reference to Matthew B. Crawford’s book on the value of manual labor. As Crawford puts it, it is as though the computational methods provided “the pertinent framework of meaning” that, previously, “I lacked the knowledge to see.” Indeed, the methods that I use here—and that I have written the software to be able to use—perform a very old literary-critical function: through a type of deformative reconstruction, they make clear something that was directly under our noses but that still required elucidation.104 In Crawford’s phrasing, “the raw sensual data reaching my eye before and after are the same,” but, prior to the retyping exercise and before I trained my computational methods on the text, “the features in question [were] invisible.” Yet once such features “have been pointed out, it seems impossible that I should not have seen them before.” The computational micro-, rather than macro-, scope can teach us things about texts that we could see with our own eyes were we infinitely patient and infinitely obsessive. But I am neither of these things, so I need the iterative microscope.
The techniques in this book that I call close-textual digital microscopy in some ways increase the possibilities in rebalancing the labor-death ratio. On the one hand, they still perform tasks that are too tedious for humans to undertake manually. On the other hand, at least for contemporary fiction, there is still an exceptionally lengthy, difficult, and time-consuming process to be undertaken before it is legally possible to use such techniques.
There are also some philosophical challenges with close-textual digital microscopy. What do we see through a microscope, and how can we be sure that the very ways of looking do not just mirror our own concerns? That is, how can we avoid the criticism that Adorno leveled at applied philosophy that such a method might read “out of works that it has invested with an air of concretion nothing but its own theses”?105 Indeed, the epistemology of the microscope is an apt metaphor for the literary processes to which I turn here. Ian Hacking has comprehensively examined the ways of seeing with a microscope and concludes that while the microscope offers no insight into scientific realism—that is, whether what we see with a microscope is real—the instrument nonetheless provides a good “map of interactions between the specimen and the image of radiation.”106 Even if the “images” produced through the optic or literary-textual microscope contain artifacts of the invasive seeing epistemologies that we deploy, these can be dispelled, Hacking argues, through intersubjective confirmation.107 For this reason, at the close of this book I offer up the underlying data from my microscopic experiments for others to scrutinize and reverify as they so choose. The source code for the programs that I have written—the blueprints for the microscopes—is also openly available online for inspection.
This is a book, then, in which I examine the publishing history, generic styling, and approaches to the interpretation of Cloud Atlas with the help of a range of computational methods. It is an attempt to drill down into the editorial changes that the novel underwent and to create a graphic stemma of textual variance and different editions. It is also an attempt to pinpoint more precisely the generic language tricks that Mitchell uses to create his shape-shifting novel, while isolating meaningful, distinguishing discriminators from consistent factors throughout his prose. It is also an effort to understand linguistic mimesis in historical fiction. To undertake these tasks, I use a range of techniques.
Although I work in this book on a single novel, for those who wish to think in larger terms, the computational methods I present herein can certainly be seen as exercises in methodological development and extrapolated to other texts. The methods that I develop vary in their mathematical and computational complexity, from the heights of Chapter 2 and authorship/genre attribution, down to the simpler, more repetitive techniques of Chapter 3. This book is not, however, a “how to” guide for text analysis. For that, I would recommend Jockers’s Text Analysis with R for Students of Literature.108
As is customary—for I have not abandoned all tradition—I will close this introduction with an outline of the work’s progression. This book is structured around a series of questions and answers that correlate roughly to the chapters. The questions pertain broadly to textual scholarship, to the syntax of genre, and to the language of historical fiction. The first of these questions is, What is the place of textual scholarship in contemporary fiction, and how might digital techniques aid us in understanding textual variance? Such matters have recently come to prominence in the field of contemporary literary studies. For instance, in his “Contemporary Fiction: Towards a Manifesto,” published in Textual Practice, Robert Eaglestone laments that one understudied aspect of “contemporary fiction is what we might call the ‘contemporary history of the book’: the ways in which the business of publishing helps to shape and control contemporary fiction. There seems to be a dearth of research into this aspect of the field.”109
In response to this rallying cry for greater engagement with literary sociology (the future of the history of the book, as Matthew Kirschenbaum termed it at his recent books.files event), it is on textual scholarship that I first train the digital microscope: the publishing history and version variants of Cloud Atlas. In 2003 David Mitchell’s editorial contact at the US branch of Random House left the publisher, leaving the American edition of Cloud Atlas without an editor for approximately three months. Meanwhile, the UK edition of the manuscript was undergoing a series of editorial changes and rewrites that were never synchronized back into the US edition of the text. When the process was resumed at Random House under the editorial guidance of David Ebershoff, changes from New York were likewise not imported back into the UK edition. In the section entitled “An Orison of Sonmi~451,” these desynchronized rewritings of the UK and US paperbacks/electronic editions differ significantly—indeed, almost totally—at the level of linguistic expression, and there are a range of subepisodes that feature in only one or the other of the published editions.
The digital component that I here introduce is a novel method for the visualization of differences between texts. In fact, close focus on the variations between the versions of a novel is precisely the type of narrow depth for which I would say that computational methods can be most helpful. It is extremely difficult to succinctly explain, using conventional textual argument, precisely what has happened in the rewriting and editing of different editions. This is why complex, and to the layperson seemingly impenetrable, notations have been introduced into critical editions. The visualization that I deploy here, however, aids us greatly in understanding the ways in which the narrative is reflowed and ordered between the versions of the novel.
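The kind of between-edition comparison just described can be sketched in miniature. What follows is not the visualization method introduced in the book but a minimal, illustrative sketch using Python's standard-library `SequenceMatcher`, which reports which blocks of two texts match and which diverge; such aligned blocks are the raw material from which any map of variant readings or narrative reflow is built. The two sample sentences are invented stand-ins, not quotations from either edition.

```python
# Minimal sketch of programmatic alignment between two editions of a
# passage. SequenceMatcher labels each region of the two word-sequences
# as "equal", "replace", "insert", or "delete" -- the building blocks
# of a variant map between versions.
from difflib import SequenceMatcher

uk = "An Orison of Sonmi~451 differs almost totally between editions".split()
us = "An Orison of Sonmi~451 was rewritten almost totally for the US text".split()

matcher = SequenceMatcher(a=uk, b=us)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    print(f"{tag:8} UK:{uk[i1:i2]} US:{us[j1:j2]}")
```

A real tool would operate on aligned paragraphs or sentences rather than toy word lists, but the principle, matched blocks made visible against divergent ones, is the same.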
The second structuring question of this book pertains to the syntax and contexts of genre: What can a computational formalism tell us about genre? Answering this question requires a detailed discussion of the underlying assumptions and limitations of computational stylometry. For instance, we might ask whether there really exists such a thing as a “stylistic naturalism” for every author, or whether stylometry actually measures subconsciously inscribed features of a text at all. That is to say, the linguistic and syntactic elements of most computational profiling are based on suppositions of authorship attribution, yet we too frequently overlook the implicit assumptions beneath such methods, which, I argue, actually give us a better insight into genre than into authorship.
I go on to show that authorship-attribution techniques incorrectly cluster the chapters of Cloud Atlas as the work of distinct “authors” when anything more than the twenty most common words is considered. This has implications for our understandings of literary style and authorship, since the algorithms usually used for authorship attribution concur that Mitchell’s generic segments read as though written by different authors. Instead, I conduct a context-free analysis of the syntax of genre using a part-of-speech (PoS) trigram visualization and analysis. This is a way of making visually clear the part-of-speech formulations that appear only in one section of the text or another, or that do so with a specified margin of additional frequency. Overall, this line of inquiry is designed to demonstrate the ways in which we can understand the microtectonic linguistic shifts in novels through a set of computational methods. I here ask fundamental questions about what we mean by “literary style” and about the measures by which we can group different forms of writing. I use visualization tools to locate points of interest, which are then resynthesized into close readings.
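The most-frequent-word profiling on which such stylometric clustering typically rests can be illustrated with a short sketch. This is not the book's own software: it is a minimal stand-in, using two invented snippets rather than text from the novel, that shows the basic move of reducing each segment to the relative frequencies of the corpus's N most common words and comparing those profiles by a simple distance measure.

```python
# Illustrative most-frequent-word (MFW) stylometry: each segment becomes
# a vector of relative frequencies over the corpus's top-N words, and
# segments are compared by Euclidean distance over those vectors.
from collections import Counter
import math

segments = {  # invented sample snippets, not quotations from the novel
    "journal": "i am the sea and the sea is in me upon the waves i wrote",
    "orison": "the corpocracy decreed that i was a fabricant and i obeyed the decree",
}

tokens = {name: text.split() for name, text in segments.items()}
corpus_counts = Counter(w for words in tokens.values() for w in words)
mfw = [w for w, _ in corpus_counts.most_common(5)]  # top-N most frequent words

def profile(words):
    # relative frequency of each MFW word within one segment
    counts = Counter(words)
    return [counts[w] / len(words) for w in mfw]

def distance(p, q):
    # Euclidean distance between two MFW profiles
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

profiles = {name: profile(words) for name, words in tokens.items()}
print("MFW:", mfw)
print("distance:", round(distance(profiles["journal"], profiles["orison"]), 4))
```

Full stylometric methods (Burrows's Delta and its descendants) normalize these frequencies into z-scores across a reference corpus before measuring distance; the point here is only the shape of the technique whose assumptions the chapter interrogates.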
The last question that I ask in this book is, What does it mean to write as though in some bygone period? That is, how do issues of mimetic accuracy in historical fiction that purports to come from a particular time frame intersect with the formal elements of literary language and linguistics? Specifically, the first section of Cloud Atlas claims c. 1850 as its setting. Narrative clues date the intradiegetic diary object of this chapter to approximately the period between 1850 and 1910. This chapter argues, however, for the construction of a stylistic historical imaginary of this period’s language that is not based on mimetic etymological accuracy. Using word-dating software that I developed, I appraise the etymological availability of Mitchell’s terms for his characters and uncover substantial anachronism. I also show that racist and colonial terms occur with much greater frequency in Cloud Atlas than in a broader contemporary textual corpus, indicating that the construction of imagined historical style likely rests more on infrequent word use and thematic terms from (one would hope) outmoded discourses than on etymological mimesis. In closing, I argue that there are political implications to the “puncturing” of linguistic accuracy for our consideration of Ewing’s journal and its colonial rhetorics.
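The word-dating idea can likewise be sketched briefly. The author's actual software is not reproduced here; this is a hedged illustration in which a tiny hand-made lookup table of first-attestation years stands in for a real etymological resource (such as a historical dictionary), and any word whose earliest recorded date postdates the fiction's purported setting is flagged as an anachronism.

```python
# Hypothetical first-attestation lookup: a stand-in for a real
# etymological data source, included for illustration only.
FIRST_ATTESTED = {
    "schooner": 1716,
    "typhoon": 1588,
    "television": 1907,
    "genome": 1930,
}

def anachronisms(text, setting_year):
    """Return words first attested after the claimed setting year."""
    return sorted(
        w for w in set(text.lower().split())
        if FIRST_ATTESTED.get(w, 0) > setting_year
    )

passage = "the schooner outran the typhoon while the television flickered"
print(anachronisms(passage, 1850))  # -> ['television']
```

Words absent from the lookup are simply passed over here; a working system would need coverage of the whole lexicon, plus handling of inflected forms and sense shifts, before its verdicts on a novel's language could carry any weight.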
The final argument that I make takes the preceding computational analyses and synthesizes the results into a close reading of Cloud Atlas that focuses on the idea of the object-mediated “archive” as central to the novel’s depiction of alternation between the historically unique and the pattern-making efforts of historiography. In closing, I argue that this extends even further into a fragmentation and puncturing of history. While other critics—Caroline Edwards, Courtney Hopf, Patrick O’Donnell, and many others—have discussed the work of time and history in this novel, I argue for a most specific placement of the reader at the end of history, in a particular sense.110 The mimetic cracks in the language of “The Pacific Journal of Adam Ewing” call for a reinterrogation of the “authenticity” of various performed-language contexts.
As a closing word: one early reader of this book remarked that my method herein is unusual, a mode in which I “pose questions, pursue and develop lines of analysis in response, which then lead to the next query, and the next analysis”—a mode of “writing narratives that tell stories.” There is a certain amount of truth to this, although I am not sure that I can claim it to be such a distinguishing feature as the reviewer kindly asserted. In spite of this, it is correct that this book is not a sustained argument in the traditional literary-critical sense. I do not come, for the most part, with a preformed argument that I seek to validate through recourse to the text. (At one point in Chapter 3, for instance, I work through a method that proves not to work: a documentation of process for a null result.) Instead, I aim to chart the questions that the literary text prompted in me and to show how a series of computational methods at the microlevel allowed me to find some, albeit nondefinitive, answers to those questions. This could be called, as it was by this reader, a “critical or theoretical investigative reporting.” In the last chapter I do resynthesize these undertakings within a more conventional literary-critical frame. But in documenting questions, processes, findings, failures, and interpretations in one volume—across the spaces of textual scholarship, genre and language, and historical fiction/mimesis—I take a more reflexive stance on the limitations of my various methods. After all, as one of my favorite-titled articles on the importance of modesty in one’s digital claims puts it: it does not do to pitch too many hardballs, with so few going over the plate.111
1. See Wendy Hui Kyong Chun et al., “The Dark Side of the Digital Humanities,” in Debates in the Digital Humanities 2016, ed. Matthew K. Gold and Lauren F. Klein (Minneapolis: University of Minnesota Press, 2016), 493–509, http://dhdebates.gc.cuny.edu/debates/text/89; Richard Grusin, “The Dark Side of Digital Humanities: Dispatches from Two Recent MLA Conventions,” Differences 25, no. 1 (2014): 79–92, https://doi.org/10.1215/10407391-2420009; Janneke Adema and Gary Hall, “Posthumanities: The Dark Side of ‘The Dark Side of the Digital,’” Journal of Electronic Publishing 19, no. 2 (2016): http://dx.doi.org/10.3998/3336451.0019.201; and Daniel Allington, Sarah Brouillette, and David Golumbia, “Neoliberal Tools (and Archives): A Political History of Digital Humanities,” Los Angeles Review of Books, May 1, 2016, https://lareviewofbooks.org/article/neoliberal-tools-archives-political-history-digital-humanities/. For a good range of counterresponses see “Editors’ Choice: Round-Up of Responses to ‘The LA Neoliberal Tools (and Archives),’” Digital Humanities Now, May 3, 2016, http://digitalhumanitiesnow.org/2016/05/editors-choice-round-up-of-responses-to-the-la-neoliberal-tools-and-archives.
2. See Matthew L. Jockers, Macroanalysis: Digital Methods and Literary History (Urbana: University of Illinois Press, 2013); Franco Moretti, Distant Reading (London: Verso, 2013); and Franco Moretti, Graphs, Maps, Trees: Abstract Models for Literary History (London: Verso, 2007).
3. So many pieces have now been written on how to define the so-called digital humanities that there is even a reader on the topic: Melissa M. Terras, Julianne Nyhan, and Edward Vanhoutte, eds., Defining Digital Humanities: A Reader (Surrey: Ashgate, 2013). While I do not opt here for a lengthy overall definition of the digital humanities, the methods to which I will be referring should become clear over this introduction.
4. See Mario Livio, The Golden Ratio: The Story of Phi, the World’s Most Astonishing Number (New York: Broadway, 2003).
5. Annette Vee, “‘Literary Analysis by Computer’ Offered at Dartmouth, Winter 1969, Working with Paradise Lost. #1960sComputing,” tweet, @anetv, Oct. 4, 2017, https://twitter.com/anetv/status/919219418189660160/photo/1.
6. Nicholas Dames, The Physiology of the Novel: Reading, Neural Science, and the Form of Victorian Fiction (Oxford: Oxford University Press, 2007), 188; Vernon Lee, The Handling of Words (London: Bodley Head, 1923), https://gutenberg.ca/ebooks/lee-handling/lee-handling-00-h.html#ch_VI.
7. W. Moskalew, Formular Language and Poetic Design in the “Aeneid” (Leiden: Brill, 1982), 21.
8. See, e.g., Timothy Brennan, “The Digital-Humanities Bust,” Chronicle of Higher Education, Oct. 15, 2017, www.chronicle.com/article/The-Digital-Humanities-Bust/241424. Brennan argues that counting the word whale in Moby-Dick tells us nothing more than that there are seventeen hundred occurrences of the word whale in Moby-Dick. By way of critique, Brennan’s article is curious in espousing a need for utility from digital projects that he seems also to claim should not be the aim of the humanities disciplines. For a good response to Brennan see Sarah E. Bond, Hoyt Long, and Ted Underwood, “‘Digital’ Is Not the Opposite of ‘Humanities,’” Chronicle of Higher Education, Nov. 1, 2017, www.chronicle.com/article/Digital-Is-Not-the/241634.
9. Laura Miller, “Take Notes, Nate Silver! Reinventing Literary Criticism with Computers,” Salon, April 23, 2014, www.salon.com/2014/04/23/learning_from_failed_books.
10. See, e.g., Mehrdad Yazdani, Jay Chow, and Lev Manovich, “Quantifying the Development of User-Generated Art During 2001–2010,” PLOS ONE 12, no. 8 (2017): e0175350, https://doi.org/10.1371/journal.pone.0175350.
11. Franco Moretti, “The Slaughterhouse of Literature,” Modern Language Quarterly 61, no. 1 (2000): 207–27.
12. Stephen Ramsay, “The Hermeneutics of Screwing Around; or What You Do with a Million Books,” in Pastplay: Teaching and Learning History with Technology, ed. Kevin Kee (Ann Arbor: University of Michigan Press, 2014), 110–20, https://quod.lib.umich.edu/d/dh/12544152.0001.001/1:5/--pastplay-teaching-and-learning-history-with-technology?g=dculture;rgn=div1;view=fulltext;xc=1; Ted Underwood, “The Life Cycles of Genres,” Journal of Cultural Analytics, May 23, 2016, https://doi.org/10.22148/16.005; Ted Underwood and Jordan Sellers, “The Longue Durée of Literary Prestige,” Modern Language Quarterly 77, no. 3 (2016): 321–44, https://doi.org/10.1215/00267929-3570634.
13. Richard Rogers, Digital Methods (Cambridge, MA: MIT Press, 2015), 1.
14. Lisa Gitelman, ed., “Raw Data” Is an Oxymoron (Cambridge, MA: MIT Press, 2013).
15. Wai Chee Dimock, Through Other Continents: American Literature Across Deep Time (Princeton, NJ: Princeton University Press, 2008), 79.
16. See Amy Hungerford, Making Literature Now (Stanford, CA: Stanford University Press, 2016).
17. For more on these forms see Eileen Gardiner and Ronald G. Musto, The Digital Humanities: A Primer for Students and Scholars (New York: Cambridge University Press, 2015), 142–45; Stephen Ramsay, Reading Machines: Toward an Algorithmic Criticism (Urbana: University of Illinois Press, 2011); N. Katherine Hayles, Writing Machines, Mediawork pamphlet (Cambridge, MA: MIT Press, 2002); and Johanna Drucker, “Why Distant Reading Isn’t,” PMLA 132, no. 3 (2017): 628–35.
18. For more on these estimates see Erik Fredner, “How Many Novels Have Been Published in English? (An Attempt),” Stanford Literary Lab (blog), March 14, 2017, https://litlab.stanford.edu/how-many-novels-have-been-published-in-english-an-attempt/; figures for life expectancy come from the World Health Organization, “Life Expectancy,” 2015, www.who.int/gho/mortality_burden_disease/life_tables/situation_trends/en/.
19. Lisa Marie Rhody, “Beyond Darwinian Distance: Situating Distant Reading in a Feminist Ut Pictura Poesis Tradition,” PMLA 132, no. 3 (2017): 659.
20. That said, the development of distant techniques requires the methods to be refined on individual texts. See, e.g., Matthew L. Jockers, “A Novel Method for Detecting Plot,” June 5, 2014, www.matthewjockers.net/2014/06/05/a-novel-method-for-detecting-plot, where the author uses a method on a small number of individual texts to refine an automatic technique that could then be used on a grand scale. Note also that there was extensive subsequent discussion and debate around the technicalities of this method, particularly the use of the low-pass filter. For more see Matthew L. Jockers, “Requiem for a Low Pass Filter,” April 6, 2015, www.matthewjockers.net/2015/04/06/epilogue; Annie Swafford, “Continuing the Syuzhet Discussion,” Anglophile in Academia: Annie Swafford’s Blog (blog), March 7, 2015, https://annieswafford.wordpress.com/2015/03/07/continuingsyuzhet; Matthew L. Jockers, “Resurrecting a Low Pass Filter (Well, Kind Of),” Jan. 12, 2017, www.matthewjockers.net/2017/01/12/resurrecting/; Benjamin M. Schmidt, “Do Digital Humanists Need to Understand Algorithms?” in Debates in the Digital Humanities 2016, ed. Matthew K. Gold and Lauren F. Klein (Minneapolis: University of Minnesota Press, 2016), 546–55, http://dhdebates.gc.cuny.edu/debates/text/99.
21. Richard Jean So, “All Models Are Wrong,” PMLA 132, no. 3 (2017): 671; see also Andrew Piper, “Think Small: On Literary Modeling,” PMLA 132, no. 3 (2017): 651–58.
22. Margaret Cohen, The Sentimental Education of the Novel (Princeton, NJ: Princeton University Press, 2002), 23.
23. Unbeknownst to me at the time of this writing, Ernesto Priego had made this same observation about the “opposite” of distant reading at a recent conference.
26. Close reading has been perceived as under threat from other areas, too, such as the resurgence of literary-historical archival work. See Jane Gallop, “The Historicization of Literary Studies and the Fate of Close Reading,” Profession (2007): 181–86, https://doi.org/10.1632/prof.2007.2007.1.181.
27. Moretti, Distant Reading, 48, 65.
28. Moretti, 67, 48.
29. For more on this see Matthew Wickman, “Theology Still?” PMLA 132, no. 3 (2017): 674–80; Robert Alter, Canon and Creativity: Modern Writing and the Authority of Scripture (New Haven, CT: Yale University Press, 2000); and Charles Altieri, Canons and Consequences: Reflections on the Ethical Force of Imaginative Ideals (Evanston, IL: Northwestern University Press, 1990).
30. Shawna Ross, “In Praise of Overstating the Case: A Review of Franco Moretti, Distant Reading (London: Verso, 2013),” Digital Humanities Quarterly 8, no. 1 (2014): www.digitalhumanities.org/dhq/vol/8/1/000171/000171.html. My account here of some of the oppositions to close reading in various new media and digital humanities spaces is indebted to this work.
31. Lev Manovich, The Language of New Media (Cambridge, MA: MIT Press, 2002), 199.
33. Matthew Wilkens, “Canons, Close Reading, and the Evolution of Method,” in Debates in the Digital Humanities, ed. Matthew K. Gold (Minneapolis: University of Minnesota Press, 2012), 256, http://dhdebates.gc.cuny.edu/debates/part/5.
34. Peter Middleton, Distant Reading: Performance, Readership, and Consumption in Contemporary Poetry (Tuscaloosa: University of Alabama Press, 2005), 5.
35. Andrew Goldstone, “The Doxa of Reading,” PMLA 132, no. 3 (2017): 641.
37. Jessica Pressman, Digital Modernism: Making It New in New Media (New York: Oxford University Press, 2014), 11.
38. Ted Underwood, Why Literary Periods Mattered: Historical Contrast and the Prestige of English Studies (Stanford, CA: Stanford University Press, 2013), 81; see also Franklin E. Court, Institutionalizing English Literature: Culture and Politics of Literary Study, 1750–1900 (Stanford, CA: Stanford University Press, 1992); and Gerald Graff, Professing Literature: An Institutional History (Chicago: University of Chicago Press, 1989).
39. I. A. Richards, Principles of Literary Criticism (London: Routledge, 2001), 158.
40. I. A. Richards, How to Read a Page: A Course in Efficient Reading with an Introduction to a Hundred Great Words (Boston: Beacon, 1959), 15.
41. Nancy Armstrong and Warren Montag, “‘The Figure in the Carpet,’” PMLA 132, no. 3 (2017): 617.
42. There was certainly a broader geist at work around the time of Reading Capital, however, that sought to interrogate reading methods, say with the publication of Roland Barthes’s S/Z (Paris: Seuil, 1970).
43. Louis Althusser et al., Reading Capital: The Complete Edition, trans. Ben Brewster and David Fernbach (London: Verso, 2015), 17 (italics in original).
44. Althusser et al., 32.
45. Althusser et al., 27.
46. Althusser et al., 69.
47. David Stewart, “The Hermeneutics of Suspicion,” Literature and Theology 3, no. 3 (1989): 303; Cathy N. Davidson and David Theo Goldberg, “Engaging the Humanities,” Profession (2004): 45, www.jstor.org/stable/25595777.
49. N. Katherine Hayles, How We Think: Digital Media and Contemporary Technogenesis (Chicago: University of Chicago Press, 2012), 59.
51. Rita Felski, The Limits of Critique (Chicago: University of Chicago Press, 2015), 31; Paul Ricoeur, Freud and Philosophy: An Essay on Interpretation, trans. Denis Savage (New Haven, CT: Yale University Press, 1979). In its own right Ricoeur’s work has generated a vast and ongoing body of secondary critical literature, including Hans-Georg Gadamer, “The Hermeneutics of Suspicion,” in Phenomenology and the Human Sciences, ed. J. N. Mohanty (Dordrecht: Springer Netherlands, 1984), 73–83, https://doi.org/10.1007/978-94-009-5081-8_6; Brian Leiter, “The Hermeneutics of Suspicion: Recovering Marx, Nietzsche, and Freud,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, March 23, 2005), https://papers.ssrn.com/abstract=691002; and Alison Scott-Baumann, Ricoeur and the Hermeneutics of Suspicion (London: Continuum, 2009). Anthony C. Thiselton, New Horizons in Hermeneutics: The Theory and Practice of Transforming Biblical Reading (Grand Rapids, MI: Zondervan, 1998), 344–51, is unusually sound in its contextualization of the phrase.
52. Felski, The Limits of Critique, 31.
53. Scott-Baumann, Ricoeur and the Hermeneutics of Suspicion, 63; Don Ihde, Hermeneutic Phenomenology: The Philosophy of Paul Ricoeur (Evanston, IL: Northwestern University Press, 1980).
54. Paul Ricoeur, “Biblical Hermeneutics,” Semeia 4 (1975): 29–148; Paul Ricoeur, The Rule of Metaphor: Multi-disciplinary Studies of the Creation of Meaning in Language, trans. R. Czerny (Toronto: University of Toronto Press, 1975), 285; Charles E. Reagan, “Interview with Paul Ricoeur (Recorded 1982),” in Paul Ricoeur: His Life and Work (Chicago: University of Chicago Press, 1996), 105; I owe all this research to Scott-Baumann, Ricoeur and the Hermeneutics of Suspicion, 63–67.
55. Scott-Baumann, 66.
56. The phrases “school of suspicion,” “tactic of suspicion,” “exercise of suspicion,” and “masters of suspicion” appear in Ricoeur, Freud and Philosophy, 37, 39, 43, 44, 75.
57. Felski, The Limits of Critique, 3.
58. Felski, 30.
59. Culler, “The Closeness of Close Reading,” 22.
60. For just one example essay on the types of new questions that are being raised by close reading in a digital environment, see David Ciccoricco, “The Materialities of Close Reading: 1942, 1959, 2009,” Digital Humanities Quarterly 6, no. 1 (2012): www.digitalhumanities.org/dhq/vol/6/1/000113/000113.html.
61. Dimock, Through Other Continents, 79.
62. John Burrows, Computation into Criticism: A Study of Jane Austen’s Novels and an Experiment in Method (London: Clarendon, 1987).
63. My thanks to Ted Underwood for pointing this out. See Eviatar Zerubavel, The Clockwork Muse: A Practical Guide to Writing Theses, Dissertations, and Books (Cambridge, MA: Harvard University Press, 1999); and Mark Olsen, “Signs, Symbols and Discourses: A New Direction for Computer-Aided Literature Studies,” Computers and the Humanities 27, no. 5/6 (1993): 309–14.
64. Catherine Nicholson, “Algorithm and Analogy: Distant Reading in 1598,” PMLA 132, no. 3 (2017): 643–50.
65. Lisa Pearl, Kristine Lu, and Anousheh Haghighi, “The Character in the Letter: Epistolary Attribution in Samuel Richardson’s Clarissa,” Digital Scholarship in the Humanities 32, no. 1 (2016): 123–40, https://doi.org/10.1093/llc/fqw007; Alexander A. G. Gladwin, Matthew J. Lavin, and Daniel M. Look, “Stylometry and Collaborative Authorship: Eddy, Lovecraft, and ‘The Loved Dead,’” Digital Scholarship in the Humanities 32, no. 1 (2017): 123–40, https://doi.org/10.1093/llc/fqv026.
66. Miyuki Yamada, Yuichi Murai, and Ichiro Kumagai, “Story Visualization of Novels with Multi-theme Keyword Density Analysis,” Journal of Visualization 16, no. 3 (2013): 247–57, https://doi.org/10.1007/s12650-013-0163-4.
67. Stefanie Posavec, “Writing Without Words,” 2009, www.stefanieposavec.com/writing-without-words; Jeff Clark, “Novel Views: Les Miserables,” Neoformix, 2013, http://neoformix.com/2013/NovelViews.html.
68. Pressman, Digital Modernism, 156.
69. Hayles, Writing Machines; Hayles, How We Think; see also Jessica Pressman, “House of Leaves: Reading the Networked Novel,” Studies in American Fiction 34, no. 1 (2006): 107–28, https://doi.org/10.1353/saf.2006.0015.
70. Zara Dinnen, The Digital Banal: New Media and American Literature and Culture (New York: Columbia University Press, 2018).
71. Even so, there are questions as to whether digital humanities is actually a field. See Alan Liu, “Is Digital Humanities a Field?—An Answer from the Point of View of Language,” Journal of Siberian Federal University, Humanities and Social Sciences 7 (2013): 1546–52.
72. Alan Liu, The Laws of Cool: Knowledge Work and the Culture of Information (Chicago: University of Chicago Press, 2004), 147.
73. David Hoover, “The End of the Irrelevant Text: Electronic Texts, Linguistics, and Literary Theory,” Digital Humanities Quarterly 1, no. 2 (2007): para. 4, www.digitalhumanities.org/dhq/vol/1/2/000012/000012.html.
74. Stephen Ramsay, “Toward an Algorithmic Criticism,” in A Companion to Digital Literary Studies, ed. Ray Siemens and Susan Schreibman (New York: Wiley-Blackwell, 2013), 477–78.
75. Cosima Mattner and Ted Underwood, “They Have Completely Changed My Understanding of Literary History,” Textpraxis 14, no. 3 (2017): www.uni-muenster.de/Textpraxis/en/cosima-mattner-they-have-completely-changed-my-understanding-of-literary-history.
76. Andrew Jewell and Brian L. Pytlik Zillig, “‘Counted Out at Last’: Text Analysis on the Willa Cather Archive,” in American Literature Scholar in the Digital Age, ed. Amy E. Earhart and Andrew Jewell (Ann Arbor: University of Michigan Press, 2011), 184, https://doi.org/10.3998/etlc.9362034.0001.001.
77. Jewell and Pytlik Zillig, 181. They offer as an example Tanya E. Clement, “‘A Thing Not Beginning and Not Ending’: Using Digital Tools to Distant-Read Gertrude Stein’s The Making of Americans,” Literary and Linguistic Computing 23, no. 3 (2008): 361–81, https://doi.org/10.1093/llc/fqn020. That said, Earhart and Jewell’s American Literature Scholar in the Digital Age contains many good examples of close, textual, digital practices.
78. Perhaps the work in the field of digital humanities that comes closest to what I do in this book is Johanna Drucker, SpecLab: Digital Aesthetics and Projects in Speculative Computing (Chicago: University of Chicago Press, 2009), which insists on the need for experimental practices in the humanities while also aiming at technologies, such as the “’Patacritical Demon,” that could unveil how our interpretative practices work. I also do not intend to rehash a series of debates about the “digital humanities.” For more on this see, among many others, Matthew K. Gold and Lauren F. Klein, eds., Debates in the Digital Humanities 2016 (Minneapolis: University of Minnesota Press, 2016); Susan Schreibman, Raymond George Siemens, and John Unsworth, eds., A New Companion to Digital Humanities (Chichester: John Wiley and Sons, 2016); Ray Siemens and Susan Schreibman, eds., A Companion to Digital Literary Studies (New York: Wiley-Blackwell, 2013); Ramsay, Reading Machines; Adam Koehler, Composition, Creative Writing Studies and the Digital Humanities (London: Bloomsbury, 2017); David M. Berry and Anders Fagerjord, Digital Humanities: Knowledge and Critique in a Digital Age (Cambridge: Polity, 2017); Julianne Nyhan and Andrew Flinn, Computation and the Humanities: Towards an Oral History of Digital Humanities (London: Springer, 2016); and Anne Burdick et al., Digital Humanities (Cambridge, MA: MIT Press, 2012).
79. Alan Liu, “The State of the Digital Humanities: A Report and a Critique,” Arts and Humanities in Higher Education 11, no. 1–2 (2012): 8–41, https://doi.org/10.1177/1474022211427364; Tanya E. Clement, “Text Analysis, Data Mining, and Visualizations in Literary Scholarship,” MLA Commons, Literary Studies in the Digital Age: An Evolving Anthology (New York: Modern Language Association, 2013): https://dlsanthology.mla.hcommons.org/text-analysis-data-mining-and-visualizations-in-literary-scholarship.
80. Theodor W. Adorno, Negative Dialectics, trans. E. B. Ashton (London: Routledge, 1973), 5.
81. For more on literature and singularity see Derek Attridge, The Singularity of Literature (London: Routledge, 2004).
82. Alison Booth, “Mid-range Reading: Not a Manifesto,” PMLA 132, no. 3 (2017): 620.
83. Cloud Atlas won the British Book Awards Literary Fiction Award and the Richard & Judy Book of the Year award. The novel was short-listed for the 2004 Booker Prize, the Nebula Award, and the Arthur C. Clarke Award. A film adaptation, released in 2012, brought the book further widespread recognition.
85. Kristian Shaw, “‘Some Magic Is Normality’: Fantastical Cosmopolitanism in David Mitchell’s The Bone Clocks,” C21 Literature: Journal of 21st Century Writings 6, no. 3 (2018): https://doi.org/10.16995/c21.52.
87. Fredric Jameson, “The Historical Novel Today, or, Is It Still Possible?” in The Antinomies of Realism (London: Verso, 2013), 303.
88. Rose Harris-Birtill, “‘Looking Down Time’s Telescope at Myself’: Reincarnation and Global Futures in David Mitchell’s Fictional Worlds,” KronoScope: The Journal for the Study of Time 17, no. 2 (2017): 163–81, https://doi.org/10.1163/15685241-12341382.
89. For just a small selection see most of the essays in Sarah Dillon, ed., David Mitchell: Critical Essays (Canterbury: Gylphi, 2011), esp. Courtney Hopf, “The Stories We Tell: Discursive Identity Through Narrative Form in Cloud Atlas” (105–26); Patrick O’Donnell, A Temporary Future: The Fiction of David Mitchell (New York: Bloomsbury Academic, 2015); and Scott Dimovitz, “The Sound of Silence: Eschatology and the Limits of the Word in David Mitchell’s Cloud Atlas,” SubStance 44, no. 1 (2015): 71–91, https://doi.org/10.1353/sub.2015.0009.
90. Peter Childs and James Green, “The Novels in Nine Parts,” in David Mitchell: Critical Essays, ed. Sarah Dillon (Canterbury: Gylphi, 2011), 33–34.
92. Stuart Jeffries, “David Mitchell: ‘I Don’t Want to Project Myself as This Great Experimenter,’” The Guardian, Feb. 8, 2013, www.guardian.co.uk/books/2013/feb/08/david-mitchell-project-great-experimenter.
93. But see Chapter 3, below.
95. Book World, “Q&A: Book World Talks with David Mitchell,” Washington Post, August 22, 2004, www.washingtonpost.com/wp-dyn/articles/A17231-2004Aug19.html; David Mitchell, “Guardian Book Club: Cloud Atlas by David Mitchell,” The Guardian, June 12, 2010, www.theguardian.com/books/2010/jun/12/book-club-mitchell-cloud-atlas; Adam Begley, “David Mitchell, The Art of Fiction No. 204,” Paris Review, Summer 2010, www.theparisreview.org/interviews/6034/the-art-of-fiction-no-204-david-mitchell; Martin Paul Eve, “‘some kind of thing it aint us but yet its in us’: David Mitchell, Russell Hoban, and Metafiction After the Millennium,” SAGE Open 4, no. 1 (2014): 5–6, https://doi.org/10.1177/2158244014521636.
96. Martin Paul Eve, “‘You Will See the Logic of the Design of This’: From Historiography to Taxonomography in the Contemporary Metafiction of Sarah Waters’s Affinity,” Neo-Victorian Studies 6, no. 1 (2013): 107.
97. John Mullan, “Cloud Atlas: The Multi-genre Novel,” The Guardian, March 26, 2005, sec. Books, www.theguardian.com/books/2005/mar/26/fiction.davidmitchell.
98. There are also other recent examples of form-jumping docufiction, such as Zinzi Clemmons, What We Lose (London: Fourth Estate, 2017); Mark Blacklock, I’m Jack (London: Granta, 2015); and narratives with two converging strands, such as Nicola Barker, H(A)PPY (London: William Heinemann, 2017).
99. A brilliant genealogy of distant reading, as a whole, is given by Underwood—a genealogy that also notes the importance of feminist literary sociology to its emergence, thereby providing an even more problematic space for my work here. See Ted Underwood, “A Genealogy of Distant Reading,” Digital Humanities Quarterly 11, no. 2 (2017): www.digitalhumanities.org/dhq/vol/11/2/000317/000317.html.
100. David Mitchell, Cloud Atlas (London: Sceptre, 2004), 528 (hereafter Cloud Atlas P); David Mitchell, Cloud Atlas (New York: Random House, 2004), 508 (hereafter Cloud Atlas E).
101. Yet, as Dennis Tenen has recently pointed out, “plain text” turns out to be less than vanilla. See Dennis Tenen, Plain Text: The Poetics of Computation (Stanford, CA: Stanford University Press, 2017), 1–7.
102. Comparable legal provisions exist in the United States under the Digital Millennium Copyright Act.
103. See Government of the United Kingdom, “Complaints to Secretary of State Under s.296ZE Under the Copyright, Designs and Patents Act 1988,” August 15, 2014, www.gov.uk/government/publications/complaints-to-secretary-of-state-under-s296ze-under-the-copyright-designs-and-patents-act-1988. The document shows zero successful complaints under this protocol as of 2014.
105. Theodor W. Adorno, Aesthetic Theory, trans. C. Lenhardt (London: Routledge, 1984), 447.
106. Ian Hacking, “Do We See Through a Microscope?” Pacific Philosophical Quarterly 62, no. 4 (1981): 321, https://doi.org/10.1111/j.1468-0114.1981.tb00070.x.
107. For more on interventionist epistemologies and the digital humanities see Berry and Fagerjord, Digital Humanities, 33.
108. Matthew L. Jockers, Text Analysis with R for Students of Literature (New York: Springer, 2014).
110. Caroline Edwards, “‘Strange Transactions’: Utopia, Transmigration and Time in Ghostwritten and Cloud Atlas,” in David Mitchell: Critical Essays, ed. Sarah Dillon (Canterbury: Gylphi, 2011), 178–200; Hopf, “The Stories We Tell”; O’Donnell, A Temporary Future.
111. W. Elliot and R. J. Valenza, “So Many Hardballs, So Few over the Plate,” Computers and the Humanities 36, no. 4 (2002): 455.