The Gist of Reading
Andrew Elfenbein


Chapter 1

Doing What Comes Automatically

Evolution hard-wired us to do remarkable things, and reading is not one of them. Reading arrived long after we evolved into our current form. We might have wish lists of what our brains and bodies could be if only we had evolved to read: extra hands for holding books, perfect memories, concentration less subject to distraction, eyes that could take in a larger visual field, maybe a built-in night light. But it was not to be, and we are stuck with what we have. Hands and eyes tire quickly, linguistic abilities are easily confused, memories are imperfect, attention drifts from the text at hand, and brains are happy with minimal effort.

For reading to occur, we rewire the brain from what it was built to do to what we want it to do, and we face an uphill battle. As Stanislas Dehaene puts it, “During education, reading processes must invade and ‘recycle’ cortical space devoted to evolutionarily older functions.”1 Such recycling is not in itself unusual. We often ask the brain to do what it did not evolve to do: play chess, drive a car, bake cookies. Reading stands out, though, for the density of what has to happen almost simultaneously: moving eyes to perceive symbols, assembling symbols into words, parsing words as sentences, translating sentences into a mental language, creating a mental model of what has been read, supplementing it with inferences drawn from semantic memory (typically, general factual knowledge) and episodic memory (memory for events that we have seen or experienced), finding appropriate emotional reactions to that model, reasoning or making decisions about what has been read, and much more.2 As I learn about the juggling act of reading, difficulties surprise me less than successes, especially at an early age.

The sheer improbability of ever getting reading right may explain part of its outsized role in some religious practices meant to provide meaning for human life: the ability to read taps into human potential in ways not obvious on the surface. Its importance comes from the centrality of scriptures in some religions for transmitting belief, so that reading becomes inseparable from religious experience. Secularization has had only a partial effect on dissociating reading from religion because the step from valuing reading holy books to valuing reading per se is small. Even for secular readers, reading books, at least in some circumstances, sounds like a religious experience: “Ideally, we lose ourselves in what we read, only to return to ourselves, transformed and part of a more expansive world—in short, we become more critical and more capacious in our thinking and our acting.”3 The writer (Judith Butler) secularizes the familiar Christian notion of losing oneself to gain oneself, or dying to live, and applies it not to transcendent salvation but to reading.

While reading may, at times, be such a transformative process, it is many other things first. Above all, it is something we do often. If we read well, we do so less because we are smart or motivated than because we have read over and over again. Although a print-filled environment is recent in human history, for those who live surrounded by words, to see the world is to read. Places with nothing to read may require effort to reach, and once we get to them, we (or at least I) need all our determination not to look at text. More typically, ambient text surrounds us: the eye cannot choose but read.4

So practiced is reading that most of us read automatically. I do not mean that we involuntarily drift through The Magic Mountain but that, if you are an experienced reader of English, you would find it impossible to look at a common word projected on a screen, such as blue, and not read it within milliseconds. Certain conditions could interfere with the process: lack of light, small type, unfamiliar font, tired eyes. But under average operating conditions you will read a familiar word quickly because you have read it many times before. Not only has word recognition become automatic, but so has sentence parsing. If I flashed on a screen, one word at a time, the word list the, dog, ran, to, the, ball, you would process it differently from the word list five, over, blue, quickly, truth. With the first list, if you are an experienced reader, you will assign the words different syntactic roles as part of a sentence; the second you will treat as just a list. Syntactic parsing is so automatic that grammatically correct but deliberately tricky sentences, such as “The old man the boats,” feel irritating, as if they hardly merit the extra effort needed for them to make sense.

In using the term automatic, I am borrowing a technical, and controversial, term from psychology.5 The psychological model of automaticity that works best for findings about reading is one that Agnes Moors refers to as the “triple-mode view.” This model describes three modes of processing:

  1. Nonautomatic processing: attentive and effortful processing over which the subject has conscious control;
  2. Bottom-up automatic processing: unconscious, fast, passive processing that is inaccessible to consciousness and happens immediately after the presentation of a stimulus (in this case, text);
  3. Top-down automatic processing: processes that have become automatic as a result of training and repetitive practice. As Moors notes, such processes “are usually unconscious, but they are not inaccessible to consciousness. They can become conscious when attention is directed to them.”6

Bottom-up automatic processes in reading, such as visual perception and associative memory searches, do not belong exclusively to reading: they are so fundamental as to be building blocks of cognition. They happen at such a low level that literary scholars are hardly aware they exist—although, as I will argue, they can have interesting effects. To be considered automatic, a mental process must involve a lack of conscious effort, speed, autonomy (once started, it cannot be stopped), minimal demand on processing resources, and inaccessibility to consciousness (you are not aware of the work). Top-down automatic processes, in contrast, may or may not be specific to reading, and, as Moors points out, may become accessible to consciousness. Some of these may include decoding (translating graphemes into words), parsing (making syntactic sense), comprehending (understanding the meaning of what is read), and situation model building (integrating what has been read with general world knowledge, cognitive and emotional inferences, predictions, and evaluations). A good analogy is language production: for the most part we produce responses in conversation without much conscious awareness. Yet we recognize situations when, for various reasons, we have to choose our words carefully. Reading is similar: having done it so often, we have a good set of tools for reading with minimal conscious effort. Yet every so often, those tools do not work quite as well as we want them to, so we give reading extra attention.

I focus here on automatic processes because they are least familiar to literary scholars, even though reading would be impossible without them. My goal is to show just how consequential they can be. To do so, I will present some psychological experiments in detail—maybe, for some, too much detail. Yet I want to acknowledge the practices that give rise to psychological claims rather than presenting experimental conclusions as truth. For example, in a form of what is known as masked semantic priming, participants are presented, in order, with (1) a blank screen; (2) a screen with a neutral display, such as “######”; (3) a prime, such as the word doctor; (4) the neutral display again; and (5) a target word, such as nurse. (While there are many variants of such priming, they share the research design of a prime word that has either a stronger or a weaker relation to a target.) Participants are asked to react to the target, sometimes by reading it aloud and sometimes by making a judgment about it, such as whether or not it is a real word. When the prime is related to the target word (semantically, morphologically, or orthographically), participants are faster to respond to it than if the prime does not match the target.7
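For readers who grasp a procedure more easily in code, the logic of such a trial can be caricatured in a few lines. This is a toy sketch, not a real experimental script: the baseline response time, the word pairs, and the 40-millisecond facilitation effect are invented for illustration.

```python
# Toy sketch of a masked-priming trial. All numbers and word pairs
# are invented for illustration only.
BASE_RT = 500  # baseline ms to judge an unprimed target word

RELATED_PAIRS = {("doctor", "nurse"), ("bread", "butter")}

def lexical_decision_rt(prime, target):
    # A related prime leaves residual activation on the target,
    # speeding the response even though the prime itself appeared
    # too briefly (on the order of 43 ms) for conscious awareness.
    facilitation = 40 if (prime, target) in RELATED_PAIRS else 0
    return BASE_RT - facilitation

print(lexical_decision_rt("doctor", "nurse"))   # related prime: 460
print(lexical_decision_rt("window", "nurse"))   # unrelated prime: 500
```

The sketch captures only the shape of the design: the measured quantity is a response time, and the manipulation is the relation between an unseen prime and a visible target.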

This finding is not especially surprising. It seems intuitively obvious that it would be easier to read a word in capitals, for example, if you have previously seen it in lower-case letters. The surprise is that the prime is onscreen for a breathtakingly short time, as little as 43 ms (less than a twentieth of a second).8 As such, it is barely perceptible, if perceptible at all; participants have no conscious awareness of having seen it. Yet an abundance of evidence reveals that it nevertheless changes their response to the subsequent target. The fast appearance of a lexical prime produces measurable effects: we read so quickly that we can read almost without reading. It has long been known that readers skip words while reading, but masked semantic priming demonstrates that reading can occur even with minimal stimulus.

Such rapidity scales up from the word to the sentence. I have already noted that syntactic parsing happens automatically. This parsing can be measured by comprehension. Given the sentence “The child went to the store,” you could answer correctly a question like, “Where did the child go?” If asked, “What is the name of the child?,” you could recognize that this information is not in the sentence. You did more than just read one word at a time: you comprehended the sentence. Of course, if you are distracted, it is possible to read without comprehending. For example, in a famous moment in Bleak House Esther receives a newspaper and comments, “I read the words in the newspaper without knowing what they meant and found myself reading the same words repeatedly.”9 In psychological terms Esther is decoding without comprehending: she can decode even though she gets no meaning from what she has decoded. Yet experiments like the ones with masked priming suggest that, even when we hardly attend to what we read, reading still has effects not accessible to awareness. If we were to indulge the fantasy of Esther as a participant in an experiment, she would probably have speeded recognition of the material in the newspaper, even if she found herself unable to remember it.

Automatic is an adjective that the humanities love to hate because it keeps company with other disreputable words like routine or stereotyped. At least as far back as Russian formalism, literary criticism has valued the creative imagination for disrupting mental grooves that have become automatic.10 Yet in psychological terms even the ability to disrupt routinized ways of seeing the world depends on automatic processes. A thin layer of disruption perches on a vast bedrock of automaticity. As Stephanie A. Lai et al. note, “Lack of automaticity at a lower level of processing (e.g., letter level or word level) can impede the rate of higher level processing (e.g., sentence level or text level).”11

The value of automaticity arises from a key fact about the brain: cognitive resources are limited. We can think only so much at once, so automaticity is our workaround. While limitations on cognitive resources characterize not only reading but all mental work, they matter for reading because, unless lower-level processes are automatized, they may use up the brain’s cognitive energy. Over time and with much practice, processes become automatic. Tedious as acquiring automaticity may be, it has a big payoff: it allows us to work more efficiently despite the brain’s limitations. Activities that once took considerable cognitive resources no longer do. They have become effortless, although always liable to disruption under special circumstances. This acquired effortlessness frees up resources for what psychologists call controlled processing, which can enable the slow, painstaking interpretation that literary critics prize.12

While automaticity enables complex processing, it also has a possible downside. Depending on the situation, it can allow reading to go on autopilot. Readers may do just enough work to reach what feels like a satisfactory level of comprehension, a phenomenon that psychologists call “good-enough processing.”13 For the average reading experience, good-enough processing is effective for preventing the brain from being overburdened. Yet it can have some strange effects. For example, participants in an experiment by Barton and Sanford read the following passage:

There was a tourist flight traveling from Vienna to Barcelona. On the last leg of the journey, it developed engine trouble. Over the Pyrenees, the pilot started to lose control. The plane eventually crashed right on the border. Wreckage was equally strewn in France and Spain. The authorities were trying to decide where to bury the survivors.14

Readers were asked, “What should the authorities do?” and a majority (59 percent) did not notice that the passage contained a trick: survivors do not need to be buried. The general situation described made sense to these readers, and they assumed that the question would be relevant to what they had read. The combination of habit, easily accessible background knowledge, and pragmatic assumptions about relevance was enough to override the actual words on the page. Even though readers physically perceived “survivors,” they perceived it without perceiving it: its meaning did not become part of their mental representation. Their mistake was a result of the good-enough processing that characterizes much everyday reading.

The participants in this experiment were undergraduate students, and one might hope that academics would be less susceptible to such errors. But they, too, are capable of minimal processing, not only in reading but also in the even more effortful activity of writing. For example, here is a series of excerpts from academic book reviews:

This thoroughly readable handbook fills a much-needed gap in the public health nurse’s instruction.

Concise and lucid, this volume fills a much-needed gap in the literature.

This special supplement of Public Health Reports fills a much-needed gap in research on oral health care for people living with human immunodeficiency virus (HIV).15

In each case the author uses the phrase “fills a much-needed gap” as praise, indicating that there has been a gap in existing knowledge, and the book under review fills it. Yet that is not what “fills a much-needed gap” means: if the gap is much needed, it is not a good idea to fill it. Nevertheless, the phrase has become a formula in academic reviewing. As of this writing, JSTOR lists more than one hundred uses in reviews, from 1929 to 2015, and most (though not all) treat it as praise. The collocation of fills, much-needed, and gap makes it seem that the phrase means what authors want it to mean. It has the right words, and it does not seem to matter that they are not in the right places. Its familiarity as a formula may further encourage minimal processing. Authors have seen or used it before, so it must make sense.

In explaining good-enough processing, Hossein Karimi and Fernanda Ferreira posit that linguistic comprehension strives to reach cognitive equilibrium. Two potential routes for reaching it work in parallel: a heuristic route, guided by existing semantic knowledge, which “can output a quick overall representation of the information currently under processing” by applying rough rules of thumb; and an algorithmic route, guided by “strict and clear syntactic algorithms to compute precise representations for the given linguistic input.”16 Both systems might be understood as racing against each other to see which can reach cognitive equilibrium first, at which point both systems stop and move on to the next piece of language. If, in language processing, the heuristic route produces cognitive equilibrium first, then the reader may be satisfied, even if, as I have demonstrated, the cost is not quite grasping what has been read. I write “may be satisfied” because some readers, depending on their goals, may not be; not everyone thinks that survivors need burial. Yet good-enough processing is reading’s default setting, a way of comprehending that usually works, though it is capable of changing for specific occasions.
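Karimi and Ferreira’s picture of two routes racing to equilibrium can be caricatured as a toy program. Everything here is invented for illustration: the “time costs” stand in for processing effort, and whichever route finishes first supplies the representation the reader keeps.

```python
def heuristic_route(sentence):
    # Rough semantic rules of thumb: fast, but delivers only a gist.
    return "gist", 1   # (representation, invented time cost)

def algorithmic_route(sentence):
    # Full syntactic computation: precise, but slow.
    return "precise parse", 5

def comprehend(sentence):
    # Both routes run in parallel; the first to reach cognitive
    # equilibrium stops the race and fixes the representation.
    gist, t_heuristic = heuristic_route(sentence)
    parse, t_algorithmic = algorithmic_route(sentence)
    return gist if t_heuristic <= t_algorithmic else parse

print(comprehend("Where should the authorities bury the survivors?"))
# The heuristic route wins, so the reader settles for the gist and
# may never notice that survivors need no burial.
```

The sketch makes the trade-off visible: nothing in the race rewards accuracy, only speed to a representation that feels complete.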

As an academic trained to value labor-intensive, time-consuming reading, I can be frustrated by how shallow earlier readers can sometimes seem, at least on the evidence of their surviving accounts. I want to say, “This is all they bothered to record?” Yet what seems to me like shallowness is only a version of what many readers do all the time. Given the circumstances that gave rise to their reading in the first place, they had become experts, through practice, at gauging just how much effort their reading needed, and they put in the right amount for their circumstances. It’s not that readers are incapable of perceiving what I regard as deeper, more complex layers in a work; rather, they often have no reason to do so, and thus do not use strategies that might lead to such perceptions.

In terms of my earlier distinctions regarding automaticity, good-enough processing fits the category of top-down automaticity, one that is learned and always potentially subject to interruption. Yet the other category of automaticity, bottom-up automaticity, also has unexpected effects on reading. In a classic experiment D. A. Swinney gave participants a passage like one of the following:

A. Rumor had it that, for years, the government building had been plagued with problems. The man was not surprised when he found several bugs* in the corner of his room. (ambiguous word, no context)

B. Rumor had it that, for years, the government building had been plagued with problems. The man was not surprised when he found several spiders, roaches, and other bugs* in the corner of his room. (ambiguous word, with disambiguating context)

Out of context, bugs can mean many things: “a name given to various insects,” “diseases,” “defects in a machine, plan, or the like,” or “concealed microphones.” Passage A is written to make two of these meanings, “insects” and “concealed microphones,” relevant. Passage B adds four words, “spiders, roaches, and other,” to the sentence to disambiguate it, so that only the “insects” meaning is relevant.

Swinney was interested in how readers would access lexical meaning. His hypothesis seems obvious: it should be harder to access both meanings of bugs in Passage B than in Passage A because Passage B disambiguates the word. To test the hypothesis, Swinney used a common methodology in experiments, a lexical decision task. Participants listened to the passages until they came to bugs. Then, a word appeared on a computer screen, and their task was to decide whether it was a real word or nonsense. Swinney provided four possible words (called “probes”) that participants could see: ant, spy, sew, and a nonsense word. In general, participants complete a lexical decision task more quickly if a probe has already been activated in memory. So, for Passage A the assumption would be that responses to ant and spy should be faster than responses to sew because bugs, with its ambiguous meanings, should have activated meanings related to insects (ant) and to concealed microphones (spy). For Passage B the assumption was that, after bugs had been disambiguated, only the meaning related to insects (ant) should be activated, so only that meaning should have a faster response.

It’s important not to get too hung up on the details of a particular example: Swinney gave his participants thirty-six different passages, all following the same format as above. What matters are his results. The average results for passages like Passage A were exactly as expected: participants performed the lexical decision task more quickly when they saw probes related to different possible meanings of a word like bugs than they did when the probes were words unrelated to “bugs” or nonsense words. The surprising finding came from the average results for responses to passages like Passage B, which disambiguated bugs. As expected, participants quickly responded to ant after reading bugs. But they responded almost as quickly to spy, even after the passage made it clear that the “concealed microphone” meaning was not relevant. (As in Passage A, the unrelated word and the nonsense word received slower responses.)

Here is the point: both possible meanings of bugs were active in readers’ minds even after the word had been disambiguated.17 This is a strange finding, though one that has been replicated many times, including replications where the participants read the ambiguous sentences rather than just hearing them.18 Just reading a word with multiple meanings is enough to activate those meanings, even if sentence context makes unmistakable that only one meaning is relevant. Admittedly, such activation does not last long. Subsequent research manipulated how long after the ambiguous word the probe word appeared (in technical terms, the “stimulus onset asynchrony”). If the probe appeared immediately after the ambiguous word, both possible meanings of the word were activated. But if the probe appeared after even a short delay, readers responded more quickly only to the contextually appropriate word.19

When a reader confronts an ambiguous word, two actions occur: a spread of activation to all possible semantic associations of that word, followed by a rapid inhibition of contextually inappropriate meanings and a narrowing to the most appropriate meaning.20 Psychologists argue about exactly how this activation and inhibition happen, but most of their models have common features. Word meanings are stored in memory not as isolated monads but as a network of associations. All meanings have relationships of different strengths to other meanings; these strengths are created by co-occurrence and are constantly shifting. The more strongly a word meaning becomes associated with one set of other meanings, the weaker its associations become to other possible meanings. When you read a word, the most strongly activated meanings are those that, in your experience, have been most associated with that word.21
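A toy simulation can make this two-step sequence concrete. The association strengths below are invented; real models estimate them from co-occurrence statistics, but the shape of the process, passive spread followed by contextual inhibition, is the same.

```python
# Invented association strengths for the ambiguous word "bugs".
ASSOCIATIONS = {
    "bugs": {"insect": 0.6, "microphone": 0.3, "defect": 0.1},
}

def read_word(word, context_meaning, steps):
    # Step 0: activation spreads passively to every stored
    # association, relevant or not.
    activation = dict(ASSOCIATIONS[word])
    # Subsequent steps: context reinforces the appropriate meaning
    # and inhibits the rest, rapidly narrowing the field.
    for _ in range(steps):
        for meaning in activation:
            if meaning == context_meaning:
                activation[meaning] *= 1.5   # reinforcement
            else:
                activation[meaning] *= 0.4   # inhibition
    return activation

print(read_word("bugs", "insect", steps=0))  # all meanings active
print(read_word("bugs", "insect", steps=3))  # "insect" now dominates
```

Probing at step 0 corresponds to Swinney’s immediate probe, where both ant and spy are facilitated; probing after a few steps corresponds to the delayed probe, where only the contextually appropriate meaning survives.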

For a literary critic, what may be most striking about this research is its organization in polarities: a meaning is dominant or subordinate; a meaning is ambiguous or not; a possible meaning is inhibited, or it is not; cognitive equilibrium is attained, or it is not. Literary works, especially postromantic lyric poetry, live in spaces between those polarities, where it is not always so easy to tell what meaning is supposed to be dominant and just when (or if) a word has been disambiguated. To take an example from Dickinson:

Through the strait pass of suffering—
The Martyrs—even—trod.
Their feet—upon Temptation—
Their faces—upon God—22

If we imagine a reader coming to pass in the first line in the way that a reader comes to bugs in Swinney’s passages, multiple possible meanings might be activated, some more anachronistic than others: “a paper granting permission to travel,” “a successful grade,” “an abstention,” or “a passageway.” By the end of the sentence, the verb trod points to “passageway” as the meaning most reinforced by other words, but (in terms of the automatic processes of reading) this disambiguation happens long after the word appeared. Although psychological work accounts for inhibiting contextually wrong meanings, it takes so long in Dickinson for the contextually right meaning to appear that inhibition cannot work the way it might in the lab. In addition, strait for Dickinson means “narrow,” but some readers may assume it is just a misspelled “straight”; even readers who know better may not be able to suppress “unbending” as a meaning for strait, especially as it may seem contextually right. Dickinson’s poetry thrives by sustaining the lexical ambiguities that we usually inhibit, as responses to Swinney’s work demonstrate.

The second line is even more ambiguous because of Dickinson’s characteristic ellipsis; it could be parsed as “Even the martyrs trod [through the strait pass]” or “The martyrs trod evenly.” Trod helps to disambiguate pass in line 1 (as does “through the strait”), but even could be an adjective modifying “the martyrs,” or a reduced form of the adverb evenly, modifying trod. In the last two lines a word that is not usually ambiguous, upon, becomes so because parallel structure sets up what looks like a syntactic repetition. But the upon of “Their feet—upon Temptation” is not the upon of “Their faces—upon God”: the first upon means “on top of,” while the second means “toward.” Such distinctions are easy to see once the sentence is finished, but, while reading, both meanings may be activated. Syntactic parallelism complicates the context that would usually disambiguate the word by underscoring identity, not difference.

For skilled readers of poetry, what may happen to automatic processes when faced with passages like this one is the controlled inhibition of automatic inhibition. In Dickinson’s poetry and lyrics like hers, the factors that should enable inhibition are absent, muted, or delayed. This inhibition of inhibition, a cognitive double negative, prevents or at least slows the usual rapid narrowing of meaning. What happens next depends on the reader. Swinney’s experiments and others like them do not show that readers consciously become aware of all the possible word meanings. Activated meanings become explicit only after they have gained enough strength, from successive activations, to rise above a critical threshold. Nothing guarantees that a reader would be aware of the many possible meanings of pass that I listed above, for example. Some readers, faced with Dickinson-like ambiguity, might fall back on good-enough processing by piecing together a bare sense, regardless of syntactic or semantic complexities (e.g., “martyrs are walking”). Cognitive equilibrium for such readers may be satisfied with a fuzzier level of comprehension than would be appropriate for other genres.

Others might wrestle with the difficulties and “solve” them by arriving at what they perceive to be a dominant meaning. Still others, especially those with academic expertise in reading lyric poetry, might exploit the spread of activation to multiple meanings and try to capture as many as possible, though such a process would be effortful and time-consuming. The inhibition of inhibition makes available a potential richness of semantic meaning that ordinarily would be just confusing, but readers do not necessarily respond to it explicitly. Yet even if they do not, they may nevertheless be aware of a felt difference in reading poetry, a perception of increased richness in semantic content that does not rise to paraphrasable meaning but lurks at the edge of consciousness as an awareness of language grown unexpectedly dense.

Swinney’s experiment focuses on single words, but bottom-up automatic processes can affect narrative comprehension also. Here I turn to the work of Ed O’Brien and his collaborators. They presented participants with stories like the following:

Introduction. Bill had always enjoyed walking in the early morning, and this morning was no exception. During his walks, he would stop to talk with some of his neighbors.

Consistent elaboration. Bill had just celebrated his twenty-fifth birthday. He felt he was in top condition, and he worked hard to maintain it. In fact, he began doing additional workouts before and after his walks. He could now complete a 3-mile run with hardly any effort.

Inconsistent elaboration. Bill had just celebrated his eighty-first birthday. He didn’t feel as strong as he was twenty years ago. In fact, Bill began using a cane as he hobbled along on his morning walks. He could not walk around the block without taking numerous breaks.

Filler. Today, Bill stopped to talk with Mrs. Jones. They had been friends for quite some time. They were talking about how hot it had been. For the past three months there had been record-breaking high temperatures and no rain. Soon there would be mandatory water rationing. As Bill was talking to Mrs. Jones, he saw a young boy who was lying in the street hurt.

Target sentences. He quickly ran and picked the boy up. Bill carried the boy over to the curb.

Closing. While Bill helped the boy, Mrs. Jones ran into her house to call the boy’s mother and an ambulance. He kept the boy calm and still until help arrived.23

Participants read either the consistent or the inconsistent elaboration but not both; all participants read the introduction, filler, target sentences, and conclusion. In the consistent elaboration it comes as no surprise that Bill is able to run quickly to the boy in the street. In the inconsistent elaboration, Bill’s ability to run quickly to the boy counters the information we have been given about him. O’Brien et al. found that readers who read the inconsistent elaboration slowed down when they came to the target sentences, and the researchers concluded that this slowdown proved that readers were keeping track of Bill as a character. Knowing that he was old and infirm made it difficult to integrate the information that he ran quickly to help the boy, and readers consequently slowed. Some literary critics might object that the text does not have to be as inconsistent as O’Brien and his collaborators claim: one could imagine that the emergency led Bill to overcome his frailty and manage a burst of speed. Yet even if we grant that option, creating such consistency still needs an extra step by the reader, an inference about Bill’s emergency abilities, that requires more reading time.

This finding was not quite as obvious as it might initially seem because of the presence of the filler passage. It guaranteed that the information about Bill’s physical condition was no longer in what psychologists call “working memory,” defined as “a temporary storage system under attentional control that underpins our capacity for complex thought.”24 Working memory capacity is not large, though it varies from individual to individual; as Marcel Just and Patricia Carpenter have demonstrated, differences in working memory capacity explain major differences in reader behavior with regard to syntactic processing and ambiguity resolution.25 O’Brien and his collaborators wrote their passages so that when readers reached the target sentences, Bill’s physical condition was no longer in focus and would have passed out of immediate working memory. Consequently, the slowdown had to arise from the reactivation of material in long-term memory. It cannot be taken for granted that readers will reactivate relevant textual material from long-term memory (they often do nothing of the sort), so the finding that they did so in the case of an inconsistency was important.26

Nevertheless, the finding still seems unexciting, an elaborate setup to prove common sense. But psychologists like to probe what seems obvious. In this case, what exactly does it mean that readers were “keeping track” of Bill? Presumably, it meant that they created a mental model of him as a character, and when they read text that was inconsistent with that model, they found it hard to integrate. So, O’Brien and his colleagues further probed how readers kept track of Bill by adding another condition, which they called the “qualified elaboration” condition. Participants in this condition read the same stories as above, except that instead of the consistent or inconsistent elaboration, they read passages like this:

Qualified elaboration. Bill had just celebrated his eighty-first birthday. He didn’t feel as strong as he was twenty years ago. In fact, Bill began using a cane as he hobbled along on his morning walks. He could not walk around the block without taking numerous breaks. Although he was old, he could still engage in feats of strength in emergency situations.

Much of this passage is identical to the inconsistent elaboration passage, in that it stresses Bill’s physical weakness. But in the last sentence, this information receives an important qualification: Bill can still move quickly when he has to do so. The inference described above now becomes explicit in the qualified elaboration condition. This information makes Bill’s ability to help the child no longer inconsistent, so readers who are keeping track of Bill now ought to have no trouble with the target sentences. The inconsistency that caused the slowdown has been eliminated. Indeed, the description of Bill’s ability in emergency situations might lead readers to expect that they will read about just such a situation, in which case they should read the target sentences with no slowdown.

But, as it turns out, readers slow down anyway after reading the qualified elaboration, though not as much as they did for the inconsistent elaboration. Admittedly, differences between the reading times for the target sentences in the consistent, inconsistent, and qualified conditions are small, under three hundred milliseconds. But with processes that happen quickly, such as spread of activation, small differences may have big implications, just as, in traditional close reading, tiny nuances in sound or word order are understood to carry major weight. In this case outdated information about Bill’s abilities continued to affect reading, possibly because the text did not encourage readers to outdate it completely. Just as in Swinney’s experiment, in which possible meanings of an ambiguous word were activated even after the word had been disambiguated, so in O’Brien’s stories (and the story about Bill was only one of many read by participants), material about a character that should no longer matter still slowed reading.

O’Brien and colleagues argue that such results arise from “memory-based processing,” which assumes that ordinary memory processes underlie reading comprehension. As Anne Cook and O’Brien note, “Substantial research has provided evidence for three critical characteristics of this activation process; it is passive, dumb, and unrestricted. It is passive in that it occurs without conscious or strategic effort on the part of the reader. It is dumb because information resonates (and is activated) simply on the basis of featural overlap, without regard to whether it is relevant or appropriate with respect to the current discourse model. Finally, the activation mechanism is unrestricted: The signal has the potential to contact related information from either the episodic representation of the text or general world knowledge.”27 In this case the memory processes they are describing start when readers read the target sentences, “He quickly ran and picked the boy up. Bill carried the boy over to the curb.” Reading information about Bill running would cause a passive activation of previous information about Bill. It is “dumb” because the information activated is not selective or controlled: it is based solely on featural overlap (how much what you are now reading resembles what you have read or what you know about already) between the description of Bill’s current physical state and his previous physical one. And it is “unrestricted” because it draws on information from the text (about Bill) and more general world information, such as knowing that people who are not in good physical shape will probably not run quickly.
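The "passive, dumb, and unrestricted" character of this resonance process can be caricatured in a few lines of code. The sketch below is my own toy illustration, not the authors' computational model: it activates stored memory traces purely in proportion to featural overlap with the sentence currently being read (crudely approximated here as shared content words), without any regard for whether a trace is still relevant.

```python
# Toy sketch of "dumb" memory-based resonance (my illustration, not
# O'Brien and colleagues' actual model): traces are activated solely
# by featural overlap with the current input, relevance be damned.

def features(text):
    """Represent a text crudely as its set of content words."""
    stopwords = {"the", "a", "an", "and", "was", "he", "his",
                 "to", "of", "with", "on", "in"}
    return {w.strip(".,").lower() for w in text.split()} - stopwords

def resonance(current, memory_traces):
    """Activate each stored trace in proportion to its featural
    overlap with the sentence currently being read."""
    cur = features(current)
    return {trace: len(cur & features(trace)) for trace in memory_traces}

traces = [
    "Bill hobbled along with a cane on his morning walks.",  # outdated
    "Bill saw a boy playing in the street.",
]
target = "Bill quickly ran and picked the boy up."
acts = resonance(target, traces)
# The outdated trace still receives activation (overlap via "Bill"),
# even though it is exactly the information that should no longer matter.
print(acts)
```

In this toy run, the outdated description of Bill's hobbling is still contacted by the target sentence simply because the two share a feature; that unselective contact is the kind of "dumb," unrestricted activation that slows integration.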

After this activation the next stage would be an “integration” stage in which readers would integrate the most strongly activated material into a developing memory representation.28 In the consistent condition such integration is easy, so readers do not slow down. In the inconsistent condition such integration is difficult, and readers do slow down. In the qualified condition such integration is more difficult than it should be. Even though, according to the narrative, readers have all the information they need to understand how Bill helped the boy, passive, dumb, and unrestricted ordinary memory processes activate outdated information about Bill. It slows down integration, though not as much as inconsistent information does.

Given that memory may retrieve irrelevant information, readers might constantly be overwhelmed by all the irrelevance. Yet boundary conditions govern the strength of activation: featural overlap; how much the existing material has been elaborated (it is easier to activate material discussed at length than material mentioned only in passing); how far away in time the activated material is from what is being read; and the causal relatedness of what is being read to what is activated in memory (the inconsistency effect vanishes if a story gives an explicit cause for inconsistent behavior).29

Why should literary scholars care about the finding that outdated textual information affects reading, beyond the explicit control of the reader? This finding bears on a traditional question in older literary criticism about narratives focused on character development. Since the criterion for demonstrating that a character was round as opposed to flat was the capacity for change, critics debated how much Emma Woodhouse or David Copperfield really changed. Findings from psychology suggest that the more relevant question is not whether characters can change, but whether readers can. If automatic processes continue to activate outdated information when we read, then no matter how hard a character may work to change, a reader may have trouble fully letting go of earlier phases of a character’s development. Those earlier phases may be especially prominent if they have received much elaboration, as they often do in the nineteenth-century realist tradition, and if the causes for the transformation are not compelling.

At times it seems as if authors count on the automatic durability of outdated information. Classic detective fiction, insofar as it elaborates red herring plots and often gives little space to the real solution of the crime, may cause readers to connect characters with guilt even after they are exonerated. Rachel Verinder in Wilkie Collins’s The Moonstone, for example, is associated with the theft of the diamond for so long that she never quite escapes blame. Once her ties to possible guilt have surfaced, readers’ bottom-up memory processes may reactivate them whenever she reappears. At the same time, as I have noted, various factors may weaken such associations, and activation does not necessarily equal awareness. It may be enough, though, for the automatic influence of outdated information to make it seem as if crime taints multiple characters, even if only one is ultimately guilty.30

An experimental variation by Jason Albrecht and Jerome Myers on the inconsistency effect further illuminates the role of automatic memory processes. The basic setup was simple: participants read a story in which Mary needed to make an airline reservation. In the “satisfied goal” version of the story, she made the reservation and then worked on creating an advertisement for her job; in the “unsatisfied goal” version of the story, she was about to make the reservation but suddenly had to work on the advertisement instead. After reading a filler passage about creating the advertisement, participants were tested on how long it took them to read target sentences near the end of the story: “She was tired and decided to go to bed. She put on pajamas and washed her face.” Participants in the “unsatisfied” goal version should take longer to read the target sentences if they realized that Mary was going to sleep without having made the reservation.

As it turned out, they did just that—but only in some cases. The intriguing twist in this experiment manipulated Mary’s setting. In all versions participants read that Mary, preparing to make her reservation, “sat down on her leather sofa and looked through the telephone book.” What differed was a later sentence at the end of the filler, right before the target sentences about Mary getting ready for bed. In one version the sentence read, “Exhausted, Mary sat down on the leather sofa for a moment”; in the other, “Exhausted, Mary sat down for a moment.”31 The first version repeated the phrase “leather sofa” from earlier in the story; the second did not. Participants in the unsatisfied goal condition slowed down on the target sentences only when they read the filler with the phrase “leather sofa.” Readers reinstated Mary’s unsatisfied goal only when the contextual cue “leather sofa” was repeated; without it readers did not notice any problems. Repeating a seemingly trivial detail, “leather sofa,” activated other earlier information.

For those interested in the nineteenth-century novel, this experiment is compelling for what it says about metonymy, which Roman Jakobson famously characterized as realism’s central figure: “Following the path of contiguous relationships, the Realist author metonymically digresses from the plot to the atmosphere and from the characters to the setting in space and time.”32 Whereas Jakobson emphasized the importance of metonymy for the author, Albrecht and Myers show its importance for readers. On the surface, “leather sofa” barely qualifies as a metonymy for Mary because its role in the story is so small; it looks trivial, exactly the kind of information that most readers would not recall if asked. Yet, whether readers are aware of it or not, “leather sofa” works as a metonymy for Mary because, unimportant though it may be, its mere reappearance can reinstate important information about Mary (whether or not she accomplished her goal).

Rich pile-ups of metonymic details in realistic novels do more than just create descriptive vividness. They enable a continual succession of complex memory cues. The point is not simply a repetition effect (readers recognize the same metonymy that they have seen before). Instead, a metonymy resonates with accumulated associations. These may facilitate readers’ quick access to intangible aspects of character that, on the surface, have little to do with the metonymy itself, just as “leather sofa” reinstated Mary’s goal of making an airline reservation without telling us much about Mary.

While automatic processes affect the reading of words, as in Swinney’s experiment, and stories, as in the inconsistency experiments, they also work at the sublexical level. Brooke Lea and his collaborators (of whom I was one) investigated automatic processing in poetry by examining alliteration: can repeating a phoneme activate earlier text? Lea et al. asked participants to read aloud blank verse passages like the following, adapted from William Carlos Williams’s “Spring and All”:

. . . Beyond, the
waste of broad, muddy fields
brown with dried weeds, standing and fallen

patches of standing water
the scattering of tall trees

Target Lines:
No alliteration: All along the creek-winding road, past Stuart’s barn,
Different alliteration: All along the raw and rutted road the reddish barn,
Same alliteration: All along the way-winding road, wary whispers of the old barn,
. . .

All about them
the cold, familiar wind—
Now the grass, tomorrow
the wooden willowy warp of wildcarrot ^ leaf {recognition probe: BARN}

Readers read only one version of the target line per poem but, over the course of the whole experiment, would have read an equal number of no alliteration, different alliteration, and same alliteration lines across all the poems. In this example the word that readers are asked to recognize is BARN, which appeared onscreen after the word wildcarrot in a line that alliterates prominently on /w/. The word barn appears in all the target lines, but the “no alliteration” line does not alliterate; the “different alliteration” line alliterates on /r/, not /w/; and only the “same alliteration” line alliterates on /w/, the same phoneme as the line containing the recognition probe task. A few important notes about this experiment: this example was only one of many texts that readers read, and they also read several filler texts that did not contain any alliteration to mask the purpose of the experiment. Moreover, the probe word did not always appear near the end of the line, as it does in the example. Poems were counterbalanced so that probes appeared equally often in the first, middle, and final thirds of the line; activation thus did not depend on where in the line the probes appeared.

Readers recognized that barn had appeared earlier in the poem more quickly when alliterations matched. Lea et al. reproduced these findings when participants read the poems silently, so effects did not depend on hearing, and also when the alliterating lines appeared in prose, so the effects did not depend on possible extra effort used to read poetry. Once again, low-level, automatic effects of which readers have no conscious awareness produced measurable differences in response. What is most striking about these findings is that the probe word (barn) did not share the alliteration on /w/ between the target line and the cue line. It’s easy to believe that alliteration helps memory because of a repeated sound. But Lea et al. showed that alliteration spreads activation to a bigger neighborhood of words.33 Alliteration functioned in a roughly analogous way to “leather sofa” in the metonymy experiment.

Lea et al. extended this investigation to look at the mnemonic effects of rhyme. Participants read poetry in rhyming couplets. As in the alliteration experiment, they stopped at a certain point to verify whether a particular word (the probe) had appeared earlier. The probe task appeared after the subjects had read a couplet rhyming on a particular phoneme. The varying conditions were a different-rhyme condition and a same-rhyme one. As in the alliteration experiment, participants recognized the probe word more quickly when it appeared in a couplet that rhymed on the same phoneme as the couplet in which the recognition task appeared.

In an interesting twist, Lea et al. varied where the probe appeared: before or after the rhyming word. Novice readers (undergraduates) recognized the probe word more quickly only when it appeared after the rhyming couplet and only if that couplet matched the rhyme of the earlier couplet with the probe word. Expert readers (MFA students, practicing poets, and rap artists), however, had a different result. They recognized the probe word more quickly when it appeared before or after the completion of the rhyming couplet. Anticipation of rhyme was enough to facilitate activation of previous words, even before the couplet was complete.34

Automatic memory processes thus matter even on a sublexical level. While this finding may seem interesting to psychologists who care about memory and language, how does it matter to literary scholars? As I have stressed, memory activation is necessary but not sufficient for awareness. It is tempting, but wrong, to believe that alliteration and rhyme make readers conscious of words in previously alliterating or rhyming lines. In the example above, readers would not become aware of barn unless they were probed for it. Moreover, conditions favoring activation in these experiments are odd: two alliterating lines in poems otherwise devoid of salient alliteration and, slightly less unusual, two couplets rhyming on the same phoneme that are close but not adjacent.

Nevertheless, the effects detected by Lea et al. are more suggestive about poetry as experience than about poetry as meaning. Increased activation from these schemes adds a potential layer of memory activity to ordinary reading. Such activity could create a heightened textural density in reading, a sensation that produces not paraphrasable meaning but a phenomenological feeling. In his famous essay on Tennyson, Arthur Henry Hallam describes this experience when he likens poetry to magic: “We are therefore decidedly of the opinion that the heights and depths of art are most within the reach of those who have received from Nature the ‘fearful and wonderful’ constitution we have described, whose poetry is a sort of magic, producing a number of impressions, too multiplied, too minute, and too diversified to allow of our tracing them to their causes, because just such was the effect, even so boundless and so bewildering, produced on their imaginations by the real appearance of Nature.”35 I am most struck by Hallam’s association of poetry’s magic with “impressions too multiplied, too minute, and too diversified to allow of our tracing them to their causes.” Although Hallam uses impressions, which has a technical history in British empiricism, and I use activation, which has an equally technical history in psychology, we are both battling the linearity of writing to capture the nonlinear, weblike experience of reading. This may be why poetry that avoids such devices can feel comparatively barren, however difficult its words or syntax may be. An implicit hum of mental activity fueled by automatic memory processes fades to silence without the stimulation of familiar poetic schemes.

No one wants writing like Hallam’s in academic discourse anymore because it seems too vague. Yet his account captures something important about the aesthetic feel of poetry that is not available to better-disciplined critics. The scientific findings I have discussed illuminate such intuitive feelings in ways that more academically respectable readings do not. A long-standing objection to science is that its cold, emotionless tools do not capture the lived experience of art. Yet, at least for reading, the opposite may be true. Disciplined literary criticism, for all its elevation of subtlety and inflection, has more difficulty encompassing Hallam’s “sort of magic” than does the science of memory.

Thus far I have argued that automatic, implicit operations of memory activate even more material than we realize. Yet another research stream paints a different image, in which reading fiction can make readers uncertain about what they have known for years.36 Participants took an online survey of sixty-four short-answer questions about general world knowledge. Two weeks after they completed the survey, participants read two stories into which experimenters had inserted manipulated versions of thirty-two of these items. From sixteen of the thirty-two items, the experimenters invented false statements, putting eight in each story. For example, for the question “What is the largest ocean in the world?” the corresponding false statement mentioned the Indian Ocean as the largest ocean in the world rather than the Pacific. The experimenters used the other sixteen items to create neutral references, such as mentioning “the largest ocean in the world” (without naming a particular ocean). Each story also had eight true statements unrelated to the previous survey, so each story contained eight false, eight neutral, and eight true statements. Before reading the stories, participants were told that the stories were fictional and might contain inaccurate information. After reading the stories, participants completed a brief filler task and then took another survey of general knowledge. In it, thirty-two questions were new and thirty-two repeated the questions that had been used to create the incorrect and neutral statements in the stories.

Experimenters found that “reading stories containing misinformation led participants to reproduce factual inaccuracies that contradicted their previously demonstrated knowledge.”37 After reading two very short stories (fourteen hundred words each), readers incorrectly answered questions about facts that, two weeks before, they had gotten right. Many similar experiments have shown that this “misinformation effect” is stubborn.38 Readers use information that they have learned in fiction to complete later, unrelated tasks, including problem solving and decision making, and they use this information whether or not it is correct.39

If the prior experiments discussed in this chapter have demonstrated memory’s durability, these experiments do the opposite. In them, memory seems fragile in the face of fiction’s seductions because fiction so easily leads participants to counter what they already know. Even worse, as David Rapp argues, this fragility is “an ordinary consequence of the mechanisms that underlie memory, problem-solving, and comprehension”: readers rely on easily accessed memories; encoded information is usually not completely overwritten by new information; and readers often do not connect a piece of information to the reliability of its source.40

For those interested in nineteenth-century literature, such work sheds light on the otherwise mystifying investment in factual accuracy by the period’s authors and reviewers. For example, in the preface to Bleak House, Dickens, out of all possible topics he could have discussed, insists on the truth of his depictions of Chancery and spontaneous combustion.41 For academic readers this preface is a letdown because the truth of spontaneous combustion feels irrelevant. Krook’s death fits with the novel’s images of inner rot and decay so well that its conformity to fact hardly matters. But its truthfulness mattered for Dickens, and he and other Victorians may deserve more credit than we have given them. Dickensian caricature can take care of itself, but he worries about the danger that readers might be led astray by fiction’s ability to present false knowledge as if it were true. Getting facts right matters as much to writers like Dickens as conveying larger messages because of an ethical sensitivity to fiction’s possible effectiveness as a vehicle for untruth.

The story I have told has two sides. Memory holds on to information and reactivates it quickly, but it can also give up easily on what it should retain. Both possibilities stem from the same phenomenon: ease of access. In the case of the Bill story, information about Bill, relevant or not, is reactivated by Bill’s reappearance in the narrative and is enough to slow readers, even after it has been outdated. With false facts the task changes how memory works. Rather than doing word-recognition tasks or having reading times recorded, participants perform a (comparatively) effortful memory search on a test. They retrieve the material in the story because they read it recently, and the ease of this retrieval trumps long-term knowledge of correct facts.

The bigger point: much of what happens during reading is not accessible to consciousness, and that is a good thing. If it were, we would not finish a paragraph. Yet inaccessibility to consciousness does not make automaticity irrelevant. On the contrary, only when we recognize it as a default mode do the disruptions on which literariness is supposed to depend make sense: we notice these disruptions automatically, though how and whether we choose to make sense of them belongs to conscious awareness. Rather than prize disruptions as the core of literariness, as literary critics since Shklovsky have done, I would rather notice the constant modulations of attention that literary reading assumes, modulations so practiced that they occur under the threshold.42

Only through automaticity can readers enter literary time, when they may spend hours, days, and weeks in the company of an author or set of characters. During that time, automatic processes often run well, even in the most difficult of texts. We make necessary connections with background knowledge, inhibit irrelevant associations, differentially allocate attention, and balance reading’s demands against multiple, incessant other demands that even the quietest of environments make on our easily distracted minds. Even more, automaticity gives us memory activations in our vast semantic network for free, even though most of them are quickly inhibited.

Although literary scholars like to imagine literature as an encounter with the new and unfamiliar, that encounter presupposes the smooth working of the old and habitual. The cost of that smoothness is that it can work too well: we miss much that we later realize is important, misunderstand passages entirely, and uncritically accept information that we ought to doubt. Yet automaticity’s pitfalls might also have benefits: concentrating on some passages as being especially dense or provocative requires the ability to pass over others with less effort.43 As we read, we are hindered and helped by our memory, missing or even misunderstanding much of what we read as we, almost without effort, turn the linear presentation of text into a thick web of felt experience.


1. Stanislas Dehaene et al., “How Learning to Read Changes the Cortical Networks for Vision and Language,” Science 330, no. 6009 (2010): 1359–64, 1359.

2. For an overview of these processes see Robert A. Mason and Marcel Adam Just, “Identifying Component Discourse Processes from the fMRI Time Course Signatures,” in Reading—From Words to Multiple Texts, ed. M. Anne Britt, Susan R. Goldman, and Jean-François Rouet (New York: Routledge, 2013), 147–59.

3. Judith Butler, from her “Commencement Address at McGill University, 2013,” excerpts online at Brainpickings,

4. For examples of how psychologists examine such effects, see Colin M. MacLeod, “Half a Century of Research on the Stroop Effect: An Integrative Review,” Psychological Bulletin 109, no. 2 (1991): 163–203.

5. On automaticity and preattentive processing see Anne Treisman, Alfred Viera, and Amy Hayes, “Automaticity and Preattentive Processing,” American Journal of Psychology 105, no. 2 (1992): 341–62; on the importance of automaticity in learning to read see S. Jay Samuels and Richard F. Flor, “The Importance of Automaticity for Developing Expertise in Reading,” Reading and Writing Quarterly 13, no. 2 (1997): 107–21; and Stephanie A. Lai, Rebekah George Benjamin, Paula J. Schwanenflugel, and Melanie R. Kuhn, “The Longitudinal Relationship Between Reading Fluency and Reading Comprehension in Second-Grade Children,” Reading and Writing Quarterly 30, no. 2 (2014): 116–38; for a traditional view of reading and automaticity see Gordon D. Logan, “Automaticity and Reading: Perspectives from the Instance Theory of Automatization,” Reading and Writing Quarterly 13, no. 2 (1997): 123–46; and Charles Perfetti, “Reading Ability: Lexical Quality to Comprehension,” Scientific Studies of Reading 11, no. 4 (2007): 357–83; and for more recent important work see Katherine A. Rawson and Erica L. Middleton, “Memory-Based Processing as a Mechanism of Automaticity in Text Comprehension,” Journal of Experimental Psychology: Learning, Memory, and Cognition 35, no. 2 (2009): 353–70.

6. Agnes Moors, “Automaticity,” in The Oxford Handbook of Cognitive Psychology, ed. Daniel Reisberg (Oxford: Oxford University Press, 2013), 163–75, 169; I draw on Moors’s discussion extensively in this paragraph.

7. On semantic priming see Eva Van Den Bussche, Wim Van Den Noortgate, and Bert Reynvoet, “Mechanisms of Masked Priming: A Meta-analysis,” Psychological Bulletin 135, no. 3 (2009): 452–77; and Simon van Gaal et al., “Can the Meaning of Multiple Words Be Integrated Unconsciously?” Philosophical Transactions of the Royal Society B (Biological Sciences) 369 (2014); on morphological priming see Joanna Morris and Linnaea Stockall, “Early, Equivalent ERP Masked Priming Effects for Regular and Irregular Morphology,” Brain and Language 123, no. 2 (2012): 81–92; on orthographic and phonological priming see Johannes C. Ziegler, Daisy Bertrand, Bernard Lété, and Jonathan Grainger, “Orthographic and Phonological Contributions to Reading Development: Tracking Developmental Trajectories Using Masked Priming,” Developmental Psychology 50, no. 4 (2014): 1026–36. For a helpful online demonstration see

8. Matthew J. Traxler, Introduction to Psycholinguistics: Understanding Language Science (Chichester, West Sussex: John Wiley and Sons, 2012), 104.

9. Charles Dickens, Bleak House (1852–53), ed. Stephen Gill (Oxford: Oxford University Press, 2008), 39.

10. For the classic discussion see Victor Shklovsky, “Art as Technique,” in Russian Formalist Criticism: Four Essays, trans. Lee T. Lemon and Marion J. Reis (Lincoln: University of Nebraska Press, 1965), 3–24.

11. Lai et al., “The Longitudinal Relationship,” 119.

12. Moors, “Automaticity,” 165–67.

13. Fernanda Ferreira, Karl G. D. Bailey, and Vittoria Ferraro, “Good-Enough Representations in Language Comprehension,” Current Directions in Psychological Science 11, no. 1 (2002): 11–15; and Hossein Karimi and Fernanda Ferreira, “Good-Enough Linguistic Representations and Online Cognitive Equilibrium in Language Processing,” Quarterly Journal of Experimental Psychology 69, no. 5 (2016): 1013–40.

14. Sanford B. Barton and Anthony J. Sanford, “A Case Study of Anomaly Detection: Shallow Semantic Processing and Cohesion Establishment,” Memory and Cognition 21, no. 4 (1993): 477–87, 479; see also the discussion of shallow processing in Sanford and Emmott, Mind, Brain, and Narrative, 103–9.

15. Myrtle Whitlock Martin, review of Handbook on Tuberculosis for Public Health Nurses, by Violet H. Hodgson, American Journal of Nursing 40, no. 4 (1940): 488; B. W. Hodder, review of Industrialization in West Africa, by J. O. C. Onyemelukwe, Geographical Journal 152, no. 2 (1986): 264; Regina M. Benjamin, “Oral Health Care for People Living with HIV/AIDS,” Public Health Reports 127, suppl. 2 (2012): 1–2, 1.

16. Karimi and Ferreira, “Good-Enough Linguistic Representations,” 1014. The description has close parallels with the account of decision making described by Daniel Kahneman in Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011), 59–70.

17. David A. Swinney, “Lexical Access During Sentence Comprehension: (Re)consideration of Context Effects,” Journal of Verbal Learning and Verbal Behavior 18, no. 6 (1979): 645–59.

18. For examples of similar effects obtained with visual presentation, see Walter Kintsch and Ernest F. Mross, “Context Effects in Word Identification,” Journal of Memory and Language 24, no. 3 (1985): 336–49; Robert E. Till, Ernest F. Mross, and Walter Kintsch, “Time Course of Priming for Associate and Inference Words in a Discourse Context,” Memory and Cognition 16, no. 4 (1988): 283–98; and Kerrie E. Elston-Güttler and Angela D. Friederici, “Native and L2 Processing of Homonyms in Sentential Context,” Journal of Memory and Language 52, no. 2 (2005): 256–83.

19. Mark S. Seidenberg, Michael K. Tanenhaus, James M. Leiman, and Marie Bienkowski, “Automatic Access of the Meanings of Ambiguous Words in Context: Some Limitations of Knowledge-Based Processing,” Cognitive Psychology 14, no. 4 (1982): 489–537.

20. On inhibitory control of attention and its importance for reading see Penny Chiappe, Lynn Hasher, and Linda S. Siegel, “Working Memory, Inhibitory Control, and Reading Disability,” Memory and Cognition 28, no. 1 (2000): 8–17.

21. A classic statement of this model is Allan M. Collins and Elizabeth F. Loftus, “A Spreading-Activation Theory of Semantic Processing,” Psychological Review 82, no. 6 (1975): 407–28.

22. “Through the strait pass of suffering” (792), in Final Harvest: Emily Dickinson’s Poems, ed. Thomas H. Johnson (Boston: Little, Brown, 1961), 197.

23. Edward J. O’Brien, Michelle Rizzella, Jason E. Albrecht, and Jennifer G. Halleran, “Updating a Situation Model: A Memory-Based Text Processing View,” Journal of Experimental Psychology: Learning, Memory, and Cognition 24, no. 5 (1998): 1200–1210, 1210. O’Brien has generously made all the passages in his 1998 experiment available on his web page at

24. Alan Baddeley, Working Memory, Thought, and Action (Oxford: Oxford University Press, 2007), 1.

25. Marcel Adam Just and Patricia A. Carpenter, “A Capacity Theory of Comprehension: Individual Differences in Working Memory,” Psychological Review 99, no. 1 (1992): 122–49.

26. O’Brien et al., “Updating a Situation Model.” Important antecedents include Jason E. Albrecht and Jerome L. Myers, “Role of Context in Accessing Distant Information During Reading,” Journal of Experimental Psychology: Learning, Memory, and Cognition 21, no. 6 (1995): 1459–68; and Jason E. Albrecht and Edward J. O’Brien, “Updating a Mental Model: Maintaining Both Local and Global Coherence,” Journal of Experimental Psychology: Learning, Memory, and Cognition 19, no. 5 (1993): 1061–70.

27. Anne E. Cook and Edward J. O’Brien, “Knowledge Activation, Integration, and Validation During Narrative Text Comprehension,” Discourse Processes 51, no. 1–2 (2014): 26–49, 27.

28. Here Cook and O’Brien draw on Walter Kintsch’s Comprehension: A Paradigm for Cognition (Cambridge: Cambridge University Press, 1998), 96–101.

29. Panayiota Kendeou, Emily R. Smith, and Edward J. O’Brien, “Updating During Reading Comprehension: Why Causality Matters,” Journal of Experimental Psychology: Learning, Memory, and Cognition 39, no. 3 (2013): 854–65.

30. For a classic discussion of how psychologists understand their treatment of linguistic memory as differing from a purely mechanistic one, see James J. Jenkins, “Remember That Old Theory of Memory? Well, Forget It,” American Psychologist 29, no. 11 (1974): 785–95.

31. Albrecht and Myers, “Role of Context,” 1468.

32. Roman Jakobson, “Two Aspects of Language,” in Language in Literature, ed. Krystyna Pomorska and Stephen Rudy (Cambridge, MA: Harvard University Press, 1987), 95–114, 111.

33. R. Brooke Lea, David Rapp, Andrew Elfenbein, Aaron D. Mitchel, and Russell Swinburne Romine, “Sweet Silent Thought: Alliteration and Resonance in Poetry Comprehension,” Psychological Science 19, no. 7 (2008): 709–16.

34. R. Brooke Lea, Chelsea Voskuilen, and Andrew Elfenbein, “Rhyme as Memory Cue: Do Poets Resonate?” poster presented at the annual meeting of the Society for Text and Discourse, Chicago, Illinois, 2010. For more on rhyme’s psychological effects, see Matthew S. McGlone and Jessica Tofighbakhsh, “Birds of a Feather Flock Conjointly (?): Rhyme as Reason in Aphorisms,” Psychological Science 11, no. 5 (2000): 424–28.

35. Arthur Henry Hallam, “On Some of the Characteristics of Modern Poetry, and on the Lyrical Poems of Alfred Tennyson,” Englishman’s Magazine 1 (1831): 616–28, 618.

36. For the pioneering article in this field, see Elizabeth J. Marsh and Lisa K. Fazio, “Learning Errors from Fiction: Difficulties in Reducing Reliance on Fictional Stories,” Memory and Cognition 34, no. 5 (2006): 1140–49.

37. Lisa K. Fazio, Sarah J. Barber, Suparna Rajaram, Peter A. Ornstein, and Elizabeth J. Marsh, “Creating Illusions of Knowledge: Learning Errors That Contradict Prior Knowledge,” Journal of Experimental Psychology: General 142, no. 1 (2013): 1–5, 3.

38. For a thorough investigation, see David N. Rapp and Jason L. G. Braasch, eds., Processing Inaccurate Information: Theoretical and Applied Perspectives from Cognitive Science and the Educational Sciences (Cambridge, MA: MIT Press, 2014).

39. See also Deborah A. Prentice, Richard J. Gerrig, and Daniel S. Bailis, “What Readers Bring to the Processing of Fictional Texts,” Psychonomic Bulletin and Review 4, no. 3 (1997): 416–20, for evidence of the power of reading to influence reader belief; in that study, participants were more likely to accept false facts in a fictional text set at a different university than when the same facts appeared in a text set at their own university.

40. David N. Rapp, “The Consequences of Reading Inaccurate Information,” Current Directions in Psychological Science 25, no. 4 (2016): 281–85, 282.

41. Dickens, Bleak House, 5–6.

42. See Shklovsky, “Art as Technique.”

43. For a cognitive perspective, see Adam L. Alter, “The Benefits of Cognitive Disfluency,” Current Directions in Psychological Science 22, no. 6 (2013): 437–42.