

Jacek Dukaj

Of all the traditional models of consuming culture, reading is the only one to which man is not predisposed through evolution.

Our ability to listen to music, look at images, and watch theater performances depends on skills honed by our brains and senses over millions of years. The mere ability to perceive characters on paper is not enough to be able to read; reading involves a highly complicated process whereby symbols are transformed into meaning. It is a product of the training that each of us must undergo before we enter into the empire of the book.

This training, along with the repeated experience of reading, changes our cognitive structure in a fully quantifiable way. Neurological scans reveal which parts of the brain are activated during reading; we can observe differences at both the micro and macro scale. Experiments have shown that merely imagining letters is enough to fire neurons in different places, depending on whether the characters are upper or lowercase.

Achieving fluency in a given language involves nothing short of a complete rewiring of the brain, i.e. developing neural pathways and modes of activity that best respond to the nature of that particular language. In Proust and the Squid: The Story and Science of the Reading Brain, Maryanne Wolf cites numerous fascinating examples of the above. The English reader, whose language uses the Latin alphabet, relies almost exclusively on his left hemisphere, particularly its posterior area. People who use the Chinese writing system, a logographic script, recruit both hemispheres while reading. Japanese readers offer even more insight: their language uses the kana syllabary for proper nouns, loanwords, technical vocabulary, etc., while the kanji logographs closely resemble their Chinese counterparts. Brain scans show that Japanese readers “switch” between the areas of activity employed in Chinese and English, even when deciphering the same words written in kana and kanji.

Can these differences be attributed solely to variation between languages, or do they also stem from how we use them? The history of mankind has seen at least one revolution: the transition from the culture of the spoken word to the culture of the written word. Grzegorz Jankowicz cites Socrates’s famous protests in Plato’s Phaedrus as an example of the unfounded fear of advances in civilization. But let us stop to consider whether Socrates might in fact have been right.

A change has occurred. Those brought up in the culture of the written word (especially the printed word) have a different way of speaking and thinking, a different way of remembering, and they perceive the world differently as well.

No longer do we learn by listening and talking, shaped by our personal relationship with a teacher (or master); we learn through visual aids, from dead, immutable text, “from no one.”
Socrates regarded learning as a dialog: a dialog with someone or with oneself. Homeric Greek could not be used to describe thought processes as completely as we do now, as if we were reading our minds.

In oral culture, the memory of the individual serves the purpose of a library. How many pages of prose or poetry have you known by heart for many years? Memorizing a poem or a song is an impressive feat by today’s standards, a special skill possessed by thespians and performers. Meanwhile, my grandparents, great-grandparents, and their peers could recite extensive passages of classic literature off the top of their heads, even at a ripe old age.

If you learn by ear and rely solely on your memory, your immediate reaction is not to question what you hear; your first priority is to remember the message as faithfully as possible. It is only with the proliferation of writing that we developed an interpretative approach along with our modern critical thinking apparatus.

More important than his claims of writing’s detrimental effects on memory was Socrates’s objection regarding “false understanding”: you could read the entire body of knowledge in a given field and consider yourself competent in it without actually understanding what you have read. All you would have is knowledge, i.e. information. If you learn through personal interaction with an interlocutor — a teacher — you cannot skip ahead to the next stage of your discourse without comprehending the essence of the current stage. A person can tell if you don’t understand what you’re talking about; a book cannot.

These differences are objectively quantifiable. Studies conducted by Portuguese neurologists involving MRI scans of 60-year-olds with varying levels of education have shown that illiterates conduct linguistic tasks in their frontal lobes, even in conversation, while readers will activate their temporal lobes.

The transition to a culture based on digital communications technology — and man’s transition to a brain rewired by this technology — is just as monumental a shift as the one witnessed by Socrates.

The media debate surrounding the “internet reading curse” began with a cover piece by Nicholas Carr published in the July 2008 issue of The Atlantic, titled “Is Google Making Us Stupid?”. Carr later expanded the article into a book, The Shallows. Other media outlets were quick to pick up on the topic, which was covered even by print magazines and newspapers, in Poland as well.

Journalists cited the results of an experiment conducted by Gary Small of UCLA, which received wider attention thanks to Carr. Small ran two groups of volunteers through an MRI: the first consisted of internet-savvy computer users, while the second group was composed of newbies. The brain scans showed clear differences in brain activity in subjects while they were conducting internet searches: the activity in the brains of the experienced internet users was much more extensive, especially in the prefrontal cortex, which is responsible for decision-making and complex reasoning. Small then gave the participants a six-day break, during which the novice surfers were asked to browse the internet for one hour a day. Brain scans conducted after this period showed dramatic changes in the cognitive activity patterns of the new internet users: their brains were virtually indistinguishable from those of the web-savvy group. Just a few hours over the course of a week were enough to turn them into slightly different people who thought just a bit differently.

While the clinical studies are inconclusive, they remain consistent with anecdotal evidence provided by internet users; Carr was simply the first to compile and publicize these accounts.
“Immersing myself in a book or a lengthy article used to be easy. (…) Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text.”
“My thinking has taken on a ‘staccato’ quality. I can quickly scan short passages of text from many sources online, but I’m no longer able to read War and Peace.”

Even when they want to, they just can’t “switch” their brains back through sheer willpower: internet fluency comes at the cost of traditional reading skills. You simply can’t have both types of brain at the same time.

I’ve noticed that the mere availability of the internet switches my brain into the “Google” mode of thinking, making it extremely challenging to focus and write in the traditional fashion. I have no choice but to physically cut myself off from the influx of information, so that my mind can shift into “book thinking” and stop skipping from one association to the next in search of new stimuli. But closing the browser window or putting a few yards between myself and the computer isn’t enough: I still know that I am a mere second — just one decision — away from that stream of information; my mind remains at full alert.

The same is true of my attempts at reading profound narratives in front of a running computer with access to the internet, even if I’ve switched off the monitor. Hence the ever-growing popularity of having two strictly separate computers: one connected to the internet and an offline machine used exclusively for work.

Then again, what if it’s just a question of form? Are we capable of absorbing the same data equally well if they are conveyed in the appropriate “internet mode”? A test of this premise was conducted as early as 2001: one group was asked to read a story in the traditional form, while the other was provided with an online (hypertext) version. Not only did the internet readers need more time to complete the assignment, they were also seven times more likely to report that they had not understood what they had just read. A similar study comparing text and multimedia messages produced similar results.

Large-scale studies have been conducted in which researchers “spied” on the online activities of internet users. Even web server logs from research sites show that the average user reads at most a few paragraphs of a paper before skipping ahead to the next one. They might download lengthier articles, but how likely are they to come back to them later?

The business world has been aware of this fact for quite some time now: just look at how the content featured on commercial websites is split into several tiny portions, each on a separate page.

There is even an internet acronym used to describe this overwhelming desire to abandon excessively elaborate treatises: TL;DR.

Too Long; Didn’t Read.

There are deeper changes under way, ones that go beyond how we acquire information and touch the very core of our thought processes.

Friends in academia tell me about the growing discrepancy between the older generations of students and the ones “brought up on Google.” It is becoming increasingly difficult to enter into meaningful dialog with the latter, a dialog based on methods of reasoning typical for “book culture.”

Reasoning that involves juggling long chains of implications appears to be a growing challenge.

A → B → C → D → E

On the other hand, “Googlefied” minds are adept at reasoning “via association,” and can use this method to operate on large arrays of data.

(A, B, C, D) (E, F, B) (A, C, E, G, H, I)

Associative reasoning is replacing inference and implication in almost every area of our lives. It has long become the norm in advertising, but political debates and even articles in the press have begun communicating with us in a manner that abandons logically constructed arguments in which one concept results from another, favoring instead thundering salvos of disassociated statements. “Poland is beautiful.” “Three times three equals nine.” “My opponent is a boor.”

Implication has ceased to even serve a rhetorical purpose. Former minister Janusz Kaczmarek, for instance, uses the word “therefore” as a comma; does anyone even notice the lack of a logical relationship in his statements?

Google has raped our brains.

Readers also write. (Or perhaps I should say: writers also read?)

Google, of course, is merely the symbol and lens that focuses the modern technologies and social processes that have been changing our lives since the beginning of the computer revolution. But the relationship works both ways: it’s not just how we consume content, but how we produce it that matters.

There is an oft-repeated anecdote about Nietzsche and his experience with the typewriter, which his failing health forced him to start using in 1882. His friends immediately noticed a difference in his writing: the thinker had become even more concise and aphoristic. Heidegger expressed utter disdain for the machine on principle, regarding it as the cause of an even more brutal separation between the object of writing (the word) and its subject (the writer). To the layman, the rituals performed by literary authors come across as downright magical. Nabokov wrote standing up, scribbling away on thousands of note cards which he would later rearrange in arbitrary order, a process reflected by the intricate structure of his novels. Capote, on the other hand, would write lying down, and followed a specific process: he would write the first and second draft in longhand, in pencil, before typing up the final copy. When Neal Stephenson transitioned from sci-fi to writing historical fiction, he set his computer aside, choosing instead a fountain pen and reams of paper.

But these are not empty rituals: they reflect the natural conditioning of the mind. Wittgenstein (in his later period) believed thinking to be a matter of practical, physical activities. To think is to operate on symbols: with the hand (when we write) and with the mouth and throat (when we speak). “It thus makes sense to say: ‘we think with our lips,’ ‘we think with a pencil and sheet of paper’.”

Nowadays, we think with text editors, e-mails, tweets, text messages, and the keyboards of our computers and cell phones.

When writing in text editors, we experience language as something that is infinitely liquid. Such literature can be symbolized by water, while wood could be the symbol of paper literature.

E-text spills over time and space.

It spills over time because it never freezes into a concrete, immutable form. Open the Bible in Microsoft Word and press any key — you’ve just modified the Scripture. What this entails, in a sense, is the end of the kind of interpretative thinking fostered in traditional written culture: the word has once again lost its permanent, objective form.

It spills over space because simple operations in a text editor let us cut, copy, and paste entire blocks of text. The disastrous ease with which we can move and change the course of events in a story translates directly into the rhythm of narration in contemporary literature. A new form of repetition has become the bane of editors and proofreaders, who find not only repeated words but entire duplicate passages. Authors use the copy/paste function so many times that even they forget what goes where. The text melts away in the writer’s own mind.

Writing in a text editor limits our minds and sight to the excerpt currently visible on-screen. The window of our “narrative attention span” is thus significantly shorter than it would be in traditional, paper-based writing. This quality is directly tied to the replacement of inferential reasoning with associative reasoning. We have developed a habit of reading through “aperture” vision.

Imagine reading as a process whereby the light of the mind illuminates an obscured linguistic fresco.

The headlamps have been removed from our foreheads and replaced with penlights.

What effect does all of this have on the literature being written today?

One would be hard pressed to formulate a clear diagnosis. In practice, both authors and readers have been pursuing two mutually exclusive trends.

On the one hand, there exists a strong conviction that the changes in how we live and how we participate in culture are forcing parallel changes in literary forms. Jerzy Sosnowski correctly points out the symbolic role of Manuela Gretkowska’s writing: there was a moment when most Polish literature resembled her work in its length and structure, and this was considered a matter of historical necessity.

The simplest explanation is as follows:

(1) authors write the way they do because they are no longer capable of writing any other way (the brains of the writers have been rewired);
(2) authors write the way they do because readers are no longer capable of reading any other way (the brains of the readers have been rewired).

Option 2, in turn, involves two possibilities:

(2a) authors are correct in their assumptions (the readers have in fact changed);
(2b) authors are incorrect in their assumptions (the authors merely imagine that the readers have changed).

Then again, a cursory glance at bookstore shelves will show that the above theories are simply not grounded in reality. The growing popularity of the XL-sized epic novel over the past few decades is proof of a completely opposite trend. It is untrue that such books appeared only recently, as a “backlash” against Google culture. Books of increasing length have been a mainstay of popular literature in the West at least since the 1970s. (This likely has something to do with the prevalence of the hardcover standard in the English-language publishing world. The format is the first, most expensive, and most profitable edition to hit the shelves.)

In half of the interviews I’ve given on Ice, the bewildered journalist/critic has opened with a question about the intended audience of that huge book. “After all,” they claim, “no one reads long books anymore.”

This frame of mind has its parallel in the phenomenon of John Kerry or Poland’s Freedom Union. “I don’t know anyone who didn’t vote for Kerry, so how could he have lost?” Influential circles are easily entrenched in their own cultural ecosystem, from which they project their own fast-paced, urban, high-tech images of normality onto the general public. Naturally, it is they who are at the forefront of civilization and are most affected by these changes. It would thus seem that option 2b is the correct choice.

Does this mean that these processes will spread and ultimately encompass us all? Are we one step away from becoming neurologically incapable of reading a passage longer than a blog post unless we shut ourselves off in some offline hermitage?

The diagnosis strikes me as overly simplistic; the alternatives are too black and white. It is true that books are changing, but not in the way the diagnosis suggests. It may be harder to read War and Peace today, but we still enjoy longer titles, albeit ones with a different structure.

Newer novels are hybrids of sorts, combining the characteristics of the classic, heavyweight epic with the literature of the Google generation. Through trial and error, the novel has developed a format that can effectively disarm the TL;DR mechanism in the minds of the readers.

And it is not alone in this evolution: the combination of long narrative forms with the Google sensibility is something I consider to be a symbol of the first decade of the 21st century, a time marked by the remarkable growth of the TV series, the innumerable episodes of which are watched in marathon sessions. TV shows have become our default form of storytelling. Computer games have also become a form of entertainment enjoyed for hours on end (or even all night).

The key element of “Google literature” is the narrative micromodule. This characteristic is immediately visible in the visual layout of the text itself. Take any book in Stieg Larsson’s Millennium series (the hallmark of the contemporary bestseller, each over 600 pages in length) and open it at random. The blocks of text are the direct equivalent of modern editing in a TV show: shot, shot, brief dialog, cut.

This leads us to another characteristic shared by books and shows: simultaneity. The events are split among several subplots, locations, and protagonists (or more, as in the case of Stephen King novels, techno–thrillers, and fantasy sagas). The novel (and TV show) can thus artificially fulfill the Googlefied brain’s irresistible urge to skip between new data streams: if a book won’t let you launch a new application, pull up a new website, or open up an IM window, it can take you to a new plot line, new scenery, and a new character before that Pavlovian response kicks in. And it does this again, and again, and again.

Encapsulation is another powerful attribute shared by literature and TV shows. It allows our brains to confine themselves to the narrow area illuminated by the penlight of our attention spans. Even if you’re watching half a season back-to-back or reading a thick volume, the content of a single episode or a few dozen pages is all you need to remember in order to keep up with the plot. In TV shows, encapsulation is provided by the self-contained episode structure and the telegraphic recap of events shown before the title sequence. The older generation of readers (and reviewers!) complain about the wordiness and unnecessary repetitiveness of Googlefied novels. But that redundancy is precisely what keeps the Googlefied reader from getting lost.

A book is not a blog, but at any given moment it cannot demand more of its readers than a blog does.

In reality, however, it is not blogs that are making new inroads in online communication. The brunt of the growth has shifted to even quicker and more concise internet services such as Twitter and Blip, along with their characteristic narrative models (“bare your soul in 140 characters or less”), as well as textual and non-text-based social networks.

The internet has spawned an extraordinary abundance of written content. Never in the history of mankind have average people written as many letters as they did in the heyday of e-mail, a medium that is currently in decline. All the signs point to the domination of the written word online as being a passing stage, a direct result of the technological limitations on data input and output, as well as its administration, organization, and searchability. It was (and still is) simply too difficult to conduct operations on sound, images, and film using affordable and user-friendly software and hardware, at least when compared to text. The industry, however, has been pumping ever-growing sums into R&D, and we may expect audio and video formats to gradually replace text as a medium of communication.

What changes will occur in our brains? How will books adapt to these changes? We can only guess, but we have reason to be hopeful about the adaptability of literature and its resilience — even when it comes to such conservative genres as the novel — in the face of the onslaught of new media.

A different brain reads – slightly differently and slightly different books – but, in spite of everything, it still READS.

To paraphrase Stanisław Lem, even if “everyone writes,” no one reads; and if they do read, they don’t understand; and even if they do understand, they don’t remember.
Many commentators have expressed fears of a deluge of written material, along with a resulting devaluation — or downright desecration — of literature. (A quick Google search reveals dozens of forums and websites devoted to teaching writing and sharing ways to publish your own book.)

I regard these fears as the product of a false understanding of literature as an art — a fetishization of the very act of writing.

Is the production of written material the true essence of literature, or is it what we convey through it, what we conjure up in the minds of our readers? The campaign against illiteracy has not made everyone an artist and writer, and neither will popular access to the means of publication. Vanity presses are not a new invention. Anyone can declare themselves a writer and produce a long list of publications to prove it. So what?

The absolute freedom to publish granted by the internet, whether it applies to literature or other forms of artistic expression, should have a purifying effect, stripping the creative act of its ornamentation and rituals.

Just because you can write doesn’t mean you’re special. Nor does writing a story or a novel make you special. What matters is quality, originality, and power.

Originally published in "Tygodnik Powszechny"
Translated by Arthur Barys