On Accidental and Parasitic Language

What “Against Theory” is saying, in the end, is simple: we attribute intention in order to read anything, and we must do that. 

by Lisa Siraganian

“Can Computers Create Meanings?” The point of the wave poem in Knapp and Michaels’s “Against Theory” was neither to ask that question nor to ask whether oceans can intend to write messages in the sand. The point was to show why the kinds of responses you can come up with to account for the wave poem fall into only two categories. Either you assume some entity intending to mean, or you assume an account of natural accidents producing marks resembling words. In the first instance, you assume that intention and meaning exist, albeit in a very unlikely form. In the second instance, you assume the marks are really “natural accident[s]” that “merely seem to resemble words”—and seem to resemble language and poetry. In this second case, the marks can only be understood as authorless and thus as “accidental likenesses of language”—likenesses that are intentionless and, accordingly, meaningless and uninterpretable. The question raised by the wave poem phenomenon, then, is whether to interpret or not (pp. 16, 24). And that question has not changed; it is still the one we have to answer.


What if “accidental likenesses of language” are everywhere? In a tweet reacting to my nonsite essay, “Against Theory, Now with Bots: On the Persistent Fallacy of Intentionless Speech,” Matt Kirschenbaum wonders what changes about Knapp and Michaels’s argument if “you encountered such things every time you left your house, on every street and sidewalk? . . . Because that’s now the internet.” As I understand it, Kirschenbaum’s suggestion is that the ubiquitous presence of “accidental likeness” writing, generated by a sophisticated LLM like ChatGPT, has altered the way meaning works. Alternatively, the claim might be that LLMs have revealed hidden truths about the way meaning has always worked. Either ChatGPT changed meaning or ChatGPT revealed a new truth about meaning. Either way, that situation apparently entails understanding computer-generated writing as belonging to a third category, as Seth Perlow envisions: “an LLM can be seen as an automatic language game from which emerge meanings that are very hard to read as empty or purely accidental.” Computer writing, then, would be understood neither as intentional nor as accidental but as something else.

But exactly how does the presence of a trillion more bot poems on the Twitter beach render any single one of them intentional rather than accidental? Perhaps the assumption is that once we are facing so many accidents, something must be going on that is changing or revising what this accidental writing constitutively is. But even if we grant this dubious conclusion, it presents no real dilemma for our two options: to interpret or not. We are simply in the first situation imagined by “Against Theory.” That is, we have decided that, at least sometimes, computers can mean what they say. To put it in N. Katherine Hayles’s terms, we’ve determined that computational media “are capable of meaning-making practices within their umwelten,” which includes “making interpretations.” Imagining intentionality-creating computers is entirely consistent with the “Against Theory” framework (even if, as Knapp and Michaels point out, that is a claim that would require empirical confirmation).

You might still object that something different is at work, following Perlow’s invocation of an “automatic language game.” If ChatGPT can generate text that has all the formal features of a poem, why can I not interpret that text as a poem? First of all, again, “Against Theory” is not telling you what you can and cannot interpret; it’s identifying the background premises of interpretation. What can be so difficult to see in an example like this one is that the preliminary step for interpreting is the presumption of intention. If you decide to read and interpret the ChatGPT text as a poem, you have already presumed that it was intended. The premise is so deeply ingrained that we forget it’s operating; the point of the wave poem is to make you see the presumption.

The conjecture of intention is required to make the signifier meaningful as a sign—whether that intention is correctly identified or totally misinterpreted, whether it remains uncertain or is, after the fact, denied by readers. Furthermore, not just signifiers but the formal features of any poem are only formal if they are intended. That’s what makes them formal. Otherwise, they are just the shapes or codes (inputs generating outputs, causes producing effects) that computers have been programmed, by some human being, to produce. To call something a “formal feature of a poem” is not a description of a thing or an event in the world; it’s a construction of an intention in a poem.

What “Against Theory” is saying, in the end, is simple: we attribute intention in order to read anything, and we must do that. Every time you read a computer-generated post on Facebook or Twitter or Reddit, reading it is only possible because you do imagine an author with intentions. In our ChatGPT world, the poem-simulacra have gotten exceptionally good. We aren’t very well equipped to decide, in these new kinds of cases, whether there is someone actually intending one or not, likely because we have far more reasons to assume intention than not. But that’s really just to say that LLMs work because they are not parasitic on trees or rocks but on intentions. And it cannot be underscored enough that the reason these simulacra work so well is that they aggregate, copy, slightly vary, and reorder sentences from Reddit, Wikipedia, and published books, all of which people have already written and, thus, already intended.
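
To make that parasitism concrete, here is a deliberately crude sketch (mine, not Knapp and Michaels’s, and nothing like a transformer at scale): a toy bigram generator whose every output is, by construction, a recombination of words people have already written and, thus, already intended. The three-sentence corpus is invented for the illustration.

```python
import random
from collections import defaultdict

# A toy corpus standing in for the human-written text a language model
# is trained on. Every word below was written, and so intended, by a person.
corpus = [
    "the sea writes nothing in the sand",
    "the sand remembers what the sea forgets",
    "nothing in the sand was ever intended",
]

# Build a bigram table: each word maps to the words that follow it
# somewhere in the corpus.
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        transitions[current].append(following)

def generate(start: str, length: int = 8) -> str:
    """Emit a 'poem' by recombining fragments of the human-written corpus."""
    word, output = start, [start]
    for _ in range(length - 1):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the sand remembers what the sea writes nothing"
```

An actual LLM learns statistical continuations rather than storing sentences verbatim, but the dependence is the same in kind: no human-written corpus, no “poem.”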

LLMs, rather than challenging “Against Theory’s” account of intention, merely mask it. But if ChatGPT has no consequences for the basic alternatives proposed by “Against Theory,” that does not touch the question of whether ChatGPT has consequences for the world. It is already obvious that those consequences are immense and disturbing.

Lisa Siraganian is J. R. Herbert Boone Chair in Humanities and professor of comparative thought and literature at Johns Hopkins University. Her most recent book is Modernism and the Meaning of Corporate Persons (2020), winner of the MSA Book Prize and the MLA’s Matei Calinescu Book Prize.