The New York Times (April 16) just picked up a concept that has been circulating in AI research circles for a couple of years: "jagged intelligence." The term describes the uneven capabilities of AI – its capacity to answer complex math questions while failing laughably at simple common-sense tasks. Reports of such "confusing uneven capabilities," as a recent story in The Globe and Mail (January 19) put it, often leave the most important question unasked: can these machines actually think? What kind of meaning is produced in algorithmic "thinking"? Paul Kockelman, an anthropologist at Yale, answered these questions in a conference keynote in May 2023, then expanded the lecture into Last Words: Large Language Models and the AI Apocalypse, published by Prickly Paradigm Press in 2024. The pace of developments in the last two years has tested some of his empirical claims about the limitations of machine learning, but his unusual expertise – and wit – provides deep and enduring insight into the problem.

The recent "jagged intelligence" frame is essentially one more response to the productivity question: which tasks can AI handle, and which fall outside its reach? It's a question that is obsessing corporate managers in every sector of business, and it has important echoes in contemporary debates about higher education. Ben Wildavsky's Washington Monthly article (April 15) is one of many that argue for the urgent need for a liberal arts education and for the "only by human" questions that can't be answered by machine learning. But the possibilities and limitations of AI in business or education rest on an understanding of how machines make sense – not on whether intelligence is limited or infinite, but on the particular ways in which it is produced. And here is where Kockelman's pamphlet makes a signal contribution.

Last Words does a lot of work for a small book, starting with a lucid account of how LLMs actually work, from pretraining on next-word prediction to fine-tuning through reinforcement learning with human feedback (RLHF). Everything that LLMs do – summarize, translate, generate code, hold a conversation, diagnose illness, pass the bar exam – is about predicting the next word in a sequence of words. Building on the work of the American philosopher and mathematician Charles Sanders Peirce, Kockelman shows how different this is from meaning-making by humans: human cognition is about referencing a world outside the mind – making meaning from a gap or "slash," as Kockelman puts it, between language and the world. LLMs, with no access to the world, can only create a gap between earlier and later parts of a text: they can only be about "cotext" (word-word relations), not "context" (word-world relations). LLMs can literally make sense, but they cannot reference – cannot make sense of the world. Thus when AI is "brilliant," it is because the task can be solved by matching patterns in text. When it is "stupid," it is because the task requires anchoring language to things, situations, "reality" itself – something the machine can't be trained to do.
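
To make the point concrete, here is a toy sketch (my own illustration, not from the book) of what "predicting the next word" amounts to: a tiny bigram model that counts which word follows which in a small corpus and, given a word, returns a probability distribution over what comes next. Real LLMs replace the counting with a neural network over billions of parameters and far longer stretches of preceding text, but the training objective is the same word-word relation.

```python
# Toy illustration of next-word prediction (not Kockelman's code).
# A bigram model counts which word follows which in a small corpus,
# then turns those counts into a probability distribution over the next word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count word-word successions: the model's only evidence is "cotext".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return a probability distribution over the words that follow `word`."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(predict_next("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(predict_next("sat"))  # {'on': 1.0}
```

Nothing in this procedure ever touches the world the words are about; the model's competence tracks only what can be recovered from patterns in text, which is exactly the distinction Kockelman presses.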

"Jagged intelligence" comes from the capacity to process unthinkably large amounts of text (and images) without ever experiencing the physical world. But Kockelman's approach suggests that the shape of the "jaggedness" is itself predictable, once you understand what the machine is trying to do. It's an important point, because it allows a critique of the hype. Big Tech is profiting – or at least trying hard to – from our own predictable tendency to infer and claim profound "intelligence" from machine performance. Kockelman shows how the absence of a "reality principle" – the machine's inability to be wrong and to know it, and to correct itself against something outside the text – is what sets human meaning apart.

Last Words runs to fewer than 140 pages, but it is packed with concepts that help make sense of the AI debate. No matter how "fine-tuned" the machine becomes – and it is clear that multimodal inputs, embodiment in robots, and other forms of machine learning will make it seem that LLMs have overcome the absence of a "reality principle" – it can only approximate human reasoning. Prickly Paradigm Press is proud to offer Last Words as a physical thought companion – soon available to read free online – that, far from simply dismissing the capacities of AI, makes sense of its inherent limitations.
