Artificial Intuition, not Artificial Intelligence
LLMs are Artificial Intuition, not Artificial Intelligence. I believe this is a meaningful analogy.
First, this paper suggests LLMs simulate reasoning by reaching for pre-baked reasoning-like behavior in their training data, rather than baking it from scratch:
LLMs seem to fake both "solving" and "self-critiquing" solutions to reasoning problems by approximate retrieval. The two faking abilities just depend on different parts of the training data (…and disappear when such data is not present in the training corpus…)
And yet, several other researchers report results that seem to indicate that some form of self-critiquing mode helps the solving mode.
The explanation for this seeming disparity is that the observed self-critiquing power is just approximate retrieval on corrections data informing approximate retrieval on correct data…
This ability to fake solving or critiquing by retrieval gets exposed when LLMs are presented with problems/domains for which they didn't have either the correct data or the corrections data in their training corpus.
An LLM guesses the next token by assembling good candidates from a hyperdimensional field of associations: its model. It doesn’t seem to do abstract reasoning. Reasoning has a deep, recursive shape, and LLMs are computationally shallow. Token prediction executes a fixed number of computational steps, then returns. No deep recursion.
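To make that contrast concrete, here is a toy Python sketch (my own illustration, not how any real model or solver is implemented): the "prediction" function always does the same fixed amount of work per output, while a genuinely recursive computation goes as deep as the problem demands.

```python
# Toy illustration only: fixed-depth "prediction" vs. input-dependent recursion.

def predict_next(prompt: str, depth: int = 4) -> str:
    """Stand-in for next-token prediction: exactly `depth` passes over the
    input, set by the architecture, no matter how hard the prompt is."""
    state = prompt
    for _ in range(depth):      # fixed number of steps, then return
        state = state[::-1]     # placeholder transformation
    return state

def hanoi_moves(n: int) -> int:
    """Stand-in for reasoning: recursion depth grows with the problem.
    Counts the moves needed to solve Tower of Hanoi with n disks."""
    if n == 0:
        return 0
    return 2 * hanoi_moves(n - 1) + 1   # recurse as deep as the problem demands

print(predict_next("a rustle in the bushes"))  # same amount of work for any input
print(hanoi_moves(10))                         # 1023; cost scales with the problem
```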
So, not reasoning, then. Perhaps more like intuition?
What am I doing when I have an intuition? It seems that my intuitions are shaped by my experience, and this produces a sort of field of impressions within my unconscious. When I encounter something new, my unconscious draws from these impressions, assembling related impressions together to produce a gut feeling, or an insight, or a dream.
The brain is a machine for jumping to conclusions. (Daniel Kahneman)
The unconscious is a machine for operating an animal. (Cormac McCarthy)
Much of the thinking we humans do is not reasoned, but intuitive. Reason is powerful, but not primary, an aftermarket part bolted on to our intuition—and what if we could bolt reasoning on to LLMs? But I digress…
Intuition is versatile. It allows us to rapidly infer patterns from incomplete information. A rustle in the bushes? Could be a lion. And if you’re living on the savanna, your ability to form that picture in an instant is the difference between survival and death. Never mind that you may be hallucinating the lion. You jump back, you survive. You would have survived had there been a lion, too. This is more important than the soundness of your reasoning.
Humans particularly excel at two aspects of inductive pattern recognition. The first is relating new experiences to old patterns through metaphor and analogy making. The next time you are in a meeting, see how frequently people reason by analogy, saying things like “this is just like the industry shake-out of 1987” or “this customer reminds me of Company X”…
Second, we are not just good pattern recognizers, but also very good pattern completers. Our minds are experts at filling in the gaps of missing information. The ability to complete patterns and draw conclusions from highly incomplete information enables us to make quick decisions in fast-moving and ambiguous environments.
(Eric Beinhocker, 2006, The Origin of Wealth)
Like intuition, LLMs draw insights from training data. Like intuition, LLMs are garbage in, garbage out. Intuition is fast where reasoning is slow. Intuition is broad where reasoning is computationally deep. Intuition hallucinates to fill in the gaps; reason gets stuck in halting problems.
We jump to conclusions, then we tell a story about it.
Stories are vital to us because the primary way we process information is through induction… For example, although no one saw the butler do it, the butler’s fingerprints were on the knife, the butler was caught leaving the scene, and the butler had a motive; therefore the butler did it. One cannot logically prove the butler did it; it is logically possible that someone else did. After all, no one saw the butler do it. But the pattern of evidence leads us to conclude inductively that the butler did it. We like stories because they give us material to find patterns in—stories are a way in which we learn… Pattern recognition and storytelling are so integral to our cognition that we will even find patterns and construct narratives out of perfectly random data.
(Eric Beinhocker, 2006, The Origin of Wealth)
Problems in general are often well posed in terms of language and language remains a handy tool for explaining them. But the actual process of thinking—in any discipline—is largely an unconscious affair. Language can be used to sum up some point at which one has arrived—a sort of milepost—so as to gain a fresh starting point. But if you believe that you actually use language in the solving of problems I wish that you would write to me and tell me how you go about it.
(Cormac McCarthy, 2017, The Kekulé Problem)
Post hoc ergo propter hoc is a good trick for survival. It’s a kind of approximation of causality that is often wrong, but does better than random.
Stories capture these inferences so we can share them with others. If a story solves a problem and does better than random, it gets repeated. Illogical? Superstitious? Evolution doesn’t care. If it works, it works.
Stories are serialized intuition, so perhaps it’s not an accident that LLMs also excel at telling stories? LLMs enhance narrativization, not reasoning.
If LLMs are artificial intuition, rather than artificial intelligence, perhaps we might lean into this as a superpower?
It’s not reasoning, it’s intuiting.
It’s not hallucinating, it’s dreaming.
It’s not an oracle, it’s a Second Subconscious.