As I argued here, it would be more accurate (if less snappy) to describe AI as “powerful modelling and prediction tools based on pattern recognition across very large datasets”. It is, in other words, not a type of cognition in its own right, but – to borrow a term from Marshall McLuhan – one of the “extensions of man”: specifically a means of extending cognition itself.
I don’t think this is correct; what LLMs do is not the extension of cognition but rather the simulation and commodification of the palpable products of cognition.
The people who make LLMs have little discernible interest in cognition itself. Some of them may believe that they’re interested in cognition, but what they’re really focused on is product — that is, output, what gets spat out in words or images or sounds at the conclusion of an episode of thinking.
Seeing those products, they want to simulate them so that they can commodify them: package them and serve them up in exchange for money.
This doesn’t mean that LLMs are evil, or that it’s wrong to sell products for money; only that thinking itself is irrelevant to the whole business.
UPDATE: From a fascinating essay by Dario Amodei, the CEO of Anthropic:
Modern generative AI systems are opaque in a way that fundamentally differs from traditional software. If an ordinary software program does something—for example, a character in a video game says a line of dialogue, or my food delivery app allows me to tip my driver—it does those things because a human specifically programmed them in. Generative AI is not like that at all. When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does—why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate. As my friend and co-founder Chris Olah is fond of saying, generative AI systems are grown more than they are built—their internal mechanisms are “emergent” rather than directly designed. It’s a bit like growing a plant or a bacterial colony: we set the high-level conditions that direct and shape growth, but the exact structure which emerges is unpredictable and difficult to understand or explain. Looking inside these systems, what we see are vast matrices of billions of numbers. These are somehow computing important cognitive tasks, but exactly how they do so isn’t obvious.
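To make the “vast matrices of billions of numbers” concrete, here is a minimal illustrative sketch (mine, not Amodei’s) in Python. It is a toy next-token scorer whose behaviour is determined entirely by arrays of numbers; in a real model those numbers come from training rather than from any programmer’s hand, which is what “grown more than built” points to. All names and sizes below are invented for illustration, and the random weights stand in for learned ones.

```python
import numpy as np

# Toy sketch only: random numbers stand in for learned weights.
# In a real model nobody writes these values; they emerge from training.
rng = np.random.default_rng(0)

d_model, vocab = 64, 1000                       # toy sizes; real models use billions of parameters
W_embed = rng.normal(size=(vocab, d_model))     # stands in for learned token embeddings
W_hidden = rng.normal(size=(d_model, d_model))  # stands in for learned layer weights
W_out = rng.normal(size=(d_model, vocab))       # stands in for the output projection

def next_token_scores(token_id: int) -> np.ndarray:
    """Score every possible next token: nothing but matrix arithmetic."""
    x = W_embed[token_id]       # look up a vector of numbers for the token
    h = np.tanh(x @ W_hidden)   # transform it with more learned numbers
    return h @ W_out            # one score per vocabulary entry

print(next_token_scores(42).argmax())  # "why this word and not another?" lives in the weights
```

The point of the sketch is simply that there is no line of code saying “choose this word here”; the choice falls out of the numbers, which is why the behaviour is opaque even to the people who built the system.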
UPDATE 2: Essays by Melanie Mitchell of the Santa Fe Institute — one and two — on what LLMs do instead of thinking. The “bag of heuristics” idea is a vivid one.