
Apple says language models don’t understand?

But what if intelligence has always been an illusion? What If AI Isn’t Broken? What If We Are?

Are you human, or just a sophisticated pattern-matching machine? Before you answer too quickly, consider this: Apple’s October 2024 research paper has resurfaced, suggesting that large language models (LLMs) don’t actually reason — they just memorise.

In their GSM-Symbolic paper, Apple researchers took grade-school math problems and applied superficial tweaks, changing the names and numbers or adding irrelevant details, that shouldn't confuse a truly reasoning mind. The results? State-of-the-art LLMs flailed. They failed spectacularly on problems that only looked different but were logically identical.
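The core trick is easy to picture in code. Here is a toy sketch of the idea (my own illustration, not Apple's actual benchmark code): start from a word-problem template and swap in different names and numbers, so the surface form changes while the underlying arithmetic stays identical.

```python
import random

# A grade-school problem template. Every variant generated from it is
# logically the same problem: the answer is always a + b.
TEMPLATE = ("{name} has {a} apples. {name} buys {b} more apples. "
            "How many apples does {name} have now?")

def make_variant(rng):
    """Generate one superficially different, logically identical problem."""
    name = rng.choice(["Sophie", "Liam", "Priya", "Diego"])  # hypothetical names
    a, b = rng.randint(2, 20), rng.randint(2, 20)
    return TEMPLATE.format(name=name, a=a, b=b), a + b

rng = random.Random(0)  # seeded so the variants are reproducible
problems = [make_variant(rng) for _ in range(3)]
for text, answer in problems:
    print(text, "->", answer)
```

A reasoning system should score identically across all such variants; the paper's point is that measured accuracy dropped when only these surface details moved.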

To some, this is a death knell. “See! These models are just parrots with more RAM!”

But here’s a more unsettling possibility: What we call human intelligence might itself be nothing more than sophisticated pattern matching—the very thing we’re quick to dismiss in AI systems.

Reasoning vs. Pattern Matching


The core of Apple’s argument is that current AI doesn’t possess true abstract reasoning. It performs well when problems look familiar, but falls apart when superficial changes are introduced.

Sounds damning.

Except… is that so different from how humans operate?

If you’ve ever watched a student freeze on a math test because a word problem used the word “bicycle” instead of “car,” you know how fragile human reasoning can be. Consider how medical students often misdiagnose when symptoms present in unusual ways, or how even experienced programmers struggle when faced with the same algorithm in an unfamiliar language. We say we reason. But most of the time, we pattern match. We memorise, right?

Maybe Memorisation Is Intelligence


What is intelligence, really? Is it some magical, ethereal process? Or is it the ability to recognise and apply patterns in the real world?

When you solve a puzzle, are you reasoning? Or are you subconsciously comparing it to every puzzle you’ve seen before, as philosopher Daniel Dennett might suggest with his “competence without comprehension” framework?

LLMs like GPT-4 or Gemini are statistical machines trained on mind-bending amounts of text. They work because they've seen billions of examples. They memorise relationships. They map form to function.
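A deliberately crude caricature makes the point. This is not how a transformer works, just the simplest possible "statistical machine": a bigram model that learns by counting which word follows which, then predicts by recalling the most frequently seen pattern.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; real models see billions of examples.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training" is nothing but memorising counts of prev -> next word pairs.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the continuation seen most often after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # -> "cat", seen after "the" more often than "mat" or "fish"
```

No rule of grammar or logic is ever written down; competent-looking prediction falls out of memorised frequencies alone.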

So do we.

The Illusion of Understanding

Apple’s findings don’t prove that LLMs are broken. They prove that we expect them to be something they’re not. Or maybe something no mind—silicon or carbon-based—can be.

If a machine can answer medical questions, write novels, pass the bar exam, and generate working code faster than a human, does it matter if it “truly understands” in some ineffable sense that we can’t even properly define?

Or is the need for “understanding” just a comforting story we tell ourselves to maintain our species’ exceptionalism?

Machines Are Holding Up a Mirror


Maybe the real discomfort isn’t that AI lacks understanding. Maybe it’s that, deep down, we don’t either. Cognitive science increasingly suggests that our sense of conscious understanding might be more post-hoc explanation than causal driver of our thoughts.

When machines start doing what we do, but better, we scramble to redefine what “real” intelligence is. Maybe it’s not memory, not pattern matching, not solving problems. Maybe it’s something else… something conveniently unmeasurable.

But what if there is no something else? What if intelligence is just memory, abstraction, pattern recognition, and statistical approximation — layered beautifully enough that it feels like magic? What if your certainty that you “understand” concepts is itself just another pattern you’ve learned to recognise?

If that’s true, then AI isn’t exposing its flaws. It’s exposing ours.

Beyond the Illusion


Apple’s paper doesn’t kill AI. It just challenges our mythology. And maybe it’s time we let that mythology go.

As AI continues to evolve, perhaps the question isn’t whether machines can truly understand like humans do—but whether our own understanding has always been an elegant illusion. Maybe the real breakthrough will come not when AI becomes more like us, but when we finally recognise how much like AI we’ve been all along.