Brains, Meet Your AI Cousin

A fresh report highlighted by ScienceDaily describes research suggesting that when we understand spoken language, our brain builds meaning in stages that mirror the layer-by-layer representations inside large language models. Using electrocorticography recordings during a 30‑minute narrative, the researchers found that later brain responses aligned with deeper layers of models like GPT‑2 and Llama 2, especially in language regions such as Broca’s area.
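
Analyses like this are usually built on encoding models: fit a mapping from a model layer’s hidden states to the recorded brain activity, then score how well that mapping predicts held-out data. A minimal sketch of the general recipe, using hypothetical random arrays in place of the real recordings (the study’s actual pipeline may well differ):

```python
import numpy as np

# Hypothetical data: hidden states from one LLM layer, time-aligned to the
# narrative (n_timepoints x n_features), and ECoG responses
# (n_timepoints x n_electrodes). Real studies extract these from the model
# and the recordings; here they are random placeholders.
rng = np.random.default_rng(0)
layer_states = rng.standard_normal((1000, 768))  # e.g. one GPT-2 layer
ecog = rng.standard_normal((1000, 64))           # e.g. 64 electrodes

# Hold out a stretch of the story so the fit is scored on unseen data.
train, test = slice(0, 800), slice(800, 1000)

# Ridge regression from layer features to each electrode.
alpha = 10.0
X, Y = layer_states[train], ecog[train]
W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

# "Alignment" for this layer = correlation with held-out brain activity.
pred = layer_states[test] @ W
r = [np.corrcoef(pred[:, e], ecog[test][:, e])[0, 1] for e in range(64)]
print(f"mean held-out correlation: {np.mean(r):.3f}")
```

Repeating that fit for every layer (and every time lag) is what yields a layer-by-layer alignment profile, with later brain responses lining up with deeper model layers.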

That framing may sound familiar if you’ve followed modern AI. Many high-performing systems, from speech recognition to image classification to large language models, learn by generating internal expectations and then adjusting when they’re wrong. The new work does not claim the brain is literally running the same algorithms as your phone’s AI assistant. The point is subtler: the logic of prediction and correction may be a shared strategy for making sense of noisy, incomplete information.

Because press coverage can oversimplify, it’s worth holding two ideas at once. First, prediction-driven processing is a serious, long-standing concept in neuroscience. Second, “brains are like AI” is an analogy — useful, but easy to overextend if it is treated as a one-to-one map.

Predictive processing: not new, but getting sharper

Neuroscientists have been discussing prediction-based brain function for decades. A classic formulation comes from Rao and Ballard’s influential paper, “Predictive coding in the visual cortex”, which proposed that higher brain areas send predictions down the hierarchy while lower areas send back residual errors — the surprising bits that were not explained away.
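
In computational terms, that loop is easy to caricature: a higher area holds a small set of “causes”, generates a prediction of the lower area’s activity, and the lower area returns only the residual. A minimal sketch under that reading (the weights, sizes, and learning rate here are illustrative, not Rao and Ballard’s actual model):

```python
import numpy as np

rng = np.random.default_rng(1)
G = 0.1 * rng.standard_normal((16, 4))  # top-down generative weights (illustrative)
r = np.zeros(4)                         # higher-level "causes"
x = rng.standard_normal(16)             # lower-level sensory input

for step in range(200):
    prediction = G @ r       # higher area predicts lower-level activity
    error = x - prediction   # lower area keeps only the unexplained residual
    r += 0.1 * (G.T @ error) # causes updated to explain more of the input away
    # (In the full scheme, G itself also learns, on a slower timescale.)

print(f"unexplained residual norm: {np.linalg.norm(x - G @ r):.4f}")
```

The point of the toy is the direction of traffic: predictions flow down, and only the surprising residual flows back up.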

Since then, the broader “predictive processing” family has grown to include accounts of perception, attention, learning, and action. One widely cited umbrella theory is Karl Friston’s free-energy principle, which (in simplified terms) proposes that biological systems act to minimise surprise by improving their internal models and by changing the world (or their position in it) to make sensory inputs more predictable.
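
In one standard variational formulation (simplified notation, ours rather than Friston’s full treatment), the free energy F for sensory data x, hidden causes z, a recognition density q(z), and a generative model p(x, z) decomposes as:

```latex
F = \mathbb{E}_{q(z)}\big[\log q(z) - \log p(x, z)\big]
  = \underbrace{-\log p(x)}_{\text{surprise}}
  + \underbrace{D_{\mathrm{KL}}\big[\,q(z) \,\|\, p(z \mid x)\,\big]}_{\ge\, 0}
```

Because the KL term cannot go below zero, F is an upper bound on surprise. Lowering F by revising q(z) is the “improve the internal model” route; lowering it by acting to change x is the “change the world” route described above.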

What is changing in 2026 is not that neuroscientists suddenly discovered prediction. Rather, newer experimental designs and analytical methods are making the “prediction plus error signal” account easier to test. Instead of leaving it as a broad claim about what the brain must be doing, the field can increasingly ask: where are these prediction errors represented, how fast do they update, and how do they shape what we consciously experience?

Why AI makes the comparison feel newly plausible

Modern AI has made “prediction” feel less like a metaphor and more like an engineering principle. Much of current machine learning can be described as: make a guess, measure the error, adjust the parameters, repeat — at scale.
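
Spelled out in code, that recipe is just gradient descent on a prediction error. A toy sketch fitting a line to noisy points (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + 0.5 + 0.1 * rng.standard_normal(100)  # the "world" to predict

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(500):
    guess = w * x + b             # make a guess
    error = guess - y             # measure the error
    w -= lr * np.mean(error * x)  # adjust the parameters...
    b -= lr * np.mean(error)
    # ...and repeat

print(f"learned w={w:.2f}, b={b:.2f} (true values: 3.0, 0.5)")
```

Everything from speech recognisers to large language models elaborates this loop with more parameters, richer error signals, and vastly more data, but the skeleton is the same.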

Large language models are a clear example. At their core, they are trained to predict the next token in a sequence, repeatedly, until the model becomes very good at anticipating patterns in text. The now-canonical transformer architecture behind many of these models is described in “Attention Is All You Need”, which showed how attention mechanisms can help systems focus on the most relevant parts of their input when making predictions.
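
The core mechanism from that paper fits in a few lines. A minimal sketch of scaled dot-product attention with illustrative shapes (real models add multiple heads, masking, and learned projections):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, the formula from 'Attention Is All You Need'."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)       # relevance of each position to each query
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)    # softmax over positions
    return w @ V                          # weighted mix of the values

rng = np.random.default_rng(3)
Q = rng.standard_normal((5, 8))  # queries: "what am I looking for?"
K = rng.standard_normal((5, 8))  # keys:    "what does each position offer?"
V = rng.standard_normal((5, 8))  # values:  "what gets passed along?"
print(scaled_dot_product_attention(Q, K, V).shape)  # (5, 8)
```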

That does not mean your cortex is a transformer. But it does make the conceptual overlap less speculative. If prediction and error correction can produce surprisingly flexible behaviour in silicon, it is reasonable to revisit whether nature arrived at similar high-level strategies long before we did — especially given the brain’s need to operate under severe constraints such as sparse data, noisy inputs, tight time limits, and strict energy budgets.

In that light, the “AI-like” claim is best read narrowly. It suggests the brain is not an all-purpose recorder but a resource-rational inferential engine: it spends its limited capacity guessing what matters most, then corrects course when the world disagrees.

The critical differences: brains aren’t just bigger models

Even if the prediction-error lens is useful, there is a risk in letting today’s AI dictate tomorrow’s neuroscience. A number of researchers caution that the brain–computer analogy can mislead when it is treated as literal equivalence. As a Nature commentary put it in 2023, “The brain is not a computer — and AI isn’t either”, arguing that both systems are embedded in environments and shaped by constraints that simple “information processing” metaphors often gloss over.

Three differences matter in practical terms:

  • Embodiment and survival goals. Brains evolved to keep bodies alive: maintaining temperature, finding food, avoiding danger, and navigating social worlds. AI systems, by contrast, pursue objectives we specify (or imply via training data and reward signals), and those objectives can be narrow or misaligned.
  • Learning regimes. Many AI models learn from enormous curated datasets and repeated training runs. Humans learn continuously, often from small numbers of examples, and generalise through a mix of perception, action, memory, and social teaching.
  • Hardware and energy. Biology is highly energy-efficient. Popular estimates put the brain’s operating power on the order of tens of watts; Scientific American’s overview, “The brain uses a lot of energy”, notes that even this “small” figure is a major share of the body’s budget. Meanwhile, the infrastructure supporting modern AI includes data centres with rapidly growing electricity demand, tracked in broad terms by the International Energy Agency’s reporting on data centres and AI. The comparison is not apples-to-apples, but it highlights that brains and AI solve problems under very different constraints.

So when a study suggests the brain works “more like AI than expected”, the most responsible reading is that certain computational motifs may converge — not that the brain is a biological version of today’s software stack.

What this could change in neuroscience and medicine

If prediction error is a central organising principle, it offers a unifying way to connect phenomena that can otherwise seem unrelated: illusions, attention, habit formation, and some symptoms associated with psychiatric conditions.

One often-discussed implication is that perception may be closer to “controlled hallucination” than we intuitively think: the brain generates a best-guess model of the world and uses sensory data mainly to correct that model. When this balance shifts — too much reliance on prior expectations, or too much weight on raw sensory noise — experience can change dramatically.
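
That weighting has a simple Bayesian skeleton: combine the prior expectation and the noisy observation in proportion to their precisions (inverse variances). A toy illustration with made-up numbers, not a model from the study:

```python
# Prior expectation vs. noisy sensory evidence, weighted by precision.
prior_mean, prior_var = 10.0, 1.0   # what the brain expects
obs, obs_var = 14.0, 4.0            # what the senses report (noisy)

# Kalman-style gain: how far the prediction error should move the estimate.
gain = prior_var / (prior_var + obs_var)
posterior = prior_mean + gain * (obs - prior_mean)
print(f"gain={gain:.2f}, posterior={posterior:.2f}")  # gain=0.20, posterior=10.80

# A gain near 0 means priors dominate (percepts resist the evidence);
# a gain near 1 means sensory noise passes straight into experience.
```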

Researchers have proposed links between atypical prediction-error processing and conditions such as schizophrenia and autism, though the strength and specificity of these links remain debated in the literature and should not be treated as settled clinical facts. The value of the framework, at minimum, is that it generates testable questions: if a symptom reflects mis-weighted prediction errors, then interventions might aim to recalibrate that weighting through medication, cognitive therapy, neurofeedback, or targeted stimulation. Whether this translates into better treatments is still an open question, but it provides a plausible map for where to look.

At a systems level, prediction-based accounts can also influence how we interpret brain imaging and electrophysiology. Instead of searching only for “where information is stored”, scientists can look for where predictions are generated, where errors are computed, and how those signals flow through networks over time — a more dynamic picture of brain function.

What it means for AI: inspiration, not imitation

The relationship also runs the other way. If neuroscience continues to clarify how biological prediction works — especially how humans learn robustly from limited data and remain stable under distribution shifts — AI researchers may find new design principles.

For example, brains appear to support continual learning without catastrophically overwriting old skills, a known weakness in many artificial networks. They also integrate multiple modalities (vision, sound, touch, language, and interoception) while staying grounded in action. If the new study strengthens the case that predictive updating is foundational, it adds weight to AI approaches that prioritise uncertainty, world models, and active inference — not just bigger datasets.

Still, there is a temptation to cherry-pick neuroscience for metaphors while ignoring the biological details that make those metaphors difficult to implement. A more productive stance is to treat “brain-like AI” as a research programme: borrow the problems biology has solved (efficient learning, resilience, low power, self-repair), then see whether computational analogues can be built — without assuming a neuron is simply a slower transistor.

A measured takeaway for 2026

The emerging message from this line of research is less “humans are machines” and more “prediction is a powerful way to cope with uncertainty”. The ScienceDaily-highlighted work adds momentum to a view of cognition in which the brain is constantly forecasting, checking, and refining — a process that does resemble how many AI systems learn and operate.

But the resemblance is at the level of strategy, not identity. Brains are living, embodied, energy-frugal organs shaped by evolution; AI is engineered software running on industrial hardware. The useful insight is the shared emphasis on prediction and error — a common thread that may help neuroscience explain more with fewer assumptions, and may help AI become more efficient, robust, and grounded.
