2026-01-17 | 3 min read

The Father of Causal AI Says LLMs Won't Get Us to AGI

Turing Award winner Judea Pearl explains why scaling up language models has mathematical limits that won't lead to artificial general intelligence.

Audioscrape Team

As AI companies race to build larger and larger language models, one of the most influential figures in artificial intelligence is pumping the brakes on the AGI hype.

Judea Pearl isn’t just any skeptic. He’s the Turing Award-winning computer scientist whose work on causal reasoning fundamentally shaped how we think about AI. His “ladder of causation” concept distinguishes between association, intervention, and counterfactual reasoning—levels of intelligence that build on each other.

In a recent conversation with Sam Harris, Pearl delivered a clear verdict on whether today's LLMs are taking us toward artificial general intelligence: in his view, they are not.

This is a significant statement. While the AI industry pours billions into scaling up compute and training data, Pearl argues there are mathematical limitations that can’t be overcome by throwing more resources at the problem.

What LLMs Actually Do

Pearl's critique isn't that LLMs are useless; he calls them "tremendously impressive." But he draws a sharp distinction between what they do and what AGI would require.

On his account, ChatGPT and its competitors aren't learning causal models of the world from raw data. They're summarizing and recombining causal models that humans already created and wrote about.

This is a subtle but crucial distinction. The models are sophisticated pattern matchers and summarizers, not genuine reasoners about how the world works.

The Causation Barrier

Pearl’s ladder of causation identifies three levels:

  1. Association (seeing patterns in data)
  2. Intervention (what happens if I do X?)
  3. Counterfactuals (what would have happened if things were different?)

Each level requires capabilities that can't be derived from the level below. You can't get causation from correlation; that much isn't controversial. But Pearl argues this creates fundamental barriers.

No amount of data and compute changes these mathematical facts. Certain types of reasoning require certain types of inputs.
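
To make the barrier concrete, here is a minimal sketch in Python. It's our illustration, not anything from Pearl or the episode, and the two toy models and their numbers are invented for the example. Model A (X causes Y) and Model B (Y causes X) are tuned to produce identical observational statistics, so rung 1 cannot tell them apart; only an intervention, rung 2, separates them, and the final lines walk through a rung-3 counterfactual using Pearl's abduction-action-prediction steps.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Model A: X -> Y. X ~ N(0, 1), then Y = X + independent noise.
    def model_a(do_x=None):
        x = rng.normal(size=n) if do_x is None else np.full(n, do_x)
        y = x + rng.normal(size=n)
        return x, y

    # Model B: Y -> X, with parameters chosen so the observational joint
    # distribution (means, variances, covariance) matches Model A exactly.
    def model_b(do_x=None):
        y = rng.normal(scale=np.sqrt(2.0), size=n)
        x = 0.5 * y + rng.normal(scale=np.sqrt(0.5), size=n)
        if do_x is not None:
            x = np.full(n, do_x)  # forcing X leaves Y untouched here
        return x, y

    # Rung 1 (association): the correlations are identical, so no amount
    # of observational data can distinguish the two models.
    for name, model in (("A", model_a), ("B", model_b)):
        x, y = model()
        print(name, "corr(X, Y):", round(float(np.corrcoef(x, y)[0, 1]), 3))

    # Rung 2 (intervention): do(X = 2) shifts Y in Model A but not in
    # Model B, because in B the causal arrow runs the other way.
    for name, model in (("A", model_a), ("B", model_b)):
        _, y = model(do_x=2.0)
        print(name, "E[Y | do(X=2)]:", round(float(y.mean()), 3))

    # Rung 3 (counterfactual, Model A): abduction-action-prediction.
    # Having observed x = 1, y = 2, infer the noise u = y - x = 1, then
    # replay the model with X set to 0: Y would have been 0 + u = 1.
    x_obs, y_obs = 1.0, 2.0
    u = y_obs - x_obs
    print("A: Y had X been 0:", 0.0 + u)

Run it and both models report corr(X, Y) of about 0.707, yet E[Y | do(X=2)] lands near 2 for Model A and near 0 for Model B: the association rung alone can't tell you which world you're in.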

Even Hinton Agrees (Sort Of)

When Harris asked whether other AI pioneers disagree with this view, Pearl's response was telling.

Geoffrey Hinton—one of the “godfathers” of deep learning who recently left Google to speak freely about AI risks—has apparently expressed similar concerns about the current path not leading to AGI.

The Paradox: Still Worried About AI Risk

Here's where it gets interesting. Despite his skepticism about LLMs reaching AGI, Pearl takes long-term AI safety concerns seriously.

He doesn’t see “computational impediments” to a recursively self-improving AI that could get away from us. He just doesn’t think the current LLM paradigm will get us there.

What This Means

Pearl’s perspective suggests we might be in a local maximum. Current AI is impressively useful, but it may not be the path to the transformative (or dangerous) AGI that dominates headlines.

A breakthrough—something fundamentally new—would be required. Not just GPT-5 or GPT-6 with more parameters.

Whether that’s reassuring or unsettling depends on your perspective. It means today’s AI risks are more limited than some fear. But it also means the path to genuine machine intelligence, if it exists, might come from an unexpected direction.


Explore more podcast conversations about AI and technology in our search database.

#ai #agi #technology #llms

