Coordinated with Fredrik
The Illusion of Intelligence: Why Statistical AI Will Break Long Before the Grid Does
24 Nov 2025
There’s a dangerous seduction in the frontier of AI right now. Models grow bigger; their outputs shimmer with coherence; their corporate parents brag about breakthroughs every quarter. We’re told these systems are “intelligent,” “aligned,” “safe enough,” and increasingly marketed as replacements for human reasoning itself.

This episode of Coordinated with Fredrik steps directly into that fog and lights it on fire.

We didn’t talk about hype cycles, product announcements, investment theses, or the standard “AI will change everything” fluff. Instead, we went straight for the jugular: what does it mean to build critical infrastructure on top of systems that, at their core, are statistical ghosts?

That is not a philosophical question. It’s an engineering one—and it cuts right to the bone of every power operator, every defense agency, every financial risk desk, and every CEO foolish enough to mistake fluency for reliability.

The transcript doesn’t pull punches, and neither should the blog.

The Core Problem: AI Is Brilliant in All the Ways That Don’t Matter When Something Breaks

LLMs are extraordinary mimics. They compress the entire textual history of the species into latent space and spit out answers that sound like understanding. But sounding right and being right are not synonyms.

The models are built on a single objective: predict the next token. Everything else—tone, narrative, logic, persuasion—emerges as a side effect of statistical training. There is no causal model of the world underneath. No physics. No grounding. No internal consistency check. No understanding of error.

This design philosophy works beautifully until you hit the real world, where reality doesn’t care about your linguistic confidence or your probability distributions. And the deeper we went in this episode, the wider the foundational cracks grew.

Safety Layers Built on the Same Fragile Foundation Will Fail Together

The conversation lays out a bleak but necessary warning: the AI safety techniques the industry relies on today—RLHF, reward models, fine-tuning, interpretability tools—aren’t independent safety layers. They are all built on the same substrate, and therefore: when one fails, all fail. Simultaneously. Catastrophically.

The term from safety engineering is “correlated failure modes,” and if you work in nuclear plants, aviation, or grid stability, that phrase is synonymous with nightmares.

To put it plainly: we aren’t stacking safety layers; we’re stacking different expressions of the same statistical fragility. You can’t build a reliable defense-in-depth system when every layer is made of the same soft metal.

The Ghost in the Machine: Why LLMs Break Under Stress

One of the most unsettling parts of the discussion is how LLMs behave during long-chain reasoning. A small mathematical misstep early in a multi-step problem doesn’t simply produce a slightly wrong answer. It cascades. It compounds. And the model has no internal mechanism to realize something’s gone off the rails. A ghost predicting its own hallucinations.

This is why they fail at:

* precise multi-step math
* edge-case logic
* novel reasoning
* rare-event forecasting
* high-risk decision chains
* anything that requires causal coherence rather than textual familiarity

These limitations aren’t bugs. They aren’t “work in progress.” They are structural. The architecture is fundamentally reactive. It cannot generate or test hypotheses. It cannot ground its internal model in reality. It cannot correct itself except statistically. This makes it lethal in systems where one wrong answer isn’t an embarrassment—it’s a cascading blackout.
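To make the compounding concrete, here is a back-of-the-envelope sketch, under the simplifying assumption that each step in an n-step reasoning chain is independently correct with probability p. The chain then succeeds with probability p^n, and the per-step accuracy needed to hit an end-to-end target follows directly. The numbers are illustrative, not figures from the episode.

```python
# Back-of-the-envelope sketch: assumes each step in an n-step chain is
# independently correct with probability p, so the whole chain succeeds
# with probability p**n. Illustrative assumption only; real errors are messier.

def chain_reliability(p_step: float, n_steps: int) -> float:
    """Probability that all n independent steps are correct."""
    return p_step ** n_steps

def required_step_reliability(target: float, n_steps: int) -> float:
    """Per-step accuracy needed for the whole chain to hit `target`."""
    return target ** (1.0 / n_steps)

if __name__ == "__main__":
    n = 50  # a modest multi-step task
    print(f"{chain_reliability(0.99, n):.3f}")           # ~0.605 -- 99% per step is only ~60% end to end
    print(f"{required_step_reliability(0.999, n):.5f}")  # ~0.99998 -- three nines end to end needs ~99.998% per step
```

The model is crude (real errors are neither independent nor equally likely to be caught), but it shows why “pretty good at each step” never compounds into the march of nines described below.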
The March of Nines: Where Statistical AI Meets the Real World and Loses

In energy—and in any mission-critical domain—the holy grail is reliability. Not 90%. Not even 99%. You need the nines:

* 99.9%
* 99.99%
* 99.999%

Each additional nine is an order of magnitude more difficult than the last.

What this episode exposes is that statistical systems simply cannot make that march. It’s not an optimization problem. It’s not an engineering inefficiency. It’s not a matter of “more compute.” The long tail of rare events will always break a system that learned only from the past. LLMs are brilliant at the median but brittle at the edges. Critical infrastructure lives at the edges.

So What’s the Alternative? Systems That Don’t Just Predict—They Learn

The second half of the episode explores a radically different paradigm: agentic, experiential AI—embodied systems that learn like animals, not like autocomplete machines. Richard Sutton’s OaK (Options and Knowledge) architecture is one such blueprint. It insists on:

* learning from interaction
* forming internal goals
* developing durable skills
* building causal models over time

Biology didn’t evolve by reading a trillion documents. It evolved by experiencing the world, failing, adapting, and iterating through an adversarial outer loop that never ends. LLMs are not alive in that sense. They have no loop. No surprise. No grounding. No experiential correction. No self-generated goals.

If we ever want truly reliable artificial intelligence, we will need systems that build themselves through interaction rather than ingestion. This is not convenient for companies chasing quarterly releases—but reality doesn’t care about convenience.

Energy Systems Are Too Important to Outsource to Statistical Guesswork

If you’re coordinating a decentralized energy network, integrating millions of DERs, predicting rooftop solar volatility, or steering a virtual power plant through a storm, you don’t need mimics. You need agents that understand cause, effect, uncertainty, and consequence. You need intelligence that does not collapse under rare events or novel conditions. You need systems that don’t just summarize history—they survive it.

That’s the central tension exposed in this episode: Will the world keep chasing fast, brittle, statistically impressive tools? Or will we pay the safety tax required to build something real? It’s an engineering question with civilization-level implications.

Final Thought

The future of infrastructure—energy, mobility, logistics, defense—is about to collide with the limits of statistical AI. This episode isn’t a warning. It’s a calibration. A reframing. A demand for seriousness in a time when the world is drowning in hype.

If we’re going to build the next century, we need systems we can trust in the dark when everything else is failing. Statistical ghosts won’t get us there.

The Infographic from NotebookLM