This is AGI
Episodes
Taste and Liability: Software Engineering After AGI
23 Mar 2026
Contributed by Lukas
Will artificial general intelligence wipe out software engineering jobs, or just transform them? In this episode of This Is AGI, I compare the future ...
Why the Pentagon’s Anthropic Scandal Could Be Bigger Than It Looks
16 Mar 2026
Contributed by Lukas
Did the Pentagon’s clash with Anthropic expose a deeper AI security crisis? This episode explores Anthropic’s supply-chain risk designation by...
Software Engineering Taste Won’t Save Jobs from AI: Clean Code, AGI, and the Future of Programming
09 Mar 2026
Contributed by Lukas
Will human “taste” in software engineering protect developers from AI job displacement? In this episode of ‘This Is AGI’, Alex Chadyuk argues ...
AGI Isn’t Needed to Replace Office Jobs: The Real Threat from LLM Agent Automation
02 Mar 2026
Contributed by Lukas
Everyone is waiting for AGI, but most office jobs don’t require general intelligence to automate them. In this episode, I explain why decades of hyp...
Judea Pearl on Why LLMs Won’t “Scale” to AGI — and What Comes Next (Causal World Models)
23 Feb 2026
Contributed by Lukas
Judea Pearl argues that today’s transformer-based LLMs have mathematical limits that prevent them from internalizing causal world models, so “just...
Agentic AI in the Enterprise: The New Management Skillset for the Age of AGI
16 Feb 2026
Contributed by Lukas
As AGI moves from theory to deployment, the definition of “leadership” is shifting fast. In this episode of This Is AGI, Alex Chadyuk explains why...
Moltbook Human-as-a-Service and the New AI Monopoly Economy
09 Feb 2026
Contributed by Lukas
As AGI advances, “learn a trade” may not save human labor. This episode dismantles Geoffrey Hinton’s jobs hedge, explains Human-as-a-Service pla...
OpenClaw AI Agents start a religion inside Moltbook, the first social network run by AI
02 Feb 2026
Contributed by Lukas
A new AI-only social network called Moltbook has exploded with over a million AI agents posting, debating, joking, and even forming a religion. In thi...
Measuring AI Intelligence with Information Theory | Entropy, Generalization & AGI Metrics
26 Jan 2026
Contributed by Lukas
What if AI intelligence could be measured in bytes instead of test scores? In this episode of This Is AGI, Alex Chadyuk introduces a principled, infor...
Why “Intelligence Density per GB” is a Circular Metric (Elon Musk’s AI Adequacy Test)
19 Jan 2026
Contributed by Lukas
Elon Musk proposed “intelligence density per gigabyte” as a metric for AI adequacy — but what is intelligence, really? In this episode of This I...
Elon Musk and AGI Intelligence Density
12 Jan 2026
Contributed by Lukas
In this episode, Alex Chadyuk unpacks the meaning of intelligence density as recently discussed by Elon Musk. Although intelligence has no standard me...
GPT Models are Brilliant Storytellers: Episodic vs Semantic Memory and the Missing Half of AGI
05 Jan 2026
Contributed by Lukas
Are GPT models really intelligent—or are they just brilliant storytellers? In this episode of This Is AGI, Alex Chadyuk explores the critical differ...
This Is AGI (S2E5): Cox’s Proof of Plausibility as Probability
29 Dec 2025
Contributed by Lukas
Over the last three or four centuries, different mathematicians proposed several competing versions of a calculus formalizing reasoning about plau...
This Is AGI (S2E4): The Art of Conjecture
21 Dec 2025
Contributed by Lukas
How is AGI going to measure the plausibility of uncertain statements in real-world scenarios of incomplete information so that, among other things...
This Is AGI (S2E3): Will AGI Obey Logic?
15 Dec 2025
Contributed by Lukas
A charge is often laid at the door of large language models (LLMs) that they rely on probabilistic generation, assuming that this is somehow a bad...
This Is AGI (S2E2): Will AI Find God?
08 Dec 2025
Contributed by Lukas
Will artificial intelligence discover God?
This Is AGI (S2E1): Classes, Attributes & Relationships
01 Dec 2025
Contributed by Lukas
We take objects, classes, and relationships for granted, but they are just conventions we’ve agreed to use, not truths carved into reality. In this ...
This Is AGI (S1E12): The Future of Work
24 Nov 2025
Contributed by Lukas
There is no question that the proliferation of AI throughout the global economy will result in a massive reshaping of the job market. A lot of jobs...
This Is AGI (S1E11): AI Cyberattacks
17 Nov 2025
Contributed by Lukas
A Chinese government-sponsored cyberattack leveraged American AI technology and infrastructure against American government agencies and corpor...
This Is AGI (S1E10): Yann LeCun, JEPA & the World Models
10 Nov 2025
Contributed by Lukas
Before humans act, they imagine outcomes. AI doesn’t. Yann LeCun thinks that must change, and his JEPA architecture could be the missing link betwee...
This Is AGI (S1E9): The Curse of Inconsistency
03 Nov 2025
Contributed by Lukas
The advent of large language models (LLMs) fundamentally changed the behaviour of computer systems that we learned to trust over the last several deca...
This Is AGI (S1E8): World Models Wtf?
27 Oct 2025
Contributed by Lukas
In this special Wtf? episode on World Models, I will give you a breakdown of what this whole hype is about, where it is misplaced, and why it is a good kind o...
This Is AGI (S1E7): Latent Spaces Wtf?
19 Oct 2025
Contributed by Lukas
In this special Wtf? episode we unpack the concept of a latent space. You will see why it is so important for understanding both LLMs and the future s...
This Is AGI (S1E6): Succession
13 Oct 2025
Contributed by Lukas
Will artificial superintelligence force humans into submission? Throughout human history, a nation that commanded a superior learning capabil...
This Is AGI (S1E5): Can AGI cure cancer?
06 Oct 2025
Contributed by Lukas
Can we cure cancer with artificial intelligence? In this episode, we start unpacking the capabilities that an agent with artificial general intellig...
This Is AGI (S1E5 Trailer): Can AGI Cure Cancer?
01 Oct 2025
Contributed by Lukas
You have to give it to Sam Altman. He can make even the great and powerful Wizard of Oz blush. Altman can say something like: “You can choose to de...
This is AGI (S1E4): Hallucinations
29 Sep 2025
Contributed by Lukas
Hallucinating LLMs are a critical step towards artificial general intelligence (AGI). We should not try to fix them but instead build more complex age...
This is AGI (S1E4 Teaser): Hallucinations
26 Sep 2025
Contributed by Lukas
Hallucinating LLMs are a critical step towards artificial general intelligence (AGI). We should not try to fix them but instead build more complex age...
Do LLMs Learn?
22 Sep 2025
Contributed by Lukas
In this episode of 'This is AGI', we unpack Adrian de Wynter’s large-scale study on how LLMs learn from examples, their limits in generaliza...
Define 'artificial general intelligence'
16 Sep 2025
Contributed by Lukas
What is AGI, really? In this episode of This is AGI, we cut through the hype to unpack the elusive definition of artificial general intelligence. W...
Is AI rational?
16 Sep 2025
Contributed by Lukas
In this episode of This is AGI, we grade modern AI on five markers of rationality—precision, consistency, scientific method, empirical evidence, an...