
The Diary Of A CEO with Steven Bartlett

Creator of AI: We Have 2 Years Before Everything Changes! These Jobs Won't Exist in 24 Months!

18 Dec 2025

1h 40m duration
15118 words
4 speakers
Description

AI pioneer YOSHUA BENGIO, Godfather of AI, reveals the DANGERS of Agentic AI, killer robots, and cyber crime, and how we MUST build AI that won’t harm people…before it’s too late.

Professor Yoshua Bengio is a Computer Science Professor at the Université de Montréal and one of the 3 original Godfathers of AI. He is the most-cited scientist in the world on Google Scholar, a Turing Award winner, and the founder of LawZero, a non-profit organisation focused on building safe and human-aligned AI systems.

He explains:
◼️ Why agentic AI could develop goals we can’t control
◼️ How killer robots and autonomous weapons become inevitable
◼️ The hidden cyber crime and deepfake threat already unfolding
◼️ Why AI regulation is weaker than food safety laws
◼️ How losing control of AI could threaten human survival

[00:00] Why Have You Decided to Step Into the Public Eye?
[02:53] Did You Bring Dangerous Technology Into the World?
[05:23] Probabilities of Risk
[08:18] Are We Underestimating the Potential of AI?
[10:29] How Can the Average Person Understand What You're Talking About?
[13:40] Will These Systems Get Safer as They Become More Advanced?
[20:33] Why Are Tech CEOs Building Dangerous AI?
[22:47] AI Companies Are Getting Out of Control
[24:06] Attempts to Pause Advancements in AI
[27:17] Power Now Sits With AI CEOs
[35:10] Jobs Are Already Being Replaced at an Alarming Rate
[37:27] National Security Risks of AI
[43:04] Artificial General Intelligence (AGI)
[44:44] Ads
[48:34] The Risk You're Most Concerned About
[49:40] Would You Stop AI Advancements if You Could?
[54:46] Are You Hopeful?
[55:45] How Do We Bridge the Gap to the Everyday Person?
[56:55] Love for My Children Is Why I’m Raising the Alarm
[01:00:43] AI Therapy
[01:02:43] What Would You Say to the Top AI CEOs?
[01:07:31] What Do You Think About Sam Altman?
[01:09:37] Can Insurance Companies Save Us From AI?
[01:12:38] Ads
[01:16:19] What Can the Everyday Person Do About This?
[01:18:24] What Citizens Should Do to Prevent an AI Disaster
[01:20:56] Closing Statement
[01:22:51] I Have No Incentives
[01:24:32] Do You Have Any Regrets?
[01:27:32] Have You Received Pushback for Speaking Out Against AI?
[01:28:02] What Should People Do in the Future for Work?

Follow Yoshua:
LawZero - https://bit.ly/44n1sDG
Mila - https://bit.ly/4q6SJ0R
Website - https://bit.ly/4q4RqiL

You can purchase Yoshua’s book, ‘Deep Learning (Adaptive Computation and Machine Learning series)’, here: https://amzn.to/48QTrZ8

The Diary Of A CEO:
◼️ Join DOAC circle here - https://doaccircle.com/
◼️ Buy The Diary Of A CEO book here - https://smarturl.it/DOACbook
◼️ The 1% Diary is back - limited time only - https://bit.ly/3YFbJbt
◼️ The Diary Of A CEO Conversation Cards (Second Edition) - https://g2ul0.app.link/f31dsUttKKb
◼️ Get email updates - https://bit.ly/diary-of-a-ceo-yt
◼️ Follow Steven - https://g2ul0.app.link/gnGqL4IsKKb

Sponsors:
Wispr - Get 14 days of Wispr Flow for free at https://wisprflow.ai/DOAC
Pipedrive - https://pipedrive.com/CEO
Rubrik - To learn more, head to https://rubrik.com

Transcription

Chapter 1: Why does Yoshua Bengio feel compelled to speak out about AI?

0.031 - 11.049 Steven Bartlett

You're one of the three godfathers of AI, the most cited scientist on Google Scholar, but I also read that you're an introvert. It begs the question, why have you decided to step out of your introversion?

11.811 - 23.49 Yoshua Bengio

Because I have something to say. I've become more hopeful that there is a technical solution to build AI that will not harm people and could actually help us. Now, how do we get there? Well, I have to say something important here.

Chapter 2: What are the potential dangers of agentic AI?

23.47 - 31.868 Yoshua Bengio

Professor Yoshua Bengio is one of the pioneers of AI, whose groundbreaking research earned him the most prestigious honor in computer science.

31.888 - 42.171 Steven Bartlett

He's now sharing the urgent next steps that can determine the future of our world. Is it fair to say that you're one of the reasons that this software exists? Amongst others, yes. Do you have any regrets?

42.151 - 62.483 Yoshua Bengio

Yes, I should have seen this coming much earlier, but I didn't pay much attention to the potentially catastrophic risks. But my turning point was when ChatGPT came out, and also with my grandson. I realized that it wasn't clear if he would have a life 20 years from now, because we're starting to see AI systems that are resisting being shut down.

62.543 - 77.064 Yoshua Bengio

We've seen pretty serious cyber attacks and people becoming emotionally attached to their chatbot with some tragic consequences. Presumably, they're just going to get safer and safer, though. So the data shows that it's been in the other direction. It's showing bad behavior that goes against our instructions.

77.404 - 88.218 Steven Bartlett

So of all the existential risks that sit there before you on these cards, is there one that you're most concerned about in the near term? So there is a risk that doesn't get discussed enough, and it could happen pretty quickly.

Chapter 3: How do tech CEOs contribute to the dangers of AI?

88.238 - 94.778 Yoshua Bengio

And that is... But let me throw a bit of optimism into all this because there are things that can be done.

94.798 - 112.12 Steven Bartlett

So if you could speak to the top 10 CEOs of the biggest AI companies in America, what would you say to them? So I have several things I would say. Just give me 30 seconds of your time. Two things I wanted to say. The first thing is a huge thank you for listening and tuning into the show week after week.

112.2 - 131.306 Steven Bartlett

It means the world to all of us and this really is a dream that we absolutely never had and couldn't have imagined getting to this place. But secondly, it's a dream where we feel like we're only just getting started. And if you enjoy what we do here, please join the 24% of people that listen to this podcast regularly and follow us on this app. Here's a promise I'm going to make to you.

131.326 - 160.353 Steven Bartlett

I'm going to do everything in my power to make this show as good as I can now and into the future. We're going to deliver the guests that you want me to speak to and we're going to continue to keep doing all of the things you love about this show. Thank you. Professor Yoshua Bengio, you're, I hear, one of the three godfathers of AI.

Chapter 4: How could AI impact national security?

160.373 - 186.626 Steven Bartlett

I also read that you're one of the most cited scientists in the world on Google Scholar, actually the most cited scientist on Google Scholar and the first to reach a million citations. But I also read that you're an introvert. And it begs the question why an introvert would be taking the step out into the public eye to have conversations with the masses about their opinions on AI.

187.727 - 193.053 Steven Bartlett

Why have you decided to step out of your introversion into the public eye?

195.536 - 196.537 Yoshua Bengio

Because I have to.

Chapter 5: What are the risks associated with AI in the context of job replacement?

197.999 - 224.659 Yoshua Bengio

Because since ChatGPT came out, I realized that we were on a dangerous path, and I needed to speak. I needed to raise awareness about what could happen, but also to give hope that, you know, there are some paths that we could choose in order to mitigate those catastrophic risks.

225.22 - 227.323 Steven Bartlett

You spent four decades building AI.

227.405 - 227.665 Yoshua Bengio

Yes.

228.687 - 238.243 Steven Bartlett

And you said that you started to worry about the dangers after ChatGPT came out in 2023? Yes. What was it about ChatGPT that caused your mind to change or evolve?

240.707 - 266.158 Yoshua Bengio

Before ChatGPT, most of my colleagues and I thought it would take many more decades before we would have machines that actually understand language. Alan Turing, founder of the field, thought in 1950 that once we have machines that understand language, we might be doomed because they would be as intelligent as us. He wasn't quite right.

266.739 - 297.321 Yoshua Bengio

So we have machines now that understand language, but they lag in other ways, like planning. So they're not, for now, a real threat, but they could be in a few years or a decade or two. So it is that realization that we were building something that could become potentially a competitor to humans, or that could be giving huge power to whoever controls it.

297.537 - 312.32 Yoshua Bengio

And destabilizing our world, threatening our democracy. All of these scenarios suddenly came to me in the early weeks of 2023, and I realized that I had to do something, everything I could about it.

314.784 - 332.779 Steven Bartlett

Is it fair to say that you're one of the reasons that this software exists? You're amongst others. Amongst others, yes. I'm fascinated by the cognitive dissonance that emerges when you spend much of your career working on creating these technologies or understanding them and bringing them about.

333.279 - 342.793 Steven Bartlett

And then you realize at some point that there are potentially catastrophic consequences and how you kind of square the two thoughts. It is difficult.

Chapter 6: What can individuals do to mitigate AI risks?

369.271 - 398.055 Yoshua Bengio

So I wanted to feel good about all the research I had done. I was enthusiastic about the positive benefits of AI for society. So when somebody comes to you and says, oh, the sort of work you've done could be extremely destructive, there's a sort of unconscious reaction to push it away. But what happened after ChatGPT came out is really another emotion that countered this emotion.

398.695 - 425.819 Yoshua Bengio

And that other emotion was the love of my children. I realized that it wasn't clear if they would have a life 20 years from now, if they would live in a democracy 20 years from now. And having realized this, continuing on the same path was impossible.

425.839 - 438.836 Yoshua Bengio

It was unbearable, even though that meant going against the fray, against the wishes of my colleagues who would rather not hear about the dangers of what we were doing.

Chapter 7: How does public opinion influence AI regulation?

440.619 - 442 Yoshua Bengio

Unbearable. Yeah.

444.484 - 444.724 Steven Bartlett

Yeah. Yeah.

446.797 - 473.45 Yoshua Bengio

I remember one particular afternoon when I was taking care of my grandson, who was just a bit more than a year old. How could I not take this seriously? You know, our children are so vulnerable.

474.492 - 491.238 Yoshua Bengio

So you know that something bad is coming, like a fire coming toward your house, and you're not sure if it's going to pass by and leave your house untouched or if it's going to destroy your house, and you have your children in your house. Do you sit there and continue business as usual? You can't.

Chapter 8: What is the future of AI and human jobs?

492.28 - 496.767 Yoshua Bengio

You have to do anything in your power to try to mitigate the risks.

498.333 - 508.596 Steven Bartlett

Have you thought about risk in terms of probabilities? Is that how you think about risk, in terms of probabilities and timelines? Of course, but I have to say something important here.

509.707 - 535.933 Yoshua Bengio

This is a case where previous generations of scientists have talked about a notion called the precautionary principle. So what it means is that if you're doing something, say a scientific experiment, and it could turn out really, really badly, like people could die, some catastrophe could happen, then you should not do it. For the same reason,

537.635 - 567.388 Yoshua Bengio

There are experiments that scientists are not doing right now. We're not playing with the atmosphere to try to fix climate change, because we might create more harm than actually fixing the problem. We are not creating new forms of life that could destroy us all, even though it's something that is now conceivable for biologists, because the risks are so huge. But in AI...

568.972 - 593.239 Yoshua Bengio

That isn't what's currently happening. We're taking crazy risks. But the important point here is that even if it was only a 1% probability, let's say, just to give a number, even that would be unbearable, would be unacceptable. Like a 1% probability that our world disappears, that humanity disappears, or that a worldwide dictator takes over thanks to AI.

593.9 - 609.607 Yoshua Bengio

These sorts of scenarios are so catastrophic that even if it was 0.1%, it would still be unbearable. And in many polls, for example, of machine learning researchers, the people who are building these things, the numbers are much higher.

609.847 - 619.279 Yoshua Bengio

We're talking more like 10% or something of that order, which means we should be just paying a whole lot more attention to this than we currently are as a society.

620.862 - 639.865 Steven Bartlett

There have been lots of predictions over the centuries about how certain technologies or new inventions would cause some kind of existential threat to all of us. So a lot of people would rebut the risks here and say, this is just another example of change happening and people being uncertain. So they predict the worst and then everybody's fine.

641.387 - 648.236 Steven Bartlett

Why is that not a valid argument in this case, in your view? Why is that underestimating the potential of AI? There are two aspects to this.
