Max Tegmark
I'm quite confident that this can be done so we can reap all the benefits, but we cannot do it as quickly as this out-of-control express train we are on now is going to get to AGI.
That's why we need a little more time, I feel.
Well, what a lot of safety researchers have been saying for many years is that one of the most dangerous things you can do with an AI is, first of all, teach it to write code.
Because that's the first step towards recursive self-improvement, which can take it from AGI to much higher levels.
Okay.
Oops, we've done that.
And another high-risk thing is connecting it to the internet.
Let it go to websites, download stuff on its own, talk to people.
Oops, we've done that already.
You know, Eliezer Yudkowsky, you said you interviewed him recently, right?
Yes, yes.
He had this tweet recently, which gave me one of the best laughs in a while, where he was like, hey, people used to make fun of me and say, you're so stupid, Eliezer, because you're saying...
You have to worry.
Obviously, developers, once they get to really strong AI, the first thing you're going to do is never connect it to the internet, keep it in a box where you can really study it.
So he had written it in meme form, so it was like, that was then, and this is now:
LOL, let's make a chatbot.
And the third thing: Stuart Russell, you know, the amazing AI researcher, has argued for a while that we should never teach AI anything about humans.
Above all, we should never let it learn about human psychology and how you manipulate humans.
That's the most dangerous kind of knowledge you can give it.