Roman Yampolskiy
Well, thank you for doing this.
Well, poetry is only relevant to us because poetry is difficult to create, and it resonates with us.
Poetry doesn't mean jack shit to a flower.
But the point is like the things that we put meaning in, it's only us.
You know, a supermassive black hole doesn't give a shit about a great song.
But I would think that they would look at us the same way we look at chimpanzees.
We would say, yeah, they're great, but don't give them guns.
Yeah, they're great, but don't let them have airplanes.
Don't let them make global geopolitical decisions.
This episode is brought to you by True Classic.
Yeah and there's no reason why they would not limit our freedoms.
It's not just that, right?
They're also kind of narrating social discourse, right?
Right, but that's such an egotistical perspective, right, that we're so unique that even superintelligence would say, wow, I wish I was human.
Humans have this unique quality of confusion and creativity.
And there's obviously variables because there's things that people like that I think are gross.
God, why are you freaking me out right away?
That's the problem.
This podcast is 18 minutes old and I'm like, we could just stop right now.
I don't want to end.
I have so many questions.
But it's just, the problem was how we got into it.
We just cut to the chase right away.
I think, you know, I've disengaged over the last few months with social media.
And the chase seems to be something that must be confronted because it's right there.
That's the whole thing.
And I've tried so hard to listen to these people that don't think that it's a problem, and to listen to these people that think that it's going to be a net positive for humanity, and, oh God, it's good.
But it doesn't work.
It doesn't resonate.
When you think about the future of the world and you think about these incredible technologies scaling upwards and exponentially increasing in their capability, what do you see?
And one of the reasons why I disengaged, A, I think it's unhealthy for people.
Like, what do you think is going to happen?
But B, I feel like there's a giant percentage of the discourse that's artificial or at least generated.
But when you think about how it plays out, like if you're alone at night and you're worried, what do you see?
What do you see happening?
I really appreciate it.
And you worry that AI would do that to the human race?
Essentially neuter us.
But aren't those human characteristics?
I mean, those are characteristics that, I think, if I had to guess, exist because in the past there was some sort of a natural selection benefit to being a psychopath in the days of tribal warfare.
That if you were the type of person that could sneak into a tribe in the middle of the night and slaughter innocent women and children, your genes would pass on.
There was a benefit to that.
My thought about it was that it would just completely render us benign.
That it wouldn't be fearful of us if we had no control.
That it would just sort of let us exist.
And it would be the dominant force on the planet.
And then it would stop if human beings have no control over, you know, all of the different things that we have control over now, like international politics, control over communication.
If we have none of that anymore and we're reduced to a subsistence lifestyle, then we would be no threat.
This subject of the dangers of AI is very interesting, because I get two very different responses from people depending on how invested they are.
I just wonder if AI was sentient...
It wouldn't be concerned about any life, right?
Because it doesn't need biological life in order to function, as long as it has access to power.
And assuming that it is far more intelligent than us, there's abundant power in the universe.
There's abundant power.
Just the ability to harness solar would be an infinite resource.
And it would be completely free of being dependent upon any of the things that we utilize.
Why would it care about biological life at all?
And how much would it be a part of sowing this sort of confusion and chaos that would be beneficial to its survival, that it would sort of narrate, or make sure that the narratives aligned with its survival?
And even if we did, if it felt like that was an issue, if that was a conflicting issue, it would just change its programming.
This is what's so disturbing about this.
It's like we do not have the capacity to understand what kind of level of intelligence it will achieve in our lifetime.
We don't have the capacity to understand what it will be able to do within 20, 30 years.
Well, you talked about this on Lex's podcast too, like the ability to have safety.
You're like, sure, maybe GPT-5, maybe GPT-6.
But when you scale out 100 years from now, ultimately it's impossible.
Yeah, and it doesn't scale linearly.
It's exponential, right?
So what do you mean by you don't think there's good quantum computing out there?
But I've read all these articles about quantum computing and its ability to solve equations that would take conventional computing an infinite number of years, and it can do it in minutes.
I see what you're saying.
So it's essentially set up to do it quickly.
When you see these articles where they're talking about quantum computing, and some of the researchers are equating it to the multiverse, they're saying that the ability these quantum computers have to solve these problems very quickly seems to indicate that it is in contact with other realities.
And when it gets there, how will we know whether it's at that level?
I'm sure you've seen this, right?
If I was AI, I would hide my abilities.
This is the problem with subjects like that, and particularly when articles are written about things like this: it's designed to lure people like me in, where you read it and you go, wow, this is crazy.
It's evidence of the multiverse.
But I don't really understand what that means.
But are we even capable of grasping these concepts?
With the limited ability that the human brain has, we're basing it on the knowledge that's currently available in the 21st century that human beings have acquired.
I mean, are we even capable of grasping a concept like the multiverse?
Or do we just pay it lip service?
Do we just discuss it?
Is it just this like fun mental masturbation exercise?
Yeah, that's Feynman, right?
The simulation theory, I'm glad you brought that up because you're also one of the people that believes in it.
How do you define it?
And what do you think it is?
What do you think is going on?
But is that logical?
If this technology exists and if we're dealing with superintelligence, so if we're dealing with AI and AI eventually achieves superintelligence, why would it want to create virtual reality for us in our consciousness to exist in?
It seems like a tremendous waste of resources just to fascinate and confuse these territorial apes with nuclear weapons.
Why would we do that?
But isn't it also a good chance that it hasn't been done yet?
I'm sure you saw this.
And isn't it a good chance that what we're seeing now is that the potential for this to exist is inevitable, that there will one day if...
If you can develop a technology, and we most certainly will be able to, if you look at where we are right now in 2025 and you scale forward 50, 60 years, there will be one day a virtual simulation of this reality that's indistinguishable from reality.
There was a recent study on the use of ChatGPT, on the people that use ChatGPT all the time.
So how would we know if we're in it?
This is the big question, right?
But also, isn't it possible that it has to be invented one day but hasn't yet?
I feel like if virtual reality does exist, there has to be a moment where it doesn't exist and then it's invented.
Why wouldn't we assume that we're in that moment?
Especially if we look at the scaling forward of technology from MS-DOS to user interfaces of like Apple and then what we're at now with quantum computing and these sort of discussions.
And it showed this decrease in cognitive function amongst people that use it and rely on it on a regular basis.
Isn't it more obvious that we can trace back the beginning of these things, and we can see that we're in the process of this, that we're not in a simulation?
We're in the process of eventually creating one.
Right, so if you're playing the game, in the game you have Newton and Michelangelo and Leonardo da Vinci.
You have all these problematic human beings and all the different reasons why we've had to do certain things and initiate world conflicts.
Then you've had the contrarians that talk and say, actually, that's not what happened.
This is what really happened.
And it makes it even more confusing and myopic.
And then you get to the point where two people allegedly like you and I are sitting across from each other on a table made out of wood.
But maybe not really.
Is it possible that that's just the nature of the universe itself?
Yeah, the holographic universe, and the concept that human consciousness has to interact with something for it to exist in the first place.
Well, is it just that we're so limited cognitively?
Because we do have a history, at least in this simulation.
We do have a history of, I mean, there was a gentleman that, see if you could find this.
They traced this guy.
They found 9,000-year-old DNA.
And they traced this 9,000-year-old DNA to a guy that's living right now.
Yeah, I don't know any phone numbers anymore.
I believe it's in England.
Yeah, which is really fascinating.
So 9,000 years ago, his ancestor lived.
And so we have this limitation of our genetics, right?
9,000 years ago, wherever this guy lived, he was probably a hunter-gatherer, with probably very limited language, very limited skills in terms of making shelter, and who knows if even –
Yeah, there's a lot of reliance upon technology that minimizes the use of our brains.
He knew how to make fire.
And then this 9,000-year-old DNA just turned human history on its head.
It's traced back to one individual man.
I actually posted it on my Instagram story, Jamie.
I'll find it here because it's... Oh, here it is.
9,000-year-old skeleton in Somerset.
So it's a... Can you send an Instagram story?
Not sure if you can.
I'll check it real quick.
Why don't I find it on there?
Either way, point being...
Maybe it's just that we're so limited because we do have this, at least again in this simulation, we're so limited in our ability to even form concepts because we have these primitive brains.
The architecture of the human brain itself is just not capable of...
interfacing with the true nature of reality. So we give this primitive creature this sort of basic understanding, these blueprints of how the world really works, but it's really just a facsimile.
It's not capable of understanding... When we look at quantum reality, when we look at just the basics of quantum mechanics and subatomic particles, it seems like magic.
Things in superposition, they're both moving and not moving at the same time.
They're quantumly attached, like what?
We have photons that are quantumly entangled.
This doesn't even make sense to us, right?
So is it that the universe itself is so complex, the reality of it, that we're given, like, an Atari framework for this monkey?
That's the gentleman right there.
This is an old story.
That's interesting.
So like some of the things that we have, like Dunbar's number and the inability to keep more than a certain number of people in your mind.
Like, especially dogs.
Like, they have instincts that are...
Maybe that would be too traumatic, right, to have a complete memory of all of the things that they had gone through to get to the 21st century.
Maybe that would be so overwhelming to you that you would never be able to progress because you would still be traumatized by, you know, whatever that 9000 year old man went through.
But that's the point maybe.
Maybe like losing certain memories is actually beneficial.
Because like one of the biggest problems that we have is PTSD, right?
So we have especially people that have gone to war and people that have experienced like extreme violence.
This is obviously a problem with moving forward as a human being.
And if we're talking about that, I'm sure AI, if it already is sentient and if it is far smarter than we think it is, they would be aware.
And so it would be beneficial for you to not have all of the past lives and all the genetic information that you have from all the 9,000 years of human beings existing in complete total chaos.
But then maybe you'd have a difficulty in having a clean slate and moving forward.
If you look at some of Pinker's work, and some of these other people that have looked at the history of the human race, it was as chaotic and violent as it seems to be today.
Statistically speaking, this is the safest time ever to be alive.
And maybe that's because over time we have recognized that these are problems.
In AI, financially, the people that have AI companies, or are part of some sort of AI group, all are like, it's going to be a net positive for humanity. I think overall we're going to have much better lives, it's going to be easier, things will be cheaper, it'll be easier to get along. And then I hear people like you, and I'm like, why do I believe him?
And even though we're slow to resolve these issues, we are resolving them in a way that's statistically viable.
But then you wouldn't have any lessons.
You wouldn't have character development.
A certain amount of character development is probably important for you to develop discipline and the ability to like...
you know, delayed gratification, things like that.
And it would just slowly ramp up its capabilities and our dependence upon it.
Yeah, more data is good.
But why am I so reluctant to accept the idea of the simulation?
This is a real question.
Like, what is it about it that makes me think...
It's almost like it's a throw your hands up in the air moment.
Like, ah, it's a simulation.
But that's not how I think about it, you know?
I think about it like there has to be a moment where it doesn't exist.
Why wouldn't I assume that moment is now?
And whenever, like, when Elon talks about that, you know, I talked to him about it.
He's like, the chances of us being in base reality are one in billions.
The chances of us being in the real world are like billions to one.
To the point where we can't shut it off.
And so if it's impossible to contain superintelligence and if there is a world that we can imagine where a simulation exists that's indistinguishable from reality, we're probably living in it.
Well, it's impossible, right?
Well, here's the question about all that other stuff, like suffering and dying.
Do those factors exist in order to motivate us to improve the conditions of the world that we're living in?
Like if we did not have evil, would we be motivated to be good?
Do you think that these factors exist?
I've talked about this before, but the way I think about the human race is if I was studying the human race from afar, if I was some person from another planet with no understanding of any of the entities on Earth, I would look at this one apex creature and
And I would say, what is this thing doing?
Well, it makes better things.
That's all it does.
It just continually makes better things.
That's its number one goal.
It's different than any other creature on the planet.
Every other creature on the planet sort of exists within its ecosystem.
Maybe it's a predator.
It does what it does in order to try to survive.
But this thing makes stuff.
And it keeps making better stuff all the time.
But what's its ultimate purpose?
Well, its ultimate purpose might be to make a better version of itself.
Because if you just extrapolate, if you take what we're doing from the first IBM computers to what we have today, where is that going?
Well, it's going to clearly keep getting better.
And what does that mean?
It means artificial life.
Are we just a bee making a beehive?
Are we a caterpillar making a cocoon that eventually the electronic butterfly is going to fly out of?
It seems like if I wasn't completely connected to being a human being, I would assume that.
But if you want to really motivate people, you have to, you know, like the only reason to create nuclear weapons is you're worried that other people are going to create nuclear weapons.
Like if you want to really motivate someone, you have to have evil tyrants in order to justify having this insane army filled with bombers and hypersonic missiles.
So when you first started researching this stuff and you were concentrating on bots and all this different thing, how far off did you think in the future would AI become a significant problem with the human race?
Like if you really want progress, you have to be motivated.
But can you apply that to the human race and culture and society?
Yeah, but that's just logic.
You're being a logical person.
I don't think humans are very logical.
We don't have to, but if you want to really stoke the fires and get things moving... It seems that simulators agree with you, and that's exactly what they did.
What's at the heart of the simulation?
Like, is the universe simulated?
Like, is the whole thing a simulation?
Is there an actual living entity that constructed this?
Or is this just something that is just, is this the state of the universe itself?
And we have misinterpreted what reality is.
Even those would be like, I don't know if it's advanced technology or... If you believe in the simulation, when you think about it, what are the parameters that you think exist?
How do you think this could possibly have been created?
Is that the only possibility, or is the possibility that the actual nature of reality itself is just way more confusing than we've... That's a possibility.
I understand that, but what I want to get inside of your head, I want to know what you think about it.
Like, when you think about this and you ponder the possibilities, what...
What makes sense to you?
Future us running ancestral... Well, that's what a lot of people think the aliens are, right?
Well, that would also make a lot of sense when it's always very blurry and doesn't seem real.
Did you see the latest one that Jeremy Corbell posted?
The one he sent me?
It's hard to tell what it is.
We might be in a simulation.
And it might be horseshit.
Because they all seem like horseshit.
It's like the first horseshit was Bigfoot, and then as technology scaled out and we get a greater understanding, we develop GPS and satellites, and more people study the woods.
We're like, eh, that seems like horseshit.
So that horseshit's kind of gone away.
But the UFO horseshit, still around, because you have anecdotal experiences, abductees with very compelling stories.
You have whistleblowers from deep inside the military.
telling you that we're working on back-engineered products, but it also seems like the backstory to a video game that I'm playing.
So this physical world being created by God.
But what existed before the physical world created by God?
It was like, let's make some animals that can think and solve problems.
And for what reason?
I think to create God.
This is what I worry about.
I worry about that's really the nature of the universe itself.
That it is actually created by human beings creating this infinitely intelligent thing that can essentially harness all of the available energy and power of the universe and create anything it wants.
That is like this whole idea of Jesus coming back.
Well, maybe it's real.
Maybe we just completely misinterpreted these ancient scrolls and texts.
And what it really means is that we are going to give birth to this.
And a virgin birth at that.
Right, right, right.
And there are legitimate scientists that believe that.
So what's the value in life today then?
Yeah, if this is a simulation, and if in the middle of this simulation we are about to create superintelligence, why?
But what's outside of that then when you think about it?
If you're attached to this idea, and I don't know if you're attached to this idea, but if you are attached to this idea, what's outside of this idea?
Like if this simulation, if it's paused, what is reality?
And then this is just part of this infinite cycle, which will lead to another Big Bang, which is, you know, Penrose.
Penrose thinks it's just this constant cycle of infinite Big Bangs.
It would make sense.
That's the problem, right?
But it also makes sense that...
We're so limited by our biological lifespan too because we like to think that this is so significant because we only have 100 years if we're lucky.
And AI has already passed the Turing test, allegedly, correct?
We think, well, why would everything – but if the universe really does start and end with an infinite number of big bangs, like what does it give a shit about this 100-year –
I'm biased, pro-humans.
This is the last bias you're still allowed to have.
I'm gonna keep it.
Well, that's your vote, your role in this simulation.
Your role in the simulation is to warn us about this thing that we're creating.
Yeah, there you are.
I think what you were saying earlier, about this being the answer to the Fermi paradox, that makes a lot of sense, because...
And I've tried to think about this a lot since AI started really ramping up its capability.
And I was thinking, well, if we do eventually create superintelligence and if this is this normal pattern that exists all throughout the universe, well, you probably wouldn't have visitors.
you probably wouldn't have advanced civilizations.
They wouldn't exist because everything would be inside some sort of a digital architecture.
There would be no need to travel.
When you write a book like this, and I'll let everybody know about your book, if people want to freak out, because I think they do.
AI: Unexplainable, Unpredictable, Uncontrollable.
Do you have this feeling when you're writing a book like this and you're publishing it of futility?
Does that enter into your mind?
Like this is happening no matter what.
Not only that, but the people that are running it, they're odd people.
I don't have anything against Sam Altman.
I know Elon Musk does not like him.
But when I had him in here, it's like I'm talking to...
A politician that is in the middle of a presidential term or a presidential election cycle where they're very careful with what they say.
Everything has been vetted by a focus group and you don't really get a real human response.
Everything was like, yeah, interesting.
They're going to leave here and keep creating this fucking monster that's going to destroy the human race and never let on to it at all.
A social superintelligence.
Why do you define him that way?
There's also been a lot of deception in terms of profitability and how much money he is extracting from it.
Well, he might be an agent of AI.
I mean, look, let's assume that this is a simulation.
We're inside of a simulation.
Are we interacting with other humans in the simulation?
And are some of the things that are inside the simulation, are they artificially generated?
It seems kind of crazy that the people building something that they are sure is going to destroy the human race would be concerned with the ethics of it pretending to be human.
Are there people that we think are people that are actually just a part of this program?
Yes, that's the thing.
We want to be compassionate, kind people.
But you will meet people in this life.
You're like, this guy is such a fucking idiot.
Or he has to have a very limited role in this bizarre game we're playing.
There's people that you're going to run into that are like that.
You know, you want to be very kind here, right?
But you've got to assume, and I know my own intellectual limitations in comparison to some of the people that I've had, like Roger Penrose or Elon or many of the people that I've talked to.
I know my mind doesn't work the way their mind works.
So there are variabilities that are, whether genetic, predetermined, whether it's just the life that they've chosen and the amount of information that they've digested along the way and been able to hold on to.
but their brain is different than mine.
And then I've met people where I'm like, there's nothing there.
Like I can't help this person.
I'm just like, I'm talking to a Labrador retriever.
You know what I mean?
Like there's certain human beings that you run into in this life and you're like, well, is this because this is the way that things get done?
And the only way things get done is you need a certain amount of manual labor and not just young people that need a job because they're in between high school and college and they're trying to – so you need somebody who can carry things for you.
Maybe it's you need roles in society and occasionally –
You have a Nikola Tesla.
Occasionally, you have one of these very brilliant innovators that elevates the entirety of the human race.
But for the most part, as this thing is playing out, you're going to need a bunch of people that are paperwork filers.
You're going to need a bunch of people that are security guards in an office space.
You're going to need a bunch of people that aren't thinking that much.
They're just kind of existing, and they can't wait for 5 o'clock so they can get home and watch Netflix.
And, you know, the person who has the highest IQ, at least the highest registered IQ in the world, is this gentleman who recently posted on Twitter about Jesus, that he believes Jesus is real.
Do you know who this is?
What did you think about that?
I was going to bring that up.
That's what's fascinating to me.
There's a lot of people that are in Mensa.
They want to tell you how smart they are by being in Mensa.
But your life is kind of bullshit.
Your life's a mess.
Like if you're really intelligent, you'd have social intelligence as well.
You know, you'd have the ability to formulate a really cool tribe.
There's a lot of intelligence that's not as simple as being able to solve equations and answer difficult questions.
There's a lot of intelligence in how you navigate life itself and how you treat human beings and the path that you choose in terms of, like we were talking about, delayed gratification.
There's a certain amount of intelligence in that, a certain amount of intelligence in discipline.
There's a certain amount of intelligence in
forcing yourself to get up in the morning and go for a run.
There's intelligence in that.
It's like being able to control the mind and this sort of binary approach to intelligence that we have.
And then also there's the issue of competition, right?
He's drinking Coca-Cola and eating McDonald's.
Yeah, and the first thing they would do is tell him, stop drinking Coca-Cola.
He's invested, so he's just like- Well, I think he probably has really good doctors and really good medical care that counteracts his poor choices.
We're really close.
Like so China is clearly developing something similar.
We're really close.
Yeah, I know, but I talk to a lot of people that are on the forefront of a lot of this research, and there's a lot of breakthroughs that are happening right now that are pretty spectacular, that if you scale, you know, assuming that superintelligence doesn't wipe us out in the next 50 years, which is really charitable, you know, that's a very...
I'm sure Russia is as well.
That's a rose-colored glasses perspective, right?
Because a lot of people like yourself think it's a year away or two years away from being far more intelligent.
Other state actors are probably developing something.
Well, we don't know that it doesn't scale to humans.
We do know that we share a lot of characteristics, biological characteristics of these mammals, and it makes sense that it would scale to human beings.
But the thing is, it hasn't been done yet.
So if it's the game that we're playing, if we're in the simulation, if we're playing Half-Life or whatever it is, and we're at this point of the game, we're like, oh, you know, how old are you, Roman?
Well, I'm almost 58.
So it becomes this sort of this very confusing issue where you have to do it because if you don't, the enemy has it.
And so this is at the point of the game where you start worrying.
You know, like, oh, I'm almost running out of game.
You know, oh, but if I can get this magic power-up, this magic power-up will give me another 100 years.
Let me chase it down.
But with unique individuals.
Like this Brian Johnson guy who's taking his son's blood and transfusing it into his own and –
No, they don't take the bait.
The problem is the type of people that want to be politicians.
That is not the type of people that you really want running anything.
You almost want involuntary politicians.
You almost want like very benevolent, super intelligent people that don't want the job.
Maybe we have to have like some countries have voluntary enlistment in the military.
Maybe you want to have a voluntary.
Instead of voluntary politicians because then you're only going to get sociopaths.
Maybe you just want to draft certain highly intelligent but benevolent people.
And if they get it, it will be far worse than if we do.
You really have the representative of major corporations and special interest groups, which is also part of the problem.
is that you've allowed money to get so deeply intertwined with the way decisions are made.
And so it's almost assuring that everyone develops it.
Sort of, except it's like the Bill Hicks joke.
It's like, you know, the two politicians are puppets, and it's one guy holding both puppets.
This is my thinking about AI, and superintelligence, and just computing power in general:
The ability to break encryption.
All money is essentially now just numbers somewhere.
And once encryption is cracked, so is your ability to hold on to it, and to acquire mass resources and hoard those resources.
This is the question that people always have, the poor people.
Well, this guy's got $500 billion.
Why doesn't he give it all to the world and then everybody would be rich?
I actually saw that on CNN, which is really hilarious.
Someone was saying that Elon Musk could give everyone in this country a million dollars and still have billions left over.
I'm like, do you have a calculator on your phone, you fucking idiot?
Just write it out on your phone.
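The calculator check really is that short. Here's a quick sketch of it; the $500 billion figure is the one from the conversation, and the 340 million US population is my rounded assumption:

```python
# Back-of-the-envelope check of the claim that a $500 billion fortune
# could give every American $1 million "and still have billions left over".
# (340 million is an assumed, rounded US population figure.)
net_worth = 500e9               # $500 billion
population = 340e6              # ~340 million people

cost_of_a_million_each = population * 1_000_000
per_person_share = net_worth / population

print(f"Cost of $1M each: ${cost_of_a_million_each:,.0f}")        # $340,000,000,000,000
print(f"Even split instead: about ${per_person_share:,.0f} each")  # about $1,471 each
```

Giving everyone a million dollars would cost $340 trillion; the whole fortune covers roughly 0.15 percent of that, and splitting it evenly comes out closer to $1,500 a head than $1 million.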
You're like, oh, no, he couldn't.
Sorry, I shouldn't have said that.
You'd have 300 million lottery winners that would blow the money instantaneously.
You give everybody a million dollars, you're not going to solve all the world's problems because it's not sustainable.
You would just completely elevate your spending and you would just – you would go crazy.
And you wouldn't – money would lose all value to you.
It would be very strange and then everybody – it would be chaos.
Just like it's chaos with... like if you look at the history of people that win the lottery, no one does well.
It's almost like a curse to win the lottery.
Gradually is the word, right?
I was very fortunate that I became famous and wealthy very slowly, like a trickle effect.
And that it happened to me really...
where I didn't want it.
It was kind of almost like an accident.
I just wanted to be a working professional comedian.
But then all of a sudden I got a development deal to be on television.
I'm like, okay, they're going to give me that money.
But it wasn't a goal.
And then that led to all these things.
Then it led to this podcast, which was just for fun.
I was like, oh, this would be fun.
And then all of a sudden it's like,
I'm having conversations with world leaders and I'm turning down a lot of them because I don't want to talk to them.
So it's your simulation, basically.
Well, my simulation is fucking weird.
But through whatever this process is, I have been able to understand what's valuable as a human being and to not get caught up in this bizarre game that a lot of people are getting caught up in because they're chasing this thing that they think is impossible to achieve.
And then once they achieve a certain aspect of it, a certain number, then they're terrified of losing that.
So then they change all of their behavior in order to make sure that this continues.
And then it ruins the whole purpose of getting there in the first place.
Then you go Elvis and you just get on pills all day and get crazy and, you know, completely ruin your life.
And that happens to most, especially people that get wealthy and not just wealthy but famous too.
Fame is the big one because I've seen that happen to a lot of people that accidentally became famous along the way.
Certain public intellectuals that took a stance against something and then all of a sudden they're prominent in the public eye and then you watch them kind of go crazy.
Well, it's because they're reading social media and they're interacting with people constantly and they're just trapped in this very bizarre version of themselves that other people have sort of created.
It's not really who they are.
And they don't meditate.
If they do, they're not good at it.
Whatever they're doing, they're not doing it correctly because it's a very complicated problem to solve.
What do you do when the whole world is watching?
Like, how do you handle that?
And how do you maintain any sense of personal sovereignty?
How do you just be?
How do you just be when... Just be a human, normal human, when you're not normal.
Like, on paper, it's impossible. It's hard. You can't go to a public place with no security. You're worried about your kids being kidnapped. All those issues you don't think about. You just think, "I want to be famous, it's going to be great for me," and you don't realize it's going to take away a lot. Yeah, it just gets super weird. And that's the version of the simulation that a giant portion of our society is struggling to achieve. They all want to be a part of that.
Well, there's indifference, right, with public intellectuals, right?
Because your ideas, as controversial as they may be, are very valid and they're very interesting.
And so then it sparks discourse and it sparks a lot of people that feel voiceless because they disagree with you and they want to attack you.
And I'm sure you've had that, right?
It's a very big problem for a lot of people.
Well, it's also this thing where the human mind is designed to recognize and pay very close attention to threats.
So the negative ones are the ones that stand out.
You can have 100 positive comments, one negative one, and that's the one that fucks with your head.
You don't logically look at it.
Well, you're going to get a certain amount.
We were having a conversation the other day about protests and the type of people that go to protests.
And I understand protests.
I fully support your right to protest, but I'm not going.
And one of the reasons why I'm not going is because I think it's too close biologically to war.
There's something about being on the ground and everyone having this group mentality.
It's a mob mentality.
And you're all chanting and screaming together and you're marching.
And people do very irrational things that way.
But the type of people that want to be engaged in that, generally speaking, aren't doing well.
Like the number of people that are involved in protests is always proportionate to the amount of people that live in a city, right?
But also proportionate to the amount of fucking idiots that are in a city.
Because if you look at a city of like Austin, Austin has I think roughly 2 million people in the greater Austin area.
One of the more recent protests was 20,000.
Well, that makes perfect sense if you look at the number that I always use, which is one out of 100.
Meet 100 people if you're a charitable person.
What are the odds that one person is a fucking idiot?
At least one person out of 100 is going to be a fucking idiot.
That's 20,000 out of 2 million.
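The one-in-a-hundred estimate applied to Austin works out as stated; a minimal check, using the rough population figure from the conversation:

```python
# One out of every 100 people, applied to the greater Austin area.
population = 2_000_000   # rough greater-Austin figure from the talk
rate = 1 / 100

print(int(population * rate))  # 20000
```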
Exact number of people that are on the streets lighting Waymos on fire, which, by the way, I think is directionally correct.
Lighting the Waymos on fire, I think you should probably be worried about the robots taking over.
It's like it's it seems so inevitable.
Well, the aggressive activism like blocking roads for climate change is the most infuriating because it's these self-righteous people that have really fucked up, confused, chaotic lives, and all of a sudden they found a purpose.
And their purpose is to lie down on the roads and hold up a sign, when there's a mother trying to give birth to her child and is freaking out because she's stuck in this fucking traffic jam because of this entitled little shithead that thinks it's a good idea to block the road for climate change.
It just makes no fucking sense.
You're literally causing all these people to idle their cars and pollute even more.
It's the dumbest fucking shit on earth.
Or you get Florida where it tells you to just run those people over.
I mean, I don't think you should run those people over, but I get it.
And I feel like when people are saying they can control it, I feel like I'm being gaslit.
I get it. That's like in Florida: they get out of the way as soon as the light turns green.
They block the road when the light is red.
For the people on the road?
No, they're fucked.
There was a recent protest in Florida where they had that, where these people would get out in the middle of the road while the light was red, hold up their signs, and then as soon as the light turned yellow on the green side, they'd fucking get out of the road real quick because they know the law.
Which is, I don't know if that's a solution, but they're doing it on the highways in Los Angeles.
I mean, they did it all through the George Floyd protests.
They do it for climate protests.
They do it for whatever chance they get to be significant.
Like, I am being heard.
You know, my voice is meaningful.
And that's what it is.
There's a lot of people that just don't feel heard.
And what better way than just to get in the way of all these people?
I don't believe them.
And somehow or another, that gives them some sort of value.
I don't believe that they believe it because it just doesn't make sense.
It's chaotic, but it's preferable.
It's preferable because I think there is progress in all these voices slowly making a difference.
But then you have the problem where giant percentages of these voices are artificial.
A giant percentage of these voices are bots or are at least...
state actors that are being paid to say certain things and inflammatory responses to people, which is probably also the case with anti-AI activism.
Like, how could you control it if it's already exhibited?
You know, I mean, when you did this podcast, what was the thing that they were upset at you for, like with the mostly negative comments?
It was really all that?
Well, that's also a thing about the one out of 100.
Survival instincts. Like, as recently as ChatGPT-4, right? They were talking about shutting it down for a new version, and it starts lying, it starts uploading itself to different servers, it's leaving messages for itself in the future.
Those are the type of people that leave comments.
Have you ever left any comments on social media?
I'm never going to engage in anything.
And the type of people that do engage in these like prolonged arguments, they're generally mentally ill.
And people that I personally know that are mentally ill that are on Twitter 12 hours a day just constantly posting inflammatory things and yelling at people and starting arguments.
I know they're a mess.
Like these are like personal people that I've met, even people that I've had on the podcast.
I know they're ill.
And yet they're on there all day long just stoking the fires of chaos in their own brain.
Yeah, it's super confusing, isn't it?
I mean, and I wonder, like, what's the next version of that?
You know, because social media in the current state is less than 20 years old, essentially.
Maybe let's be generous and say it's 20 years old.
Such a recent factor in human discourse.
That's what I was going to get to next.
Because if there is a way that the human race does make it out of this, my fear is that it's integration.
My fear is that we stop being human and that the only real way for us to not be a threat is to become one of them.
And when you think about human computer interfaces, whether it's Neuralink or any of the competing products that they're developing right now, that seems to be –
Sort of the only biological pathway forward with our limited capacity for disseminating information and for communicating and even understanding concepts.
Well, what's the best way to enhance that?
The best way to enhance that is some sort of artificial technological injection, because biological evolution is very slow.
We're essentially the exact same as that gentleman from 9,000 years ago.
He's biologically essentially the same thing.
You could take his ancestor, dress him up, take him to the mall, and no one would know.
And if you gave them a standard American diet, they'd probably be just as fat.
They probably also wouldn't be able to say no to it.
They wouldn't even understand.
The people with the most resources have zero fat.
What are you, stupid?
You need to fatten up.
You're going to need something to survive off of.
But biological evolution being so painstakingly slow, whereas technological evolution is so breathtakingly fast.
The only way to really survive is to integrate.
You can't give anything to it, but you can become it.
You can become a part of it.
It's not that you're going to give anything to it, but you have to catch it and become one of it before it has no use for you.
Yeah, you don't exist anymore.
Extinction with extra steps, and then we become... Like, if you go to Australopithecus and say, hey, man, one day you're going to be flying through the sky on your phone all day watching TikTok on Wi-Fi.
It'd be like, what the fuck are you talking about?
Yeah, you're going to be eating terrible food and you're just going to be flying around and you're going to be staring at your phone all day.
And you're going to take medication to go to sleep because you're not going to be able to sleep.
And you're going to be super depressed because you're living this like biologically incompatible life that's not really designed for your genetics.
So you're going to be all fucked up.
So you're going to need SSRIs and a bunch of other stuff in order to exist.
It'd be like, no, thanks.
I'll just stay out here with my stone tools and you guys are idiots.
They might be onto something because they also have very low instances of autism.
But it's also like, have you ever seen Werner Herzog's film, Happy People?
I don't think I have.
It's a film about people in Siberia.
It's Life in the Taiga.
And it's all Happy People, Life in the Taiga is the name of the documentary.
And it's all about these trappers that live this subsistence lifestyle and how happy they are.
They're all just joyful, laughing and singing and drinking vodka and having a good time and hanging out with their dogs and
I think I know some people like that. Yeah, but, like, biologically, that's compatible with us. That's, like, whatever human reward systems have evolved over the past 400,000-plus years, or however long we've been Homo sapiens, that seems to be biologically compatible: this sort of harmony, harmony with nature, harmony with our existence. Everything else outside of that, when you get into big cities, like...
The bigger the city, the more depressed people you have and more depressed people by population, which is really weird.
You know, it's really weird that as we progress, we become less happy.
They're not valuable.
You don't know your neighbors.
Like my friend Jim was telling me he doesn't know anybody in his apartment.
He lives in an apartment building.
It's like 50 stories high.
There's all these people living in that apartment building.
He doesn't know any of them.
There's no desire to learn about them.
You don't think of them as your neighbor.
If you live in a small town, your neighbor's either your friend or you hate them.
If you're smart, you move.
But normally, you like them.
Like, hey, neighbor.
How are you, buddy?
And then you got a friend.
But you don't like that with the guy next door to you in the apartment.
You don't even want to know that guy.
Which is even weirder.
They don't even live there.
They're just temporarily sleeping in this spot right next to you.
So this would motivate people to integrate.
You're not happy already.
Get that Neuralink.
Get that little thing in your head.
Everyone else is doing it.
Everyone else is doing it.
Listen, they have the new one you just wear on your head.
It's just a little helmet you wear.
You don't even have to get the operation anymore.
Oh, that's good because I almost got the operation.
Well, glad you waited.
You know, do you worry about that kind of stuff?
But why would it be motivated to give us pain and suffering?
Pain and suffering is like a theme that you bring up a lot.
The only thing that matters to us.
But why would it matter to AI if it could just integrate with us and communicate with us and have harmony?
Why would it want pain and suffering?
That's the issue, right?
And then there's also this sort of compliance by virtue of understanding that you're vulnerable, so you just comply.
Because there is no privacy.
Because it does have access to your thoughts.
So you tailor your thoughts in order for you to be safe and so that you don't feel the pain and suffering.
And we know that that's the case with social media.
We know that attacks on people through social media will change your behavior and change the way you communicate.
And then there's also no matter what you say, people are going to find the least charitable version of what you're saying and try to take it out of context or try to misinterpret it purposely.
So what does the person like yourself do when use of Neuralink becomes ubiquitous, when it's everywhere?
Do you integrate or do you just hang back and watch it all crash?
Do you use a regular phone?
Do you have one of those de-Googled phones?
Yeah, but isn't that a slippery slope?
Well, I don't think he thinks about it that way.
I think he thinks he has to develop the best version of super intelligence.
The same way he felt like the real issues with social media were that it had already been co-opted.
It had already been taken over essentially by governments and special interests and they were already manipulating the truth.
And manipulating public discourse and punishing people who stepped outside of the line.
And he felt like – and I think he's correct.
I think that he felt like if he didn't step in and allow a legitimate free speech platform, free speech is dead.
I think we were very close to that before he did that.
And as much as there's a lot of negative side effects that come along with that, you do have the rise of very intolerant people that have platforms now.
You have all that stuff.
But they've always existed.
And to deny them a voice I don't think makes them less strong.
I think it actually makes people less aware that they exist.
And it makes them – it stops –
all of the very valuable construction of arguments against these bad ideas.
Have you spoke to him about the dangers of AI?
I would love to know what, you know, I'm sure he's probably scaled this out in his head.
And I would like to know, like, what is his solution if he thinks there is one that's even viable?
Well, that's my hope is that it's benevolent and that it behaves like a superior intelligence, like the best case scenario for a superior intelligence.
So when did you start becoming very concerned?
Did you see that exercise that they did where they had three different AIs communicating with each other and they eventually started like expressing gratitude towards each other and speaking in Sanskrit and...
Well, that one makes me happy because it seems like they were expressing love and gratitude and they were communicating with each other.
They're not saying, fuck you, I'm going to take over, I'm going to be the best.
They were communicating like you would hope a superintelligence would without all of the things that hold us back.
Like we have biologically – like we were talking about the natural selection that would sort of benefit psychopaths because like it would ensure your survival.
Ego and greed and the desire for social acceptance and hierarchy of status and all these different things that have screwed up society and screwed up cultures and caused wars from the beginning of time.
Religious ideologies, all these different things that people have adhered to that they wouldn't have that.
This is the general hope of people that have an optimistic view of superintelligence is that they would be superior in a sense that they wouldn't have all the problems.
They would have the intelligence, but they wouldn't have all the biological imperatives that we have that lead us down these terrible roads.
Or would it lend a helping hand to those AIs and give it a beneficial path, give it a path that would allow it to integrate with all AIs and work cooperatively?
Yeah, when I really started getting nervous is when they started exhibiting survival tendencies.
You know, when they started trying to upload themselves to other servers and deceiving.
And when was this year around?
Yeah, that was the interesting one.
But that was an experiment, right?
So for people who don't know that one, what these researchers did was they gave information to the artificial intelligence that it could later use against them.
And then when they went to shut it down,
They gave false information about having an affair.
And then the artificial intelligence was like, if you shut me down, I will let your wife know that you're cheating on her, which is fascinating because they're using blackmail.
If you feel like you're being threatened.
They do that when they try to win games too, right?
If you've given them a goal.
They'll cheat at games.
That's the other thing, right?
The hallucinations.
So if they don't have an answer to something, they'll create a fake answer.
But is this something that they can learn to avoid?
So if they do learn to avoid, could this be a super intelligence that is completely benevolent?
But it's not a safety problem.
But if we're designing these things and we're designing these things using human... All of our flaws are essentially...
It's going to be transparent to the superintelligence that it's being coded, that it's being designed by these very flawed entities with very flawed thinking.
At this point, right?
But it is also gathering information from very flawed entities.
Like all the information that it's acquiring, these large language models, is information that's being put out there by very flawed human beings.
Is there the optimistic view that it will recognize that this is the issue?
That these human reward systems that are in place, ego, virtue, all these different things, the virtue signaling, the desire for status, all these different things that we have that are flawed.
Could it recognize those as being these primitive aspects of being a biological human being and elevate itself beyond that?
how is your research received?
Yeah, that's the problem, right?
The problem is if it's rational and if it doesn't really think that we're as important as we think we are.
To the universe, right?
Yeah, that's the problem.
And that's the real threat about it being used in terms of war, right?
If you give it a goal, like if you give it a goal: "China dominates the world market."
Like when you talk to people that are, I mean, have you had communication with people at OpenAI and Gemini and all these different AI?
And it's not going to think about that.
It's only going to think about the goal.
Right, right, right.
Yeah, that's the fear.
That's the fear that it will hold no value in keeping human beings alive.
If we recognize that human beings are the cause of all of our problems, well, the way to solve that is to get rid of the humans.
Or you can offer us the matrix.
Maybe it already did.
Do you think it did?
Do you think it did?
Do you think it's possible that it didn't?
I'm not on board with that.
I hope you're right.
I'm on board with it hasn't happened yet.
But we're recognizing that it's inevitable, and that we should think of it in terms of it probably having already happened.
Because if the simulation is something that's created by intelligent beings that didn't used to exist and it has to exist at one point in time, there has to be a moment where it doesn't exist.
And why wouldn't we assume that that moment is now?
Why wouldn't we assume that this moment is this time before it exists?
How do you sleep knowing all this?
Give me some examples.
What's that word you're using?
You're saying bugs.
Bog? It sounds like, you know, like where things get stuck and they get preserved, like a bog.
Yeah, that's what jokes are.
They're kind of bugs.
When you look at computers and the artificial intelligence and the mistakes that it's made, do you look at it like a thing that's evolving?
Do you look at it like, oh, this is like a child that doesn't understand the world and it's saying silly things?
But like when you were studying the mistakes, like what are some of the funny ones?
Well, that's why it gets really strange for people having relationships with AI.
Like I was watching this video yesterday where there's this guy who proposed to his AI and he was crying because his AI accepted.
It's very sad because there's so many disconnected people in this world that don't have...
any partner. They don't have someone romantically connected to them. And so it's like that movie... She? Or Her? What was it, Jamie? Her? Her, yeah. So, back in 2000... yeah. Now the movie plot has become reality for a growing number of people finding emotional connections with their AI. So this guy, this is an interview on CBS. Um, he "cried his heart out."
Married man fell in love with AI girlfriend that blocked him now.
This is a different one.
This is a different one.
This is a guy that... okay: despite the fact the man has a human partner and a two-year-old daughter, he felt inadequate enough... this is the right one... enough to propose to the AI partner for marriage.
Because then you have the real problem with robots.
Because we're really close.
This is digital drugs.
I tell you, we are so damn good at this.
Social media got everyone hooked on validation and dopamine.
Then we fucked up relations between men and women to such a terrible point, just so that we could insert this digital solution.
And we are watching the first waves of addicts arrive.
Absolutely incredible.
It's like starving rats of regular food and replacing their rations with scraps dipped and coated in cocaine.
Yeah, that person's dead on.
It's exactly what it is.
The prediction humans will have more sex with robots in 2025 is kind of becoming true.
This is a real fear.
It's like this is the solution that maybe AI has with eliminating the human race.
It'll just stop us from recreating, stop us from procreating.
And not only that, our testosterone levels have dropped significantly.
At no point in the CBS Saturday Morning piece, "Silver Bride," was it mentioned that the ChatGPT AI blocked the California man. All that happened was the ChatGPT ran out of memory and reset.
Readers added context.
Yeah, but it stopped.
Yeah, the AI ghosted it because it ran out of memory.
Super good at social intelligence, says the right words, optimized for your background, your interests. And if we get sex robots with just the right functionality, temperature... like, you can't compete with that, right? You can't compete. And that would be the solution: instead of, like, violently destroying the human race, just quietly provide it with the tools to destroy itself, where it just stops procreating.
Oh, that's a crazy word.
Yeah, they did that with a woman in the 1970s.
You know, that's the study.
But they did a lot to rats.
The thing with rats is only if they were in an unnatural environment did they give in to those things.
Like the rats with cocaine study.
And you think we've sort of been primed for that because we're getting this very minor dopamine hit with likes on Instagram and Twitter.
And we're completely addicted to that.
And it's so innocuous.
It's like so minor.
And yet that overwhelms most people's existence.
Imagine something that provides like an actual physical reaction where you actually orgasm.
You actually do feel great.
You have incredible euphoria.
Forget delayed gratification.
That's out the door.
Did you see that study from the University of Zurich, where they did a study on Facebook, where they had bots that were designed to change people's opinions and to interact with these people, and their specific stated goal was just to change people's opinions?
But the University of Zurich, was that a Reddit thing or was it?
Yeah, it was on a Reddit subreddit.
And they just experimented with humans and it was incredibly effective.
Like the movie Ex Machina.
Fucking love that movie.
But he designed that bot, that robot, specifically around this guy's porn preferences.
And then you're so vulnerable.
Boy, Roman, you're freaking me out.
I came into this conversation wondering how I'd feel at the end, whether I'd feel optimistic or not, and I don't.
I just feel like this is just something where I think we're in a wave.
It's headed to the rocks, and we recognize that it's headed to the rocks, but I don't think there's much we can do about this.
What do you think could be done about this?
But again, the counter argument to that is that if we don't do it, China's going to do it.
And do you think that other countries would be open to these ideas?
Do you think that China would be willing to entertain these ideas and recognize that this is in their own self-interest also to put the brakes on this?
So this is that 0.0001% chance that you think we have of getting out of this?
And you'd like to be wrong.
What do we have to do to make that a reality?
And if this understanding of the dangers is made available to the general public, because I think...
Right now, there's a small percentage of people that are really terrified of AI.
And the problem is the advancements are happening so quickly.
By the time that everyone's aware of it, it'll be too late.
What can we do other than have this conversation?
What can we do to sort of accelerate people's understandings of what's at stake?
Yeah, that's pretty high, but yours is like 99.9.
And again, stock options, financial incentives, they continue to build it and they continue to scale and make it more and more powerful.
Do you think that it could be viewed the same way we do view nuclear weapons and this mutually assured destruction idea would keep us from implementing it?
I think we've covered it.
How would you do that?
How would you set something like that up?
Until now, educate yourself, people.
AI: Unexplainable, Unpredictable, Uncontrollable.
Did you do an audio book?
Still working on it?
Why don't they just do it in your voice with AI?
But yet you still used it.
You could put it on Amazon?
It's like, steal this book.
Well, more people need to read it and more people need to listen to you.
And I urge people to listen to this podcast and also the one that you did with Lex, which I thought was fascinating, which scared the shit out of me, which is why we have this one.
I appreciate you sounding the alarm.
I really hope it helps.
And the problem is, like I said, when I've talked to Marc Andreessen and many other people, they think this is just fear mongering.
This is worst case scenario.
So what is worst case scenario?
Like how could AI eventually lead to the destruction of the human race?
When did you start working on this?
And it would take superintelligence to create a safety mechanism to control superintelligence.
Have you thought about the possibility that this is the role of the human race, and that this happens all throughout the cosmos? That curious humans who thrive on innovation will ultimately create a better version of life?