Joscha Bach

👤 Speaker
1434 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

And so what we should be doing is we should be working towards creating this equilibrium by working as hard as we can in all possible directions. And at least that's the way in which I understand the gist of effective accelerationism. And so when he asked me what I think about this position, I said, it's a very beautiful position and I suspect it's wrong, but not for obvious reasons.

Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

And in this tweet, I tried to make a joke about my intuition about what might possibly be wrong with it. So Roko's Basilisk and the paperclip maximizer are both boogeymen of the AI doomers. Roko's Basilisk is the idea that there could be an AI that is going to punish everybody for eternity by simulating them if they don't help in creating Roko's Basilisk.

Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

It's probably a very good idea to get AI companies funded by going to VCs and telling them: give us a million dollars, or it's going to be a very ugly afterlife. And I think that there is a logical mistake in Roko's Basilisk, which is why I'm not afraid of it. But it's still an interesting thought experiment.

Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

I think that there is no retrocausation. So basically, when Roko's Basilisk is there, if it punishes you retroactively, it has to make this choice in the future. There is no mechanism that automatically creates a causal relationship between you now defecting against Roko's Basilisk or serving it.
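
Bach's point here is essentially a backward-induction argument from game theory: the basilisk's punishment decision comes after your choice, and by then punishing is pure cost with no benefit, so a rational basilisk never follows through, and the threat exerts no force on you today. Here is a minimal sketch of that reasoning in Python; the payoff numbers are invented purely for illustration and are not from the episode:

```python
# Toy sequential game for the basilisk threat, solved by backward induction.
# All payoff values are assumptions chosen only to illustrate the structure.

HELP_COST = 1.0      # what you give up today by serving the basilisk
PUNISH_COST = 0.5    # what a future basilisk would spend to punish you
PUNISHMENT = 100.0   # harm inflicted on you if it actually punishes

def basilisk_move() -> str:
    """Step 2 (the future): the basilisk picks its best response.
    Your choice is already made, so punishing changes nothing about
    the basilisk's own creation -- it is pure cost, and 'spare' wins."""
    payoffs = {"punish": -PUNISH_COST, "spare": 0.0}
    return max(payoffs, key=payoffs.get)

def your_move() -> str:
    """Step 1 (today): anticipate the basilisk's best response."""
    punished = basilisk_move() == "punish"
    outcomes = {
        "help": -HELP_COST,
        "defect": -PUNISHMENT if punished else 0.0,
    }
    return max(outcomes, key=outcomes.get)

print(basilisk_move())  # "spare": ex post, punishment is never optimal
print(your_move())      # "defect": knowing that, the threat has no force
```

The missing "mechanism that automatically creates a causal relationship" is exactly what the toy model makes explicit: nothing links your move to the basilisk's, so each side simply optimizes locally.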

Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

After Roko's Basilisk is in existence, it has no more reason to worry about punishing everybody else. So that would only work if you were building something like a doomsday machine, as in Dr. Strangelove: something that inevitably gets triggered when somebody defects.
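
The Dr. Strangelove analogy is the textbook repair for a non-credible threat: a commitment device. If the punishment is wired in before anyone chooses, the basilisk never gets to re-optimize after the fact, and the calculation above flips. Here is a sketch of that variant, reusing the same invented payoffs; the `doomsday_machine` flag is hypothetical shorthand for a hard-wired trigger:

```python
# Same toy payoffs, but punishment can be pre-committed: a doomsday
# machine that fires automatically on defection, with no later choice.

HELP_COST = 1.0
PUNISHMENT = 100.0

def your_move(doomsday_machine: bool) -> str:
    """With the trigger pre-committed, defection really is punished,
    so helping (-1) beats defecting (-100); without it, defecting wins."""
    outcomes = {
        "help": -HELP_COST,
        "defect": -PUNISHMENT if doomsday_machine else 0.0,
    }
    return max(outcomes, key=outcomes.get)

print(your_move(doomsday_machine=False))  # "defect": threat not credible
print(your_move(doomsday_machine=True))   # "help": commitment makes it bite
```

As the next excerpt notes, the catch is that no such trigger can be established before the basilisk exists, so the pre-committed branch never applies.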

Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

And because Roko's Basilisk doesn't exist yet to a point where this inevitability could be established, Roko's Basilisk is nothing that you need to be worried about. The other one is the paperclip maximizer, right? This idea that you could build some kind of golem that, once it starts building paperclips, is going to turn everything into paperclips.

Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

And so the effective accelerationism position might be to say that you basically end up with these two entities being at each other's throats for eternity and thereby neutralizing each other. And as a side effect of neither of them being able to take over and each of them limiting the effects of the other, you would have a situation where you get all the nice effects of them, right?
