Reid Hoffman
Podcast Appearances
All of these things come from this scale compute learning revolution. And that was probably 2014, 2015 when the vision really hit me fully, and I went to my partners at Greylock, and I said, hey, look, I think we're going to still make a bunch of money doing this crypto stuff. I did a few things in Bitcoin and other things.
So keep doing it, but I'm going to focus entirely on AI because I think AI is going to be the next wave, and I want to start working on it right now.
I'm familiar with every single pessimist argument, and the frequent thing is something along the lines of, let's take the most extreme: can you guarantee me that we won't deliberately or accidentally create terminators? And you go, no, I can't guarantee that. Oh, then it's really bad. We should have a cautionary principle. And you're like, well...
That's if you thought that the only thing here was the creation of terminators or not. So, for example, take what's called existential risk. Existential risk is a basket. It's not just, are there killer robots or not? It's also nuclear war, asteroids, pandemics, climate change, a bunch of other things.
So you say, well, if we create AI, yes, we may add the terminator robots as a negative possibility, but does the basket of risk get better or worse? And I think it just gets better, because it's the only way I can think of to solve pandemics. I think it's already helping in questions of advancing certain technologies around climate change.