Lee Cronin
Podcast Appearances
We have really powerful machine learning tools, and they will allow us to do interesting things. And we need to be careful about how we use those tools in terms of manipulating human beings and faking stuff, right? Right.
No, not plus. I don't know. I was seeing on Twitter today various things, but I think Yudkowsky is at 95%.
Maybe. And what are the fees? I think Scott Aaronson, I was quite surprised. I saw this online, so it could be wrong, so sorry if it's wrong, but it says 2%. But the thing is, if someone said there's a 2% chance you're going to die going into the lift, would you go into the lift? In the elevator, for the American-English-speaking audience. Well, no, not for the elevator.
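For scale, if one were to treat that 2% as a per-ride risk (an assumption for illustration only; in the conversation it is a one-off estimate), the arithmetic of repeated exposure makes the analogy even starker:

```latex
% Illustrative only: treat the 2% as a per-ride fatality risk.
% Probability of dying at least once in n independent rides:
P(\text{death within } n \text{ rides}) = 1 - 0.98^{\,n}
% n = 1:    1 - 0.98^{1}   = 0.02          (2\%)
% n = 35:   1 - 0.98^{35}  \approx 0.507   (\approx 51\%)
% n = 100:  1 - 0.98^{100} \approx 0.867   (\approx 87\%)
```

Real elevator rides, by comparison, carry a fatality risk many orders of magnitude smaller, which is the intuition the analogy is driving at.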
So I would say anyone higher than 2%, I mean, I think there's a 0% chance of AGI doom.
I think this is, I would fail that argument 100%. Here are a number of reasons to fail it on. First of all, we don't know where the intention comes from. The problem is that people keep, you know, watching all the hucksters online with the prompt engineering and all this stuff.
When I talk to a typical AI computer scientist, they keep talking about the AI as having some kind of decision-making ability. That is a category error. The decision-making ability comes from human beings. We have no understanding of how humans make decisions. We've just been discussing free will for the last half an hour, right? We don't even know what that is.
So the intention, I totally agree with you. People who intend to do bad things can do bad things, and we should not let that risk go. That's totally here and now. I do not want that to happen, and I'm happy to be regulated to make sure that systems I generate, whether they're like computer systems or... You know, I'm working on a new project called Chem Machina. Nice. Well done.