Lee Cronin
Podcast Appearances
And I only know one way to embody intelligence, and that's in chemistry and human brains. So category error number one is they have agency. Category error number two is saying that, assuming that anything we make is going to be more intelligent. Now, you didn't say super intelligent. I'll put the words into our mouths here, super intelligent. I think that there is no...
No reason to expect that we are going to make systems that are more intelligent, more capable. When people play chess computers, they don't expect to win now. The chess computer is very good at chess. That doesn't mean it's super intelligent. So I think that superintelligence, I mean, I think even Nick Bostrom is pulling back on this now because he invented this. So I see this a lot.
When did this first happen? Eric Drexler, nanotechnology, atomically precise machines. He came up with a world where we had these atom cogs everywhere. They were going to make self-replicating nanobots. Not possible. Why? Because there's no resources to build these self-replicating nanobots. You can't get the precision. It doesn't work.
It was a major category error in taking engineering principles down to the molecular level. The only functioning nanomolecular technology we know of is produced by evolution. So now let's go forward to AGI. What is AGI? We don't know. It's "super," it can do things humans can't... I would argue the only AGIs that exist in the universe are produced by evolution.
And sure, we may be able to make our working memory better. We might be able to do more things. The human brain is the most compact computing unit in the universe. It uses 20 watts. It uses a really limited volume. It's not like a ChatGPT cluster, which needs thousands of watts and generates a model that has to be corrected by human beings. You are an autonomous, embodied intelligence.
So I think there are so many levels that we're missing out. We've just kind of gone, oh, we've discovered fire. Oh, gosh, the planet's just going to burn one day randomly. I mean, I just don't understand that leap. There are bigger problems we need to worry about. So what is the motivation? Why do these people, let's assume they're earnest, have this conviction?
Well, I think they're making leaps because they're trapped in a virtual reality that isn't reality.