Rene Haas
They supplement work, but they don't necessarily replace work. But if you start to get into agents that can do a real level of work that can replace what people might need to do in terms of thinking and reasoning, then that gets fairly interesting. And then you say, well, how's that going to happen? Well, we're not there yet, so we need to train more models.
The models need to get more sophisticated, et cetera, et cetera. So I think the training thing continues for a bit, but I can see as we get to some level of AI agent that reasons close to the way a human does, then I think it asymptotes on some level.
I don't think training can continue unabated, because at some point in time you get more into specialized training models as opposed to general-purpose models, and that requires less resources.
I know he has his own definitions for AGI and he has reasons for those definitions. I don't subscribe so much to what is AGI versus ASI, artificial superintelligence, but I think more around when these AI agents start to think and reason and invent. And to me, that is a bit of a crossing-the-Rubicon moment, right? For example, ChatGPT can do a decent job of passing the bar exam.
But to some extent, you'd say load enough logic and load enough information into the model, and the answers are there somewhere. And to what level is the AI model a stochastic parrot that just repeats everything it has found over the internet? Because at the end of the day, the model you've trained is only as good as the data.
But when the model gets to a point where it can think and reason and invent, create new concepts, new products, new ideas, to me, that's kind of AGI when you get to that level. And I think, I don't know if we're a year away, but I would say we are a lot closer. If you had asked me this question a year ago, I would have said it's quite a ways away.
You ask me that question now, I say it's much closer. What is much closer, two years, three years? Probably. And I'm probably going to be wrong on that front. You know, every time I interact with some of the partners who are working on their models, whether it's at Google or OpenAI, and they show us the demos, it's breathtaking in terms of the kind of advancements they're making.