Alex Wagner
Speaker Appearances Over Time
Podcast Appearances
So basically what this team in China has done is they have made an architectural breakthrough in the development of AI models, and I'm going to try to explain this in plain English. So basically the way that Silicon Valley has been approaching the development of AI models to date
has been: you put about as much money as possible into building these things, by building massive data centers and throwing as much data as you possibly can into the process, and then you get better results. And that's proved true every time. And the architectural advancement that's been made in China is they have been able to build a model that's just as good with much less money.
And, this is the most important thing, it costs about three to five percent of what it costs the other models to run. So let's say you're spending a dollar to run an algorithm or some sort of process with OpenAI; you can spend five cents to do it with this Chinese model. That brings down the cost of using things like their chatbot, but also of building any application on top of the model.
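The "three to five percent" figure above can be restated as a toy calculation. The dollar amounts here are hypothetical, just illustrating the quoted ratio, not real per-request prices:

```python
# Illustrative only: hypothetical per-task inference costs,
# restating the "three to five percent" figure from the conversation.
incumbent_cost = 1.00  # say, $1.00 per task with an incumbent model

for fraction in (0.03, 0.05):  # the quoted 3% and 5% of incumbent cost
    cheaper_cost = incumbent_cost * fraction
    savings = 1 - fraction
    print(f"At {fraction:.0%} of the cost: ${cheaper_cost:.2f} per task "
          f"({savings:.0%} saved)")
```

At the 5% end this matches the "dollar versus five cents" comparison in the transcript.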
So first of all, it's so funny, because all of the worries about AI were that it was too expensive, right? People were pointing to the fact that you have to spend these billions of dollars. Like, OpenAI last year raised the biggest funding round in history, at $6.6 billion. And the big complaint was, well, this AI technology is too expensive to use.
They're losing billions just to run it and train it every year. And so therefore the industry is going to fall apart. Now everyone's worried because it's too cheap, which I think is just so funny. But basically, look, this is the way that the AI industry was always running. OpenAI's stated goal was to make intelligence that's too cheap to meter.
Basically, the idea was: we want to be able to provide this stuff at a cost that is so inexpensive that you'll be able to build whatever your heart desires with AI. And, you know, really what these Chinese engineers have done is they have used some new techniques that have largely come about thanks to some of the constraints that they've had.
So they haven't been able to use the state-of-the-art NVIDIA chips, which means this process that we're doing over here in the United States, of just making the servers bigger and, you know, adding more data, has not been available to them. So they've had to introduce some tricks to make the models more efficient.