Dylan Patel
They have this OpenAI ChatGPT Pro subscription, which is $200 a month. Which Sam said they're losing money on. Which means that people are burning a lot of GPUs on inference. And I've signed up for it. I've played with it. I don't think I'm a power user, but I use it.
And it's like, that is the thing that a Chinese company with medium-strong export controls, there will always be loopholes, might not be able to do at all. And the main result for o3 is also spectacular coding performance. And if that feeds back into AI companies being able to experiment better.
This is what people like the CEOs or leaders of OpenAI and Anthropic talk about: autonomous AI models, which is you give them a task and they work on it in the background. I think my personal definition of AGI is much simpler.
I think language models are a form of AGI and all of this super powerful stuff is a next step that's great if we get these tools, but a language model has so much value in so many domains. It is a general intelligence to me.
But this next step of agentic things, where they're independent and they can do tasks that aren't in the training data, is the few-year outlook that these AI companies are driving toward.
And he has a much more positive view in his essay, Machines of Loving Grace. I've read into this. I don't have enough background in the physical sciences to gauge exactly how confident I am in whether AI can revolutionize biology. I'm safe saying that AI is going to accelerate the progress of any computational science.
I don't like to attribute specific abilities, because predicting specific abilities and when they'll arrive is very hard. I think mostly, if you're going to say that I'm "feeling the AGI," it's that I expect continued rapid, surprising progress over the next few years. So something like R1 from DeepSeek is less surprising to me, because I expect there to be new paradigms where substantial progress can be made.
I think DeepSeek R1 is so unsettling because we were kind of on this path with ChatGPT. It's like, it's getting better, it's getting better, it's getting better. And then we have a new direction for changing the models. And we took one step like this, and we took a step up. So it looks like a really fast slope, and then we're going to just take more steps.