Dylan Patel
I think DeepSeek R1 is so unsettling because we're kind of on this path with ChatGPT. It's like, it's getting better, it's getting better, it's getting better. And then we have a new direction for changing the models. And we took one step like this, and we took a step up. So it looks like a really fast slope, and then we're going to just take more steps.
So it's just really unsettling when you have these big steps. And I expect that to keep happening. I've tried OpenAI Operator. I've tried Claude computer use. They're not there yet. I understand the idea. But it's just so hard to predict what is the breakthrough that will make something like that work.
And I think it's more likely that we have breakthroughs that work and things that we don't know what they're going to do. So everyone wants agents. Dario has a very eloquent way of describing this. And I just think that it's like, there's going to be more than that. So just expect these things to come.
There's some research that shows that distribution is actually the limiting factor. So language models haven't yet made misinformation particularly change the equation there. The internet is still ongoing. I think there's a blog, AI Snake Oil, written by some of my friends at Princeton, on this stuff. So there is research.
It's a default that everyone assumes, and I would have thought the same thing, that misinformation gets far worse with language models. But in terms of internet posts and things that people have been measuring, it hasn't been an exponential increase or something extremely measurable.
And the things you're talking about, like voice calls and stuff like that, could be in modalities that are harder to measure. So it's too soon to tell. Political instability via the web is monitored by a lot of researchers to see what's happening. I think you're asking about the AGI thing.
If you make me give a year, I would be like, okay, I have AI CEOs saying this. They've been saying two years for a while. People like Dario, the Anthropic CEO, have thought about this so deeply. I need to take their word seriously, but also understand that they have different incentives.