Dylan Patel
But within a piece, within a return from ChatGPT, it is not clear how you get a high-quality placed ad within the output. And if you can do that, with model costs coming down, you can get super high revenue. That revenue is totally untapped, and it's not clear technically how it is done.
And it could be very subtle. It could be in conversation. Like we have voice mode now. It could be some way of making it so the voice introduces certain things. It's much harder to measure and it takes imagination, but yeah.
They don't care about it right now. I think it's places like Perplexity that are experimenting with that more.
Okay, so mostly the term agent is obviously overblown. We've talked a lot about reinforcement learning as a way to train for verifiable outcomes. Agents should mean something that is open-ended and is solving a task independently on its own and able to adapt to uncertainty.
The term agent gets applied a lot to things like Apple Intelligence, which we still don't have after the last WWDC, and which is orchestrating between apps. That type of tool-use thing is something that language models can do really well. Apple Intelligence, I suspect, will come eventually. It's a closed domain: it's your messages app integrating with your photos, with AI in the background.
That will work. That has been described as an agent by a lot of software companies to get into the narrative. The question is, what ways can we get language models to generalize to new domains and solve their own problems in real time?
Maybe with some tiny amount of training while they're doing this, whether that's fine-tuning themselves or in-context learning, which is the idea of storing information in the prompt and using learning algorithms to update it. And whether or not you believe that is actually going to generalize to things like
me saying, "Book my trip to Austin in two days, I have X, Y, Z constraints," and actually trusting it. I think there's an HCI problem there, of it coming back to you for information.