Aman Sanger
Podcast Appearances
And there are so many things that don't feel quite right. And I suspect that, in parallel to people increasing the amount of pre-training data and the size of the models in pre-training and finding tricks there, you'll now have this other thread of getting search to work better and better.
Yeah, I think most of the additional value from Cursor versus everything else out there is not just integrating the new model fast, like o1. It comes from all of the depth that goes into these custom models that you don't realize are working for you in every facet of the product, as well as the really thoughtful UX behind every single feature.
Oh, yeah.
Yeah, I think there are three main kinds of synthetic data. So first, what is synthetic data? There's normal, non-synthetic data, which is just data that's naturally created, i.e. it usually comes from humans having done things; from some human process, you get this data. For synthetic data, the first kind would be distillation.
So distillation is having a language model output tokens, or probability distributions over tokens, and then training some less capable model on those outputs. This approach is not going to get you a model that's more capable than the original one that produced the tokens, but it's really useful when there's some capability you want to elicit from a really expensive, high-latency model: you can distill that down into a smaller, task-specific model.
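As an illustration of that first kind, here is a minimal distillation sketch in PyTorch. It assumes two hypothetical Hugging Face-style causal language models, `teacher` (large and expensive) and `student` (small), and trains the student to match the teacher's probability distributions over tokens; the names and training setup are illustrative, not Cursor's actual pipeline.

```python
# Minimal knowledge-distillation step (a sketch, not production code).
# Assumes `teacher` and `student` are causal LMs whose forward pass
# returns an object with a `.logits` tensor of shape
# [batch, sequence_length, vocab_size] (Hugging Face convention).

import torch
import torch.nn.functional as F

def distillation_step(teacher, student, input_ids, optimizer, temperature=2.0):
    """Push the student's next-token distributions toward the teacher's."""
    with torch.no_grad():
        # The teacher provides soft targets: full probability
        # distributions over the vocabulary at every position.
        teacher_logits = teacher(input_ids).logits / temperature

    student_logits = student(input_ids).logits / temperature

    # KL divergence between the teacher's and the student's distributions.
    loss = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on the teacher's full distributions, rather than only on sampled tokens, is what lets the small, cheap student pick up the narrow capability being distilled.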
The second kind is when one direction of the problem is easier than the reverse. A great example of this is bug detection, like we mentioned earlier, where it's a lot easier to introduce reasonable-looking bugs than it is to actually detect them, and this is probably the case for humans too. So what you can do is get a model that's not trained on that much data, that's not that smart, to introduce a bunch of bugs into code.
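To make the "easy direction generates the data" idea concrete, here is a hedged sketch of building a bug-detection dataset by injecting bugs into known-good code. `llm_complete` and `INJECT_PROMPT` are hypothetical stand-ins for whatever (possibly small and cheap) model call and prompt you have available; they are not real library functions.

```python
# Sketch: use the easy direction (introducing bugs) to create labeled
# training data for the hard direction (detecting bugs).
# `llm_complete` is a hypothetical placeholder for a model call.

import random

def llm_complete(prompt: str) -> str:
    """Hypothetical call to a (possibly small, cheap) language model."""
    raise NotImplementedError("wire this up to your model of choice")

INJECT_PROMPT = (
    "Introduce one subtle, realistic bug into the following code. "
    "Return only the modified code.\n\n{code}"
)

def make_bug_detection_dataset(clean_snippets, buggy_fraction=0.5, seed=0):
    """Return (code, label) records; label 1 means an injected bug."""
    rng = random.Random(seed)
    dataset = []
    for code in clean_snippets:
        if rng.random() < buggy_fraction:
            # Easy direction: ask a weak model to plant a plausible bug.
            buggy = llm_complete(INJECT_PROMPT.format(code=code))
            dataset.append({"code": buggy, "label": 1})
        else:
            # Keep some untouched snippets as negative examples.
            dataset.append({"code": code, "label": 0})
    return dataset
```

The labeled pairs produced this way are the synthetic data; a detector model is then trained on them, which is the step described next.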
And then you can use that synthetic data to train a model that can be really good at detecting bugs. The last category, I think, is the main one it feels like the big labs are doing for synthetic data, which is producing text with language models that can then be verified easily.
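For that third category, here is a sketch of verification-filtered synthetic data in the style of rejection sampling: generate many candidates, keep only the ones a cheap, reliable verifier accepts (here, ordinary unit-test execution), and use the survivors as training data. `llm_complete` is again a hypothetical model call, and the helper names are illustrative.

```python
# Sketch: keep only model outputs that pass an automatic verifier,
# then use those verified samples as training data.
# `llm_complete` is a hypothetical placeholder for a model call.

from typing import Callable, Dict, List

def llm_complete(prompt: str) -> str:
    """Hypothetical call to a language model; returns Python source."""
    raise NotImplementedError("wire this up to your model of choice")

def passes_tests(candidate_src: str, fn_name: str, tests: Callable) -> bool:
    """Execute the candidate source and run the given tests against it."""
    namespace: Dict = {}
    try:
        exec(candidate_src, namespace)   # defines the candidate function
        tests(namespace[fn_name])        # should raise on failure
        return True
    except Exception:
        return False

def collect_verified_samples(prompt: str, fn_name: str, tests: Callable,
                             n_samples: int = 16) -> List[Dict]:
    """Rejection sampling: only verified generations become training data."""
    verified = []
    for _ in range(n_samples):
        candidate = llm_complete(prompt)
        if passes_tests(candidate, fn_name, tests):
            verified.append({"prompt": prompt, "completion": candidate})
    return verified
```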