
Aman Sanger

👤 Person
1050 total appearances

Podcast Appearances

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

Yeah. I mean, it's really interesting that these models are so bad at bug finding when just naively prompted to find a bug. They're incredibly poorly calibrated.
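A hedged sketch of what "poorly calibrated" means in this context: if a model attaches a confidence to each "this is a bug" claim, calibration measures how well those confidences match how often the claims are actually right. The data below is invented for illustration; the expected-calibration-error computation itself is standard.

```python
# Expected calibration error (ECE): bucket predictions by stated
# confidence, then compare each bucket's average confidence to its
# actual accuracy. A well-calibrated bug finder that says "90% sure
# this is a bug" should be right about 90% of the time.

def expected_calibration_error(confidences, correct, n_bins=10):
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    ece, total = 0.0, len(confidences)
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(ok for _, ok in bucket) / len(bucket)
        ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece

# Invented numbers: the model is highly confident but usually wrong,
# the failure mode described in the quote.
confs = [0.95, 0.90, 0.92, 0.88, 0.91, 0.93]
labels = [1, 0, 0, 0, 1, 0]  # only 2 of 6 flagged "bugs" are real
print(round(expected_calibration_error(confs, labels), 3))
```

A large gap between average confidence and accuracy is exactly the "incredibly poorly calibrated" behavior being described.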

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

Exactly. Even o1. How do you explain that?

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

I think these models are a really strong reflection of the pre-training distribution. And I do think they generalize as the loss gets lower and lower, but I don't think the loss is low enough, or the scale quite large enough, that they're really fully generalizing in code. The things that we use these frontier models for,

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

that they're quite good at, are really code generation and question answering. And these things exist in massive quantities in pre-training: all of the code on GitHub, on the scale of many, many trillions of tokens, and questions and answers on things like Stack Overflow and maybe GitHub issues. And so when you try to push into these things that really don't exist

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

very much online, like, for example, the Cursor Tab objective of predicting the next edit given the edits done so far, the brittleness kind of shows. And then bug detection is another great example, where there aren't really that many examples of actually detecting real bugs and then proposing fixes, and the models just really struggle at it. But I think it's a question of transferring the model.
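A rough sketch of what a next-edit-prediction training example could look like. The record layout and field names here are assumptions for illustration, not Cursor's actual training format; the point is only that the input is the edit history so far and the target is the next edit.

```python
# Sketch of a supervised "predict the next edit" example: the model
# sees the edits made so far and must emit the next one. All field
# names and serialization choices here are invented for illustration.

def make_next_edit_example(edit_history, next_edit):
    # Serialize the prior edits into a single prompt string...
    prompt_lines = ["<edit-history>"]
    for i, (old, new) in enumerate(edit_history):
        prompt_lines.append(f"edit {i}: {old!r} -> {new!r}")
    prompt_lines.append("</edit-history>")
    prompt = "\n".join(prompt_lines)
    # ...and the target is the serialized next edit.
    old, new = next_edit
    return {"prompt": prompt, "target": f"{old!r} -> {new!r}"}

history = [("def add(a):", "def add(a, b):"),
           ("return a", "return a + b")]
example = make_next_edit_example(
    history, ("print(add(1))", "print(add(1, 2))"))
print(example["target"])
```

The quote's point is that (edit-history, next-edit) pairs like these barely exist in public pre-training data, unlike raw code or Q&A, so the objective has to come from transfer rather than memorized examples.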

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

In the same way that you get this fantastic transfer from models pre-trained just on code in general to the Cursor Tab objective, you'll see a very similar thing with generalized models that are really good at code transferring to bug detection. It just takes a little bit of nudging in that direction.
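One plausible reading of "nudging in that direction" is fine-tuning on a modest set of bug-detection examples. The record shape below is a hypothetical sketch of how such pairs might be formatted for a generic supervised fine-tuning pipeline, not any actual Cursor or vendor format.

```python
# Sketch of "nudging" a code model toward bug detection: reformat
# (buggy code, diagnosis, fix) triples into input/output pairs that a
# generic supervised fine-tuning pipeline could consume. The prompt
# wording and record keys are invented for illustration.

def to_finetune_record(buggy_code, diagnosis, fixed_code):
    return {
        "input": "Find the bug in this code and fix it:\n" + buggy_code,
        "output": f"Bug: {diagnosis}\nFix:\n{fixed_code}",
    }

record = to_finetune_record(
    buggy_code="for x in xs: total =+ x",
    diagnosis="`=+` reassigns instead of accumulating; should be `+=`",
    fixed_code="for x in xs: total += x",
)
print(record["input"].splitlines()[0])
```

Even a small dataset in this shape can steer a model that already "understands" code toward the detect-and-fix task, which is the transfer argument being made in the quote.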

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

how paranoid is the user? But even then, if you put in maximum paranoia, it still just doesn't quite get it.
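A minimal sketch of the paranoia-conditioned prompting being described: the prompt tells the model how aggressively to flag potential bugs. The prompt wording and level names are invented for illustration.

```python
# Sketch of conditioning a bug-finding prompt on a "paranoia level",
# as the quote describes. All wording here is hypothetical.

PARANOIA_PROMPTS = {
    "low": "Only flag issues you are confident are real bugs.",
    "medium": "Flag likely bugs, noting your uncertainty.",
    "maximum": ("Be extremely suspicious: flag anything that could "
                "possibly be a bug, even on weak evidence."),
}

def bug_finding_prompt(code, paranoia="medium"):
    return f"{PARANOIA_PROMPTS[paranoia]}\n\nReview this code:\n{code}"

prompt = bug_finding_prompt("x = get_user() or default", paranoia="maximum")
print(prompt.splitlines()[0])
```

The quote's observation is that even the "maximum" end of such a dial does not fix the underlying miscalibration; the dial changes how many flags you get, not how trustworthy each flag is.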
