
Aman Sanger

👤 Person
1050 total appearances


Podcast Appearances

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

Yeah. I think I can speak to a few of the details on how to make these things work. They're incredibly low latency, so you need to train small models on this task. In particular... they're incredibly prefill-token hungry. What that means is they have these really, really long prompts, where they see a lot of your code, and they're not actually generating that many tokens.
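The prefill-heavy shape of that workload can be made concrete with a toy latency model. Everything here is an illustrative assumption — the `estimate_latency_ms` helper and all throughput numbers are made up, not measured figures from Cursor:

```python
# Toy latency model for a completion request: prefill runs over the whole
# prompt in one parallel pass, while decode emits output tokens one by one.
# All throughput numbers below are made-up for illustration.

def estimate_latency_ms(prompt_tokens, output_tokens,
                        prefill_tok_per_s, decode_tok_per_s):
    prefill_ms = prompt_tokens / prefill_tok_per_s * 1000
    decode_ms = output_tokens / decode_tok_per_s * 1000
    return prefill_ms + decode_ms

# A tab-completion-style request: huge prompt, tiny completion.
latency = estimate_latency_ms(
    prompt_tokens=8_000,        # lots of surrounding code in the prompt
    output_tokens=30,           # only a short edit is generated
    prefill_tok_per_s=50_000,   # assumed prefill throughput of a small model
    decode_tok_per_s=500,       # assumed per-token decode rate
)
print(f"{latency:.0f} ms")  # prefill (160 ms) dominates decode (60 ms)
```

With these assumed numbers, most of the time goes to processing the prompt, which is why a small fast model — and prompt caching — matter so much for this workload.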

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

And so the perfect fit for that is using a sparse model, meaning an MoE model. So that was one breakthrough we made that substantially improved performance at longer context. The other being a variant of speculative decoding that we built out, called speculative edits.
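The gist of speculative edits can be sketched as follows. This is a hedged reconstruction of the general idea, not Cursor's actual implementation, and `verify_batch` / `generate_one` are hypothetical stand-ins for model calls: when a model rewrites code, most output tokens equal the original file, so the original can serve as the draft sequence, with runs of agreeing tokens accepted in one batched verification pass instead of being decoded one at a time.

```python
# Hedged sketch of the idea behind speculative edits (a reconstruction of
# the general technique, not Cursor's implementation): when a model rewrites
# code, most output tokens equal the original file, so treat the original as
# the draft sequence and verify agreeing runs in one batched pass.

def speculative_edit(draft, verify_batch, generate_one, chunk_size=16):
    """draft: original-file tokens used as the speculation.
    verify_batch(prefix, chunk) -> how many chunk tokens the model agrees
        with, checked in a single parallel forward pass.
    generate_one(prefix) -> one normally decoded token, or None at EOS."""
    out, i = [], 0
    while True:
        chunk = draft[i:i + chunk_size]
        if chunk:
            accepted = verify_batch(out, chunk)
            out.extend(chunk[:accepted])
            i += accepted
            if accepted == len(chunk):
                continue  # whole chunk accepted; keep speculating
        # Divergence (or draft exhausted): fall back to normal decoding.
        tok = generate_one(out)
        if tok is None:
            return out
        out.append(tok)
        if i < len(draft) and draft[i] == tok:
            i += 1  # draft agrees again; re-sync and resume speculating

# Toy "model" whose true output is the original file with one token edited.
original = list("def foo():\n    return 1\n")
target = list("def foo():\n    return 2\n")

def verify_batch(prefix, chunk):
    n = 0
    for j, tok in enumerate(chunk):  # accept until first disagreement
        if len(prefix) + j < len(target) and target[len(prefix) + j] == tok:
            n += 1
        else:
            break
    return n

def generate_one(prefix):
    return target[len(prefix)] if len(prefix) < len(target) else None

result = speculative_edit(original, verify_batch, generate_one)
print("".join(result) == "".join(target))  # the edit is reproduced exactly
```

In this toy run, all but a couple of tokens are accepted straight from the draft; a real system would do the per-chunk verification as one forward pass over the draft tokens, which is what makes edits much faster than plain decoding.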

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

These are two, I think, important pieces of what make it quite high quality and very fast.

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

Caching plays a huge role. Because you're dealing with this many input tokens, if on every single keystroke you're typing in a given line you had to rerun the model on all of those tokens passed in, you're going to, one, significantly degrade latency, and two, kill your GPUs with load. So you need to design the actual prompts used for the model such that they're caching-aware.
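One way to make prompts caching-aware can be sketched like this — the layout and tag names below are illustrative assumptions, not Cursor's actual prompt format: put the parts that rarely change between keystrokes at the front, so consecutive requests share a long common prefix that a prefix cache can reuse.

```python
# Illustrative cache-aware prompt layout (assumed format, not Cursor's):
# stable context goes first, the volatile current line goes last, so
# successive keystrokes share a long prefix a prefix (KV) cache can reuse.

def build_prompt(file_above_cursor, retrieved_context, current_line):
    stable = (
        "<context>\n" + retrieved_context + "\n</context>\n"
        "<file>\n" + file_above_cursor + "\n</file>\n"
    )
    volatile = "<line>" + current_line + "</line>\n"
    return stable + volatile  # only the tail changes per keystroke

def shared_prefix_len(a, b):
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

file_text = "import os\nimport sys\n"
context = "def helper(path):\n    return os.path.exists(path)"

# Two consecutive keystrokes while typing "os.pat" on the current line:
p1 = build_prompt(file_text, context, "os.pa")
p2 = build_prompt(file_text, context, "os.pat")

# Everything up to the newly typed character is identical and cacheable.
print(shared_prefix_len(p1, p2), len(p1))
```

Had the current line been placed first instead, the prompts would diverge at the very first changed character and almost nothing could be reused.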

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

And then, yeah, you need to reuse the KV cache across requests just so that you're spending less work, less compute.
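A minimal sketch of cross-request KV-cache reuse, assuming a simple hash-keyed prefix store (an illustrative design, not any specific serving framework's API): on each new request, look up the longest token prefix that was already processed and only prefill the remaining suffix.

```python
# Assumed design for KV-cache reuse across requests (not a real framework's
# API): cache attention state keyed by a hash of the token prefix; the next
# request only prefills the tokens past the longest cached prefix.

import hashlib

class PrefixKVCache:
    def __init__(self):
        self._store = {}  # prefix hash -> opaque KV state

    @staticmethod
    def _key(tokens):
        return hashlib.sha256(" ".join(map(str, tokens)).encode()).hexdigest()

    def put(self, tokens, kv_state):
        self._store[self._key(tokens)] = kv_state

    def longest_prefix(self, tokens):
        """Return (n_cached_tokens, kv_state) for the longest cached prefix."""
        # Linear scan for clarity; real systems hash fixed-size token blocks.
        for n in range(len(tokens), 0, -1):
            state = self._store.get(self._key(tokens[:n]))
            if state is not None:
                return n, state
        return 0, None

cache = PrefixKVCache()
prompt_v1 = [1, 2, 3, 4, 5]
cache.put(prompt_v1, kv_state="kv-for-12345")  # stored after request 1

# The next keystroke extends the prompt; only 2 new tokens need prefill.
prompt_v2 = [1, 2, 3, 4, 5, 6, 7]
hit, _ = cache.longest_prefix(prompt_v2)
print(len(prompt_v2) - hit)  # tokens left to prefill
```

Because the cache-aware prompt layout keeps the changing text at the end, the hit covers nearly the whole prompt, which is exactly the "spending less work, less compute" effect described above.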

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

This is what we're talking about.

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

And there's a chance this is also not the final version of it.
