Aman Sanger

👤 Person
1050 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

You can use terminal context as well inside of Cmd+K, kind of everything. We don't have the looping part yet, though we suspect something like this could make a lot of sense. There's a question of whether it happens in the foreground too, or if it happens in the background, like what we've been discussing.

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

It would be really interesting if you could branch a file system.

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

Yeah, it's called the Merkle tree.
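
A minimal sketch of a Merkle tree over per-file content hashes, as one might use to keep a codebase index in sync without re-scanning everything; this is illustrative only, not Cursor's actual implementation. Two sides that agree on the root hash agree on every file; a mismatch is narrowed down by descending only into the subtrees whose hashes differ.

```python
# Illustrative sketch (not Cursor's implementation): a Merkle-style tree over
# file hashes, so two sides can compare roots and only descend into
# subtrees whose hashes differ.
import hashlib
from dataclasses import dataclass
from typing import Optional

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

@dataclass
class Node:
    hash: str
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    path: Optional[str] = None   # set on leaves only

def build_merkle(files: dict[str, bytes]) -> Optional[Node]:
    """files maps path -> content; each leaf hashes (path, content)."""
    leaves = [
        Node(hash=sha256(path.encode() + b"\0" + content), path=path)
        for path, content in sorted(files.items())
    ]
    if not leaves:
        return None
    level = leaves
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            left = level[i]
            right = level[i + 1] if i + 1 < len(level) else left
            nxt.append(Node(hash=sha256((left.hash + right.hash).encode()),
                            left=left, right=right))
        level = nxt
    return level[0]

# Equal root hashes mean identical trees; otherwise, recurse into the
# children whose hashes differ to find exactly which files changed.
a = build_merkle({"src/main.py": b"print('hi')", "README.md": b"docs"})
b = build_merkle({"src/main.py": b"print('hello')", "README.md": b"docs"})
print(a.hash == b.hash)  # False: only the subtree containing src/main.py differs
```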

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

Yeah. And there are a lot of clever, additional things that go into this indexing system. For example, the bottleneck in terms of cost is not storing things in the vector database or the database. It's actually embedding the code.

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

And you don't want to re-embed the code base for every single person in a company that is using the same exact code, except for maybe they're in a different branch with a few different files or they've made a few local changes.

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

And so because, again, embeddings are the bottleneck, you can do one clever trick and not have to worry about the complexity of dealing with branches and the other databases, where you just... have some cache on the actual vectors, computed from the hash of a given chunk. And so this means that when the nth person at a company goes and embeds their code base, it's really, really fast.
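
The trick described here, caching embedding vectors keyed by the hash of each code chunk so that identical chunks are only ever embedded once, can be sketched roughly as follows. Names are hypothetical, and `embed_chunk` stands in for whatever embedding model is actually used.

```python
# Rough sketch of a chunk-hash embedding cache (hypothetical names;
# embed_chunk stands in for the real embedding model).
import hashlib

class EmbeddingCache:
    def __init__(self, embed_chunk):
        self.embed_chunk = embed_chunk           # function: str -> list[float]
        self.cache: dict[str, list[float]] = {}  # chunk hash -> vector

    def get_vector(self, chunk: str) -> list[float]:
        key = hashlib.sha256(chunk.encode()).hexdigest()
        if key not in self.cache:
            # The embedding cost is paid only the first time this exact chunk
            # is seen by anyone; branches and local edits only re-embed the
            # chunks whose contents actually changed.
            self.cache[key] = self.embed_chunk(chunk)
        return self.cache[key]

# When the nth person at a company embeds the same code base, nearly every
# chunk hash is already cached, so indexing becomes mostly cache lookups.
def fake_embed(chunk: str) -> list[float]:
    return [float(len(chunk))]  # stand-in for a real embedding model

cache = EmbeddingCache(fake_embed)
v1 = cache.get_vector("def add(a, b):\n    return a + b")
v2 = cache.get_vector("def add(a, b):\n    return a + b")  # cache hit, no re-embedding
```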

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

And you do all this without actually storing any code on our servers at all. No code data is stored. We just store the vectors in the vector database and the vector cache.
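One way to read "only vectors, no code": the server-side index holds vectors plus chunk metadata (a hash, a path, a line range), and the client resolves a retrieval hit back to source text from its own local checkout. A hedged, illustrative sketch; none of these names are Cursor's actual schema.

```python
# Illustrative sketch (not Cursor's actual schema): the server-side index
# stores only vectors plus chunk metadata; source text is resolved on the
# client from local files at query time.
from dataclasses import dataclass
from pathlib import Path

@dataclass
class IndexedChunk:
    chunk_hash: str      # hash of the chunk contents
    vector: list[float]  # embedding stored server-side
    path: str            # where the chunk lives in the repo
    start_line: int
    end_line: int        # note: no code text stored alongside the vector

def resolve_locally(hit: IndexedChunk, repo_root: Path) -> str:
    """Client-side: turn a retrieval hit back into code from local files."""
    lines = (repo_root / hit.path).read_text().splitlines()
    return "\n".join(lines[hit.start_line - 1 : hit.end_line])
```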

Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI

I think, like you mentioned, this is only going to get more and more powerful. We're working a lot on improving the quality of our retrieval, and I think the ceiling for that is really much higher than people give it credit for.