
Aman Sanger

👤 Speaker
1050 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#446 – Ed Barnhart: Maya, Aztec, Inca, and Lost Civilizations of South America

kind of having the model ask questions about various pieces of the code. So you take the pieces of the code, then prompt the model, or have a model propose a question for that piece of code, and then add those as instruction fine-tuning data points. And then in theory, this might unlock the model's ability to answer questions about that code base.
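The synthetic-data idea described above can be sketched roughly as follows. This is a hypothetical illustration, not code from any real system: `ask_model` stands in for an LLM call, and all names here are made up for the example.

```python
# Sketch: split a code base into chunks, have a model propose a question
# about each chunk, and emit (instruction, context) pairs for fine-tuning.
# `ask_model` is a stand-in for a real LLM call; names are illustrative.
from typing import Callable

def chunk_code(source: str, max_lines: int = 20) -> list[str]:
    """Naively split source code into fixed-size line chunks."""
    lines = source.splitlines()
    return ["\n".join(lines[i:i + max_lines])
            for i in range(0, len(lines), max_lines)]

def build_finetune_examples(source: str,
                            ask_model: Callable[[str], str]) -> list[dict]:
    """For each chunk, ask the model to propose a question, then pair the
    question with the chunk as an instruction-tuning data point."""
    examples = []
    for chunk in chunk_code(source):
        question = ask_model(
            f"Propose one question a developer might ask about this code:\n{chunk}"
        )
        examples.append({"instruction": question, "context": chunk})
    return examples

if __name__ == "__main__":
    # Dummy "model" that returns a fixed question, just to show the shape.
    demo_source = "\n".join(f"def f{i}(): pass" for i in range(45))
    data = build_finetune_examples(demo_source,
                                   lambda prompt: "What does this code do?")
    print(len(data))  # 45 lines at 20 per chunk -> 3 examples
```

In a real pipeline the generated pairs would then be added to the fine-tuning set, which is the "unlock" being hypothesized in the quote.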

Lex Fridman Podcast
#446 โ€“ Ed Barnhart: Maya, Aztec, Inca, and Lost Civilizations of South America

I think test-time compute is really, really interesting. So there's been the pre-training regime, which, as you scale up the amount of data and the size of your model, gets you better and better performance, both on loss and then on downstream benchmarks, and just general performance when we use it for coding or other tasks.
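The pre-training scaling behavior described here is often summarized with a parametric loss of the form L(N, D) = E + A/N^α + B/D^β. As a hedged illustration, the sketch below plugs in the published Chinchilla fit (Hoffmann et al., 2022); the specific constants matter less than the shape, namely that loss falls as parameters N and tokens D grow.

```python
# Chinchilla-style parametric pre-training loss: L(N, D) = E + A/N**a + B/D**b.
# Constants are the published Chinchilla fit; this only illustrates the trend
# that more parameters (N) and more tokens (D) give lower loss.
def pretraining_loss(n_params: float, n_tokens: float) -> float:
    E, A, B = 1.69, 406.4, 410.7
    alpha, beta = 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

if __name__ == "__main__":
    # Loss shrinks as we scale model size and data together.
    for n, d in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
        print(f"N={n:.0e}, D={d:.0e} -> loss {pretraining_loss(n, d):.3f}")
```

The irreducible term E is the floor the quote's "better and better performance" asymptotes toward, which is part of why the conversation then turns to data limits and test-time compute.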

Lex Fridman Podcast
#446 โ€“ Ed Barnhart: Maya, Aztec, Inca, and Lost Civilizations of South America

We're starting to hit a bit of a data wall, meaning it's going to be hard to continue scaling up this regime.

Lex Fridman Podcast
#446 โ€“ Ed Barnhart: Maya, Aztec, Inca, and Lost Civilizations of South America

And so scaling up test-time compute is an interesting way of increasing the number of inference-time flops that we use, while still getting corresponding improvements: as you increase the number of flops used at inference time, the performance of these models keeps improving tremendously.

Lex Fridman Podcast
#446 โ€“ Ed Barnhart: Maya, Aztec, Inca, and Lost Civilizations of South America

Traditionally, we just had to literally train a bigger model that always used that many more flops. But now we could perhaps use the same size model and run it for longer to be able to get an answer at the quality of a much larger model. And so the really interesting thing I like about this is there are some problems that perhaps require

Lex Fridman Podcast
#446 โ€“ Ed Barnhart: Maya, Aztec, Inca, and Lost Civilizations of South America

hundred-trillion-parameter-model intelligence, trained on a hundred trillion tokens. But that's maybe 1%, maybe 0.1%, of all queries. So are you going to spend all of this effort, all of this compute, training a model

Lex Fridman Podcast
#446 โ€“ Ed Barnhart: Maya, Aztec, Inca, and Lost Civilizations of South America

that costs that much and then run it so infrequently? It feels completely wasteful when, instead, you could train the model that's capable of doing the 99.9% of queries, and then have a way of running it longer at inference time for those few people that really, really want max intelligence.
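One simple form of the test-time compute scaling described above is best-of-n sampling: keep the model fixed, draw more samples for hard queries, and let a verifier pick the best one. The sketch below is purely illustrative; `generate` and `score` are stand-ins for a real model and verifier, not any actual API.

```python
# Minimal best-of-n sketch: spend more inference-time flops (larger n) on a
# fixed-size model instead of training a bigger one. `generate` and `score`
# are dummy stand-ins for a model and a verifier.
from typing import Callable

def best_of_n(generate: Callable[[], str],
              score: Callable[[str], float],
              n: int) -> str:
    """Sample n candidate answers and return the one the verifier
    scores highest; n is the knob for inference-time compute."""
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=score)

if __name__ == "__main__":
    # Dummy task: answers are guesses at the number 7; the "verifier"
    # rewards closeness to 7. More samples -> better best answer.
    guesses = iter(["5", "9", "7.1", "2"])
    answer = best_of_n(lambda: next(guesses),
                       lambda a: -abs(float(a) - 7),
                       n=4)
    print(answer)
```

This is the economics in the quote: the cheap path (small n) serves the 99.9% of queries, and the same model with a much larger n serves the few queries that want max intelligence.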
