
Andrej Karpathy

👤 Speaker
3419 total appearances


Podcast Appearances

Dwarkesh Podcast
Andrej Karpathy — AGI is still a decade away

Okay, well, take the adversarial example, put it in the training set of the LLM judge, and say: this is not 100%, this is 0%.


You can do this.


But every time you do this, you get a new LLM and it still has adversarial examples.


There's an infinity of adversarial examples.


And I think if you iterate this a few times, it'll probably get harder and harder to find adversarial examples.
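The iteration being described can be sketched in toy form: find an answer the judge wrongly scores as perfect, hard-label it 0% in the judge's training set, "retrain", and repeat. Everything here (the keyword-matching judge, the candidate pool, blocklist-style "retraining") is an invented stand-in; real LLM judges are trained models, not lookup tables.

```python
def make_judge(known_bad):
    """A naive judge: scores 100 if the answer contains a 'good' keyword,
    unless that exact answer was previously labeled 0% in training."""
    def judge(answer):
        if answer in known_bad:
            return 0
        return 100 if "correct" in answer else 0
    return judge

def find_adversarial(judge, candidates, truly_good):
    """Search for an answer the judge scores 100 but is actually bad."""
    for a in candidates:
        if judge(a) == 100 and a not in truly_good:
            return a
    return None

truly_good = {"the correct proof"}
candidates = ["gibberish correct gibberish",
              "correct-sounding nonsense",
              "the correct proof"]

training_set = set()          # answers hard-labeled as 0%
for step in range(3):
    judge = make_judge(training_set)      # "retrain" the judge
    adv = find_adversarial(judge, candidates, truly_good)
    if adv is None:
        break                 # no adversarial example left in this pool
    training_set.add(adv)     # "this is not 100%, this is 0%"

judge = make_judge(training_set)
print([judge(a) for a in candidates])  # [0, 0, 100]
```

Note how the loop mirrors the point above: each retrained judge is a new model, and until the pool is exhausted it still has an adversarial example; in the real setting the pool is effectively unbounded.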


But I'm not 100% sure because this thing has a trillion parameters or whatnot.


So I bet you the labs are trying.


I don't know, actually. I still think we need other ideas.


So there's this idea of reviewing a solution and coming up with synthetic examples such that, when you train on them, you get better, meta-learning it in some way.
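A hypothetical sketch of that "review a solution, synthesize training examples" idea, using a single reviewed arithmetic problem as the seed; the problem type, the number ranges, and the variant generator are all invented for illustration, and real proposals would synthesize examples with an LLM rather than a template:

```python
import random

def synthesize_variants(a, b, n=5, seed=0):
    """From one reviewed addition problem (a + b), emit n synthetic
    (question, answer) pairs exercising the same underlying skill."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n):
        x, y = rng.randint(1, 99), rng.randint(1, 99)
        variants.append((f"What is {x} + {y}?", x + y))
    return variants

# Synthetic training pairs derived from the reviewed problem 17 + 25
for question, answer in synthesize_variants(17, 25):
    print(question, "->", answer)
```

The hoped-for effect is that training on the synthesized pairs improves the skill the original solution exercised, not just recall of that one solution.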


And I think there are some papers on this that I'm starting to see pop up.


I'm only at the stage of reading abstracts, because a lot of these papers, you know, are just ideas.


Someone has to actually make it work at frontier-LLM-lab scale, in full generality.


Because when you see these papers pop up, they're just a little bit noisy, you know?


They're cool ideas, but I haven't actually seen anyone convincingly show that this is possible.


That said, the LLM labs are fairly closed, so who knows what they're doing now, but...


Yeah, I do think that we're missing some aspects there.


So as an example, when you're reading a book: I almost feel like currently, when LLMs are reading a book, what that means is we stretch out the sequence of text, and the model is predicting the next token and getting some knowledge from that.
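A minimal sketch of what "reading a book" means under next-token prediction: the text is flattened into one token sequence, and every position becomes a (context, next-token) training pair. The whitespace tokenizer and the fixed context window are simplifications; real models use subword tokenizers (e.g. BPE) and much longer contexts.

```python
def make_training_pairs(text, context_len=4):
    tokens = text.split()  # stand-in for a real subword tokenizer
    pairs = []
    for i in range(1, len(tokens)):
        context = tokens[max(0, i - context_len):i]
        pairs.append((context, tokens[i]))  # predict token i from its left context
    return pairs

book = "the model is predicting the next token and getting some knowledge"
pairs = make_training_pairs(book)
print(pairs[0])   # (['the'], 'model')
print(pairs[-1])  # (['token', 'and', 'getting', 'some'], 'knowledge')
```

All the model ever sees of the book is this stream of prediction targets, which is the contrast with human reading being drawn here.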


That's not really what humans do, right?