
Dwarkesh Patel

👤 Speaker
14445 total appearances


Podcast Appearances

Dwarkesh Podcast
Andrej Karpathy — AGI is still a decade away

And I'm curious why that was.

And they just couldn't internalize that you had your own?

The reason I think this question is so interesting is that the main story people have about AI exploding and getting to superintelligence pretty rapidly is AI automating AI engineering and AI research.

And so they'll look at the fact that you can have Claude Code make entire CRUD applications from scratch and think: if you had that same capability inside OpenAI and DeepMind, just imagine a thousand of you, or a million of you, in parallel finding little architectural tweaks.

And so it's quite interesting to hear you say that this is the thing they're sort of asymmetrically worse at.

And it's quite relevant to forecasting whether an AI 2027-type explosion is likely to happen anytime soon.

I think that's a good way of putting it.

Very naive question, but the architectural tweaks that you're adding to NanoChat, they're in a paper somewhere, right?

They might even be in a repo somewhere.

So is it surprising that they aren't able to integrate that, such that whenever you say "add RoPE embeddings" or something, they do it in the wrong way?
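As a point of reference for the RoPE (rotary position embedding) technique mentioned here, the core operation is small: each pair of channels in a token's vector is rotated by an angle proportional to the token's position. This is an illustrative sketch in plain Python, not code from nanochat or any particular repo:

```python
import math

def rotary_embed(x, base=10000.0):
    """Apply rotary position embeddings (RoPE) to a sequence of vectors.

    x: list of token vectors (each a list of floats, even length).
    Channel pair j of the token at position `pos` is rotated by the
    angle pos * base^(-j/dim), so position is encoded as a rotation
    rather than an additive offset.
    """
    out = []
    for pos, vec in enumerate(x):
        dim = len(vec)
        assert dim % 2 == 0, "RoPE pairs up channels, so dim must be even"
        rotated = [0.0] * dim
        for j in range(0, dim, 2):
            theta = pos * base ** (-j / dim)  # lower pairs rotate faster
            c, s = math.cos(theta), math.sin(theta)
            a, b = vec[j], vec[j + 1]
            # Standard 2D rotation of the (a, b) channel pair
            rotated[j] = a * c - b * s
            rotated[j + 1] = a * s + b * c
        out.append(rotated)
    return out
```

Because each step is a pure rotation, vector norms are preserved, and the token at position 0 passes through unchanged; the subtlety in real integrations is typically where in the attention computation the rotation is applied, not the rotation itself.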

Yeah.

Actually, here's another reason why this is really interesting.

Through the history of programming there have been many productivity improvements, compilers, linting, better programming languages, and so on, which have increased programmer productivity but have not led to an explosion.

So that sounds very much like tab autocomplete.

And this other category is just the automation of the programmer.

And so it's interesting that you see this more in the category of the historical analogies, better compilers or something.

One of the big problems with RL is that it's incredibly information sparse.

Labelbox can help you with this by increasing the amount of information that your agent gets to learn from with every single episode.

For example, one of their customers wanted to train a coding agent.