Ege Erdil
having novel innovations that are very useful for unlocking innovations in the future.
So that might be introducing some novel way of thinking about a problem.
So maybe a good example is in mathematics, where we have these reasoning models that are extremely good at solving very short-horizon math problems. Maybe not extremely good, but certainly better than I can, and better than maybe most undergrads can.
And so they can do that very well, but they're not very good at coming up with novel conceptual schemes that are useful for making progress in mathematics.
So, you know, it's able to solve these problems that you can neatly excise out of some very messy context, and it's able to make a lot of progress there. But within some much messier context, it's not very good at figuring out which directions are especially useful to build on or make incremental progress on, such that they enable you to have a big innovation later down the line.
So when it comes to thinking about both this larger context and maybe much longer-horizon, much fuzzier things that you're optimizing for, I think it's much worse at those types of things.
But that's a long time. One mathematician might have been able to do a bunch of work over that time, and they would have produced orders of magnitude fewer tokens on math.
That's right, that's right.
I think it's useful for us to explain a very important framework for our thinking about what AI is good at and what AI is lagging in, which is this idea of Moravec's paradox: things that seem very hard for humans, AI systems tend to make much faster progress on, whereas things that look much easier to us, AI systems tend to struggle with, or are often totally incapable of doing.
And so, you know, this kind of abstract reasoning: playing chess, playing Go, maybe playing Jeopardy, doing advanced math and solving math problems.