Terence Tao
Again, that's been automated.
Even a lot of undergraduate mathematics was automated before AI. Wolfram Alpha, for example, is not a language model, but it can solve a lot of undergraduate-level math tasks.
So on the computational side, verifying routine things: say, here's a problem in partial differential equations.
Could you solve it using any of the 20 standard techniques?
And the AI will say, yes, I've tried all 20. Here are the 100 different permutations I tried, and here are my results.
And that type of thing, I think, will work very well.
That type of scaling, too: once you solve one problem, you make the AI attack 100 adjacent problems.
As for the things that humans still do: where the AI really struggles right now is knowing when it's made a wrong turn.
It can say, oh, I'm going to solve this problem.
I'm going to split up this one into these two cases.
I'm going to try this technique.
And sometimes if you're lucky, it's a simple problem.
It's the right technique and you solve the problem.
And sometimes it will have a problem and propose an approach which is just complete nonsense,
but it looks like a proof.
This is one annoying thing about LLM-generated mathematics.
We've had human-generated mathematics that's very low quality, like submissions of people who don't have the formal training and so forth.
But if a human proof is bad, you can tell it's bad pretty quickly.