
Terence Tao

👤 Speaker
2047 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#472 – Terence Tao: Hardest Problems in Mathematics, Physics & the Future of AI

So I think, in 10 years, we will have it.

many more, much closer results.

It may not have the whole thing.

Yeah, so twin primes is somewhat close.

Riemann hypothesis, I have no idea, I mean, it would have to happen by accident, I think.

Right, yeah.

It states that the primes are, sort of, viewed multiplicatively.

Like for questions only involving multiplication, no addition, the primes really do behave as randomly as you could hope.

So there's a phenomenon in probability called square root cancellation: if you want to poll, say, America on some issue, and you ask only one or two voters, you may have sampled a bad sample, and then you get a really imprecise measurement of the full average. But if you sample more and more people, the accuracy gets better and better, and the accuracy improves like the square root of the number of people you sample.

So if you sample a thousand people, you can get like a 2-3% margin of error.
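
The scaling described here is easy to see in a quick simulation (the population value and sample sizes below are made up purely for illustration):

```python
import random

# Toy illustration of square root cancellation: the average error of a
# poll shrinks like 1/sqrt(n) as the sample size n grows.
# TRUE_FRACTION is a hypothetical population value for the simulation.
random.seed(0)
TRUE_FRACTION = 0.52

def mean_poll_error(n, trials=2000):
    """Average absolute error of a size-n poll, over many repeated polls."""
    total_err = 0.0
    for _ in range(trials):
        yes = sum(random.random() < TRUE_FRACTION for _ in range(n))
        total_err += abs(yes / n - TRUE_FRACTION)
    return total_err / trials

err_small = mean_poll_error(10)
err_large = mean_poll_error(1000)
# Sampling 100x more people should cut the error by roughly sqrt(100) = 10x.
print(err_small, err_large, err_small / err_large)
```

Running this, the ratio of the two errors comes out close to 10, matching the square-root law, and a poll of 1000 lands in the few-percent range mentioned above.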

So in the same sense, if you measure the primes in a certain multiplicative sense, there's a certain type of statistic you can measure, it's called the Riemann zeta function, and it fluctuates up and down.

But in some sense, as you keep averaging more and more, if you sample more and more, the fluctuations should go down as if they were random.

And there's a very precise way to quantify that, and the Riemann hypothesis is a very elegant way that captures this.
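
The usual precise formulation (not spelled out in the conversation, but standard) is that all nontrivial zeros of the zeta function lie on the critical line, which is equivalent to a square-root-sized error term for the prime counting function:

```latex
% Riemann hypothesis: every nontrivial zero of
%   \zeta(s) = \sum_{n \ge 1} n^{-s}
% satisfies \Re(s) = 1/2.  Equivalently, for all x \ge 2,
\[
  \pi(x) = \operatorname{Li}(x) + O\!\left(\sqrt{x}\,\log x\right),
\]
% i.e. the primes fluctuate around their expected density by no more
% than a genuinely random set would -- the square root cancellation
% described above.
```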

But, as with many other areas in mathematics, we have very few tools to show that something genuinely behaves really randomly.

And this is asking not just that it's a little bit random, but that it behaves as randomly as an actually random set, with this square root cancellation.

And we know, because of things related to the parity problem, actually, that most of our usual techniques cannot hope to settle this question.

The proof has to come out of left field.

Yeah, but as for what that is, no one has any serious proposal.