Dwarkesh
Dwarkesh Patel: Why is it that, if you look at different models, even ones released by totally different companies and trained on potentially non-overlapping datasets, it's actually crazy how similar LLMs are to each other?
Ilya Sutskever: Maybe the datasets are not as non-overlapping as it seems.
Dwarkesh Patel: But there's some sense that, even if an individual human might be less productive than a future AI, maybe there's something to the fact that human teams have more diversity than teams of AIs might have.
But how do we elicit meaningful diversity among AIs?
So I think just raising the temperature results in gibberish.
I think you want something more like how different scientists have different prejudices or different ideas.
How do you get that kind of diversity among AI agents?
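A minimal sketch of the contrast being drawn here, assuming a hypothetical `query_model` wrapper around some chat-completion API (not anything named in the conversation): raising the sampling temperature only adds token-level randomness, while giving each agent a different prior, here a different system prompt, aims at diversity of ideas rather than noise.

```python
# Illustrative sketch only: two ways one might try to get diversity among AI agents.
# `query_model` is a hypothetical stand-in for any chat-completion client.

from typing import List

def query_model(system_prompt: str, user_prompt: str, temperature: float) -> str:
    """Hypothetical wrapper around an LLM chat API; replace with a real client."""
    raise NotImplementedError

def diversity_by_temperature(question: str, n: int = 4) -> List[str]:
    # Token-level randomness: same prior, higher temperature.
    # Past a point this tends toward incoherent text rather than new ideas.
    return [query_model("You are a scientist.", question, temperature=1.5)
            for _ in range(n)]

def diversity_by_priors(question: str) -> List[str]:
    # Idea-level diversity: modest temperature, but each agent is seeded with a
    # different "prejudice" about how to approach the problem.
    priors = [
        "You are a scientist who trusts only scaling experiments.",
        "You are a scientist who prioritizes theoretical guarantees.",
        "You are a scientist who reasons from biological analogies.",
        "You are a scientist who looks for failure cases first.",
    ]
    return [query_model(p, question, temperature=0.3) for p in priors]
```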
Yeah.
And then I've heard you hint in the past about self-play as a way to either get data or match agents to other agents with equivalent intelligence to kick off learning, right?
How should we think about why there are no public proposals of this kind of approach working with LLMs?
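One way to read the matchmaking idea in that question is the standard self-play recipe of pairing agents against opponents of roughly equal strength so that each game carries learning signal. The sketch below uses a plain Elo rating for that pairing; the agent pool, the rating window, and the `play_game` stub are illustrative assumptions, not anything described in the conversation.

```python
# Illustrative Elo-based self-play matchmaking sketch; not from the conversation.
import random

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0) -> tuple[float, float]:
    """Standard Elo update: score_a is 1.0 for a win, 0.5 for a draw, 0.0 for a loss."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

def pick_opponent(ratings: dict[int, float], agent: int, window: float = 100.0) -> int:
    """Prefer opponents within `window` Elo points, so games stay informative."""
    peers = [a for a, r in ratings.items()
             if a != agent and abs(r - ratings[agent]) <= window]
    if peers:
        return random.choice(peers)
    return random.choice([a for a in ratings if a != agent])

def play_game(a: int, b: int) -> float:
    # Placeholder for the actual environment or evaluation between two agents.
    return random.choice([0.0, 0.5, 1.0])

ratings = {i: 1200.0 for i in range(8)}
for _ in range(100):
    a = random.choice(list(ratings))
    b = pick_opponent(ratings, a)
    ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], play_game(a, b))
```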
Yeah.
Final question.
What is research taste?
You're obviously the person in the world who is considered to have the best taste in doing research in AI. You were a co-author on many of the biggest things that have happened in the history of deep learning, from AlexNet to GPT-3 and so on.
How do you characterize how you come up with these ideas?
All right.
We'll leave it there.
Thank you so much.