Lenny Eusebi writes, Some version of this take is basically presented in every thread.
The point, though, with this discovery is that the model demonstrated the ability to take a set of known facts about the science and, using reasoning, synthesize them into a novel hypothesis that proved to be correct.
If you go look at these threads where inevitably this critique comes out, there are scientists who follow up, pointing out that there's really no such thing as scientific discovery created from whole cloth.
Everything is built on the synthesis of existing ideas.
Rob S. follows Lenny's post with, yes, that's how science is done.
VC Hemant Mohapatra writes, I've always believed new knowledge can be, one, built on existing knowledge by connecting the dots in unique ways, or two, created as pure de novo knowledge through hypotheses, experiments, etc., that might go against current thinking.
LLMs are likely great at one, and that's where perhaps a vast majority of the net new knowledge lies.
Even if LLMs as they stand today never get to number two, their impact on research will be tremendous.
Now, what makes this story notable to me, even outside just the profound implications of AI actually being able to help us cure cancer, is that it is not an isolated story.
For those who have been paying attention closely, and of course that's hard considering the absolute barrage of new models and crazy bubble talk and all those things going on, there have been a lot of these really subtle indicators that some big barrier has been surpassed.
OpenAI's Kevin Weil, who used to be their chief product officer but is now their VP of science, about a week ago tweeted, GPT-5 crossed a major threshold.
Over the last two months, we've heard repeated examples of scientists successfully directing GPT-5 to do novel research in math, physics, biology, computer science, and more.
Now he clarified, I'm not claiming GPT-5 is ready to prove the Riemann hypothesis.
It's more at the lemma stage today: when guided by an expert, it can do bounded chunks of novel science.
Things that would maybe have taken a professor or her postdoc a few days or a week to work through.
But this is the beginning of accelerating science because if each path takes a week, you can only explore so many of them.
If it takes 20 minutes with ChatGPT Pro and you can run them in parallel, suddenly you can explore far more.
And remember, the model you're using today is the worst it'll ever be for the rest of your life.
The idea that ChatGPT could do novel science sounded crazy a year ago, but here we are.