Jyunmi
So this is just that first stage.
What that means is we still need human expertise, animal studies, clinical trials, and all of the normal steps.
Third, the model has a training data bias.
In some tests, it kept spitting out designs that looked like a very common protein family.
Basically, it was leaning on its comfort zone.
And the team had to rebalance the data to reduce that effect.
But this kind of bias never disappears completely.
Fourth, the experiments are impressive but still limited.
They were done in collaborating labs that know the team and the tools.
We've not seen a random lab on the other side of the world download the code and run their own pipeline for their own targets and report similar successes.
So it's still fragile and young, but very promising.
So let's zoom out and take a look at how this fits into the larger AI and science picture.
So at the micro level, inside drug discovery, this is another attack on a very specific pain point, finding first generation binders for hard disease targets.
At the meso level, across biology and chemistry, it joins a wave of models that do not just read the data, they propose new things.
I believe in previous weeks we highlighted materials science work for fusion reactors and magnetic fields, those kinds of rare earth materials, and testing for other materials.
And at the macro level, this points to a shift in how we're going to be doing science.
So for decades, a lot of lab work's been brute force, right?
Try a big library, screen everything, then narrow it down.
So the pattern here is a little different.
We ask the model what is the most promising thing to test, then spend the lab time on that shorter list.
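The shift described above, from exhaustive screening to model-ranked shortlists, can be sketched in a few lines. This is a toy illustration, not anyone's actual pipeline: `score_binder` is a hypothetical stand-in for whatever learned model scores candidate designs, and the candidate strings are made up.

```python
# Toy sketch of model-guided screening vs. brute-force screening.
# score_binder is a hypothetical stand-in for a learned scoring model;
# here it just counts a motif so the example is runnable.

def score_binder(candidate: str) -> float:
    """Return a score in [0, 1]; higher means more promising (toy heuristic)."""
    return candidate.count("A") / max(len(candidate), 1)

def shortlist(library: list[str], k: int = 3) -> list[str]:
    """Rank the whole library by model score, keep only the top k for the lab."""
    return sorted(library, key=score_binder, reverse=True)[:k]

# Brute force would send all 5 candidates to the lab;
# the model-guided pattern sends only the top-ranked few.
library = ["AAGH", "GGGG", "AAAA", "AGAG", "GGAG"]
print(shortlist(library, k=2))
```

The lab then spends its time only on the shortlist instead of the whole library, which is the efficiency gain the pattern is after.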