Jyunmi
And that changes who or what decides what to test next.
There's also quite an economic story underneath this, right?
So tools like BoltzGen raise the question: if strong open models for molecule design keep arriving, what does that do to companies built on selling this as a proprietary service?
Because there have been quite a few announcements over the last few months from companies whose whole proposition is to build AI scientists and automate this design-and-test part of the pipeline.
And what does this do for smaller labs that suddenly get access to this kind of design power?
We should also take a look at possible friction and risk, right?
So there's data and bias: whichever targets were well represented in the training data will get better AI help.
Targets tied to neglected diseases may lag behind, which could amplify existing health gaps.
Then reproducibility: the code being open is great, but reproducing the wet-lab success needs money, time, and specific skills, and not every lab has those.
And then the human skills: if AI systems take over more of the messy trial and error, we might train fewer people in that hands-on intuition.
Maybe scientists spend more time asking good questions and stitching results together, and maybe they lose some of the feel for how experiments fail.
And then safety and governance, of course.
The same tools that design helpful binders could, in theory, be steered towards harmful targets.
Right now, most guardrails are social and institutional: norms in biology, review boards, legal barriers. The technical guardrails inside the AI itself are still pretty light.
So the question I'd leave you with is: if tools like this get good at deciding what to test next, what do you think the core job of a human scientist should be 10 years from now?