Nick Talken
And then they were able to go there and say, hey, out of 10,000 or 100,000 simulations, I think that these two experiments are the most likely to be successful.
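To make that concrete, here is a minimal sketch of that kind of selection step: given simulated outcomes for a set of candidate experiments, count how often each one clears a success threshold and keep the top two. The candidate count, the 0.7 threshold, and the placeholder data are all assumptions for illustration, not the actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_sims = 50, 100_000

# simulated_yield[i, j] = outcome of candidate experiment i in simulation j.
# Placeholder random data here; in practice these samples would come from a
# model of the chemistry fit to the historical experiments.
simulated_yield = rng.normal(loc=rng.uniform(0.2, 0.8, size=(n_candidates, 1)),
                             scale=0.1,
                             size=(n_candidates, n_sims))

# Fraction of simulations in which each candidate clears a success threshold
# (0.7 is an arbitrary illustrative cutoff).
success_prob = (simulated_yield > 0.7).mean(axis=1)

# The two experiments most likely to be successful.
top_two = np.argsort(success_prob)[-2:][::-1]
print("Most promising candidates:", top_two, success_prob[top_two])
```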
And that's where the value comes in.
So it's really about finding the right tool for the right job.
And I think in today's world, if the only tool you have is an LLM, everything looks like a nail.
And that's not necessarily the case.
Exactly.
There's a transformer sitting inside of it, but there's also a neural net that sits on top of that. And there are other ways we vectorize it, which is really the question of how you encode a chemical structure into a latent space that represents what that chemistry is.
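As a rough illustration of what encoding a chemical structure into a vector can look like, here is a sketch that uses a classical Morgan fingerprint from RDKit as a simple stand-in for the learned transformer embedding being described; the function name, bit size, and example molecules are illustrative choices, not the company's actual encoder.

```python
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def encode_structure(smiles: str, n_bits: int = 2048) -> np.ndarray:
    """Turn a SMILES string into a fixed-length vector.

    Uses a Morgan (circular) fingerprint as a classical stand-in for the
    learned latent embedding described in the conversation; a transformer-based
    encoder would replace this step in practice.
    """
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    vec = np.zeros((n_bits,))
    DataStructs.ConvertToNumpyArray(fp, vec)  # fill the array in place
    return vec

# Example: two structures mapped into the same vector space.
aspirin = encode_structure("CC(=O)OC1=CC=CC=C1C(=O)O")
caffeine = encode_structure("CN1C=NC2=C1C(=O)N(C)C(=O)N2C")
print(aspirin.shape, int(aspirin.sum()), int(caffeine.sum()))
```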
But honestly, that's a huge data set, 15 million chemical structures.
That's essentially the entire body of public knowledge.
That company we were just talking about, the one that went from three months to two days? They had 30 historical experiments that we were basing those simulated recommendations on.
So this is small data that you have to be able to take and still generate credible simulated experiments from.
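One hedged way to picture working from only 30 experiments: fit a probabilistic surrogate (a Gaussian process here, purely as an illustrative choice) to the historical data, then draw many posterior samples for proposed conditions so the simulated outcomes carry honest uncertainty rather than overconfident point estimates. Every variable name and number below is an assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical small-data setup: 30 historical experiments, each described by a
# few process parameters (e.g., temperature, concentration, time), with a
# measured outcome such as yield.
rng = np.random.default_rng(1)
X_hist = rng.uniform(0.0, 1.0, size=(30, 3))           # 30 experiments, 3 knobs
y_hist = 0.6 * X_hist[:, 0] - 0.2 * X_hist[:, 1] + rng.normal(0, 0.05, 30)

# Fit a Gaussian-process surrogate; with only 30 points, the posterior
# uncertainty is what keeps the simulated experiments credible.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(0.01),
                              normalize_y=True).fit(X_hist, y_hist)

# Propose candidate experiments and draw many simulated outcomes from the
# posterior; these samples would feed the ranking step sketched earlier.
X_candidates = rng.uniform(0.0, 1.0, size=(50, 3))
simulated = gp.sample_y(X_candidates, n_samples=10_000, random_state=0)
success_prob = (simulated > 0.35).mean(axis=1)
print("Best candidates:", np.argsort(success_prob)[-2:][::-1])
```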
Yeah, so we use all the foundational LLMs out there.
We're not in the business of building foundational LLMs, but we are in the business of building chemistry domain-specific foundational models.
And so that's, again, where you have to pair these things up.
And I think a lot of the reason this problem hasn't been tackled is that there hasn't been a concerted effort to go solve it for one domain.
I think there have been a lot of people who are like, hey, let's go try to solve AI for science in general.