Nick Talken
And that's really exciting to me because job growth is great and we want smart scientists to be located here in the U.S.
Generally, there are two things.
So if we go back to kind of that scientific method I was talking about before, you've got one iteration of how do you go from an idea to something where you have data.
And generally, it's a failure.
So you do multiple iterations there.
And so our goal is to reduce the time per iteration and collapse the number of iterations down to as few as possible.
And if we can do those two things, then science gets faster.
And so in that example, I think each one of their iterations previous to Albert maybe took three or four days. And generally they'd run ten iterations or so, so it's a couple-months-long project they'd have to run. That's the old way of doing it. With Albert, because we're able to collaborate more easily, they can collapse that down to maybe a two-day iteration, or maybe a one-day iteration. And then, because we can layer the AI on top of their historical data, we can start to recommend the experiments that the scientists can go run that give the highest information density per experiment. That sounds a little weird maybe, but that's the point of science: when you run something, you want to be on the bleeding edge. How do you get the most information to inform the next experiment, learn as much as possible? Yeah, exactly. And so if you do that, then you can take two iterations instead of ten. And if you collapse the time per iteration and go from ten to two, now you're at two days, and you've launched a product, or you've gotten a product that is now commercially viable, which is super, super exciting. Wow.
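To make the "highest information density per experiment" idea concrete, here is a minimal active-learning sketch in the spirit of what such a recommender might do: fit a surrogate model to historical experiments, then propose the candidate formulation the model is most uncertain about. This is an illustration with invented names and toy data, not Albert's actual method.

```python
# Minimal active-learning sketch: pick the next experiment where the
# surrogate model is most uncertain, i.e. where one run teaches the most.
# All names and data are illustrative, not Albert's actual pipeline.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Historical experiments: formulation parameters -> measured property
X_history = rng.uniform(0, 1, size=(40, 3))          # e.g. ingredient ratios
y_history = (X_history ** 2).sum(axis=1) + rng.normal(0, 0.05, 40)

# Candidate formulations the scientist *could* run next
X_candidates = rng.uniform(0, 1, size=(10_000, 3))

# Fit a surrogate model on the historical data
model = GaussianProcessRegressor(normalize_y=True)
model.fit(X_history, y_history)

# Predictive standard deviation is a simple proxy for information gain:
# the candidate with the widest uncertainty is the most informative to run.
mean, std = model.predict(X_candidates, return_std=True)
next_experiment = X_candidates[np.argmax(std)]

print("Recommended next formulation:", np.round(next_experiment, 3))
```

In practice a production system would likely use a richer acquisition function and domain constraints, but this loop of fit, score candidates, run the most informative experiment, and refit is the basic mechanism for collapsing ten iterations down to two.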
Yeah. So what happens is, basically, that's where the machine learning comes in.
I think this is important, especially for the technical audience that you have.
LLMs are not the solve for every problem out there.
I think they're really good at discovery.
They're really good at exposing information to broad audiences.
They're good at helping give context and reason through what may be complex scientific problems.
But when you want to go to an optimization, where you think you have something that's pretty close, there are actually much better tools than an LLM, like high-throughput simulation.
And so in this case, with that customer, they were running hundreds of thousands of simulations in minutes with our software before they went into the lab.
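As a rough, hypothetical illustration of what "hundreds of thousands of simulations in minutes" can look like, the sketch below scores a large batch of candidate formulations with a cheap, vectorized surrogate and keeps only the top few for the lab; the property function and parameter ranges are invented for the example.

```python
# Hypothetical high-throughput screen: evaluate a cheap surrogate over
# hundreds of thousands of candidate formulations, then send only the
# top few to the lab. Purely illustrative; not Albert's actual simulator.
import numpy as np

rng = np.random.default_rng(1)

# 500,000 candidate formulations, each defined by 5 parameters
candidates = rng.uniform(0, 1, size=(500_000, 5))

def predicted_property(params: np.ndarray) -> np.ndarray:
    """Stand-in surrogate model: fast, vectorized property prediction."""
    # Toy objective: reward balanced parameters, lightly favor the first one
    return -np.abs(params - 0.5).sum(axis=1) + 0.1 * params[:, 0]

scores = predicted_property(candidates)          # runs in seconds, not days
top_k = np.argsort(scores)[-10:][::-1]           # 10 best candidates

for rank, idx in enumerate(top_k, start=1):
    print(f"#{rank}: score={scores[idx]:.3f}, params={np.round(candidates[idx], 3)}")
```

The point of a screen like this is simply to narrow a huge design space down to a handful of promising candidates before any lab time is spent on them.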