Lee Cronin
And what we did in this case is we produced electron density that maximizes the electrostatic potential, so the stickiness, but minimizes what we call the steric hindrance, so the repulsive overlaps. So, you know, make the perfect fit. And then we used a kind of ChatGPT-type model to turn that electron density into what's called a SMILES.
A SMILES string is a way of representing a molecule in letters. So the model just generates them, and then we bung that into the computer and it just makes it.
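To make the "molecule in letters" idea concrete for readers: SMILES (Simplified Molecular Input Line Entry System) encodes a molecule as a short text string. A minimal sketch in Python, using standard textbook molecules rather than anything from Cronin's dataset; the `looks_like_smiles` check is a deliberately rough illustration, not a real parser:

```python
# A few standard SMILES strings: each molecule written as a line of letters.
molecules = {
    "water":    "O",            # hydrogens are implicit: H2O
    "ethanol":  "CCO",          # a carbon-carbon-oxygen chain
    "benzene":  "c1ccccc1",     # lowercase = aromatic; the paired 1s close the ring
    "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
}

def looks_like_smiles(s: str) -> bool:
    """Very rough sanity check: balanced parentheses and paired ring-closure digits."""
    if s.count("(") != s.count(")"):
        return False
    # every ring-closure digit must appear an even number of times (opened and closed)
    return all(s.count(d) % 2 == 0 for d in "123456789")

for name, smi in molecules.items():
    print(f"{name:9s} {smi}  valid-ish={looks_like_smiles(smi)}")
```

Because the representation is just text, a generative model trained on strings can emit candidate molecules directly, which is what makes the "ChatGPT-type" framing apt.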
The robot that we've got can basically just do chemistry. Yeah. So we've kind of got this end-to-end drug discovery machine where you can say, oh, you want to bind to this active site? Here you go. I mean, it's a bit leaky and things kind of break, but it's a proof of principle.
Well, the hallucinations are really great in this case. Because in the case of a large language model, a hallucination doesn't just make everything up; it gives you an output that seems plausible, one the model arrives at probabilistically.
The problem with these electron density models is that it's very expensive to solve the Schrödinger equation as you go up to many heavy atoms and large molecules. So we wondered, if we trained the system on molecules of up to nine heavy atoms, whether it would go beyond nine. And it did. It started generating molecules with 12, no problem. They looked pretty good.
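The nine-versus-twelve claim hinges on counting heavy (non-hydrogen) atoms in a generated molecule. A hypothetical sketch of that check, using a crude regex over the organic subset of SMILES (a real pipeline would use a chemistry toolkit such as RDKit; the example molecules and `TRAIN_MAX` are illustrative, not Cronin's data):

```python
import re

# Tokens: two-letter halogens first, then bracket atoms like [NH4+],
# then single-letter organic-subset atoms (uppercase or aromatic lowercase).
ATOM = re.compile(r"Cl|Br|\[[^]]+\]|[BCNOPSFIbcnops]")

def heavy_atoms(smiles: str) -> int:
    """Count non-hydrogen atoms in a SMILES string (organic subset only)."""
    count = 0
    for tok in ATOM.findall(smiles):
        if tok.startswith("["):
            # bracket atom, e.g. [NH4+]; skip explicit hydrogens like [H] or [2H]
            if not tok[1:].lstrip("0123456789").startswith("H"):
                count += 1
        else:
            count += 1
    return count

TRAIN_MAX = 9  # suppose the model saw molecules with at most nine heavy atoms

generated = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O"]  # ethanol, phenol, aspirin
for smi in generated:
    n = heavy_atoms(smi)
    tag = "extrapolated" if n > TRAIN_MAX else "within training range"
    print(f"{smi}: {n} heavy atoms ({tag})")
```

Aspirin, at 13 heavy atoms, would be flagged as extrapolated under this cutoff, which is the kind of beyond-training-distribution output being described.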
And I was like, well, this hallucination I will take for free, thank you very much. Because this is a case where interpolation and extrapolation worked relatively well, and we were able to generate really good molecules. And then what we were able to do here, and this is a really good point, what I was trying to say earlier, is that we were able to generate new molecules...
from the known data set that would bind to the host. So a new guest would bind. Were these truly novel? Not really, because they were constrained by the host. Were they new to us? Yes. So I can concede that machine learning systems, artificial intelligence systems, can generate new entities, but how novel they are remains to be seen.