César Ramírez Sarmiento
Yeah, I mean, a lot of people talk about the guardrails that we need for different AI technologies.
So the risk with all AI technologies is their dual use.
So you can use them for benefits or you can use them for harmful impact.
So viruses are composed primarily of proteins, and they infect our cells. With all of these AI architectures for protein design, you can imagine that somebody could take a given virus and then use these AI models to improve its transmissibility or its infection rate.
So those are hard decisions to make. But fortunately, there have been different approaches from governments and also from companies to assess the risk of these models with different evaluations, and then to make sense of what the risk will be when releasing these models to the public.
So a bunch of scientists, including myself and other very well-known scientists in the realm of artificial intelligence for everything about biology, not only protein design, signed some guidelines called Responsible AI for Biodesign. They indicate that we will make significant efforts to identify risks in the different models that we develop for different types of biodesign using artificial intelligence, and then flag those risks whenever we release the models, or do what people call unlearning, which is trying to make the models somehow not retain this harmful potential when you release them to the public.
For now, you still need an expert scientist, because these models are not very easy to use. But if you combine them with these large language models that allow for having a conversation with your computer, you no longer need the expertise to create something. So the risk there is that any person can, in principle, ask one of these language models: can you please create a very harmful biological agent?
The UK, the US, and also the European Union have AI safety institutes.
And what they do is evaluate the risk of using these different technologies. They have different thresholds for determining whether something is very high risk and we have to do something about it, or very low risk, in which case we still keep an eye on it, but without heavy oversight.
I mean, I think, yeah, there is an opportunity for other countries to lead on this. There are efforts, I know, in Europe.