Gili Raanan
Podcast Appearances
And some of those attacks used very, very simple measures, but the outcome was unbelievable. Could you give an example? If you want to take down a country and bring it to its knees, using conventional methods that takes a huge amount of resources, time, and coordinated effort by a lot of people. If you do that through cybersecurity,
You can use very simple ways like taking down name servers for their power stations, and you leave that country without power for a few days. It's not a highly sophisticated attack. You don't have to have an army of people doing that. And it can happen to any country any day.
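As a rough illustration of how thin that dependency can be, here is a minimal Python sketch, for defensive monitoring only and with hypothetical hostnames, that checks whether the DNS names a utility depends on still resolve. If the name servers are down, every one of these checks fails at once.

```python
# Minimal sketch (defensive monitoring only): check whether the DNS names a
# hypothetical utility depends on still resolve. The hostnames are made up;
# the point is how small the dependency surface is that the speaker describes.
import socket

CRITICAL_HOSTS = [
    "scada.example-utility.test",  # hypothetical control-system endpoint
    "ops.example-utility.test",    # hypothetical operations portal
]

def resolves(host: str) -> bool:
    """Return True if the host's name servers still answer with an address."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

if __name__ == "__main__":
    for host in CRITICAL_HOSTS:
        status = "OK" if resolves(host) else "NOT RESOLVING"
        print(f"{host}: {status}")
```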
Now, think about an AI agent that's able to produce thousands of scenarios like that and execute all of them simultaneously. That's the technology of today. This is not a future thing.
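To make the scale point concrete, the sketch below shows how cheaply an orchestrator can run a thousand independent tasks concurrently in Python. The "scenarios" here are inert placeholders, not attacks; the example only illustrates the concurrency the speaker is describing.

```python
# Sketch of the scale point only: launching thousands of independent tasks
# concurrently is cheap. Each "scenario" here is an inert placeholder.
import asyncio

async def run_scenario(i: int) -> str:
    await asyncio.sleep(0.01)  # stand-in for whatever work a scenario involves
    return f"scenario {i} finished"

async def main(n: int = 1000) -> None:
    results = await asyncio.gather(*(run_scenario(i) for i in range(n)))
    print(f"{len(results)} scenarios executed concurrently")

if __name__ == "__main__":
    asyncio.run(main())
```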
The deduction model will only ever be as good as the data it's provided. Take an analogy from the drug development industry. If you provide enough information about... a certain disease, you can build an LLM that takes that patient data, that takes a lot of research, and offers recipes for drugs that can, for instance, cure diseases that are considered incurable. Let's say Alzheimer's.
You can use the same LLM and ask it for recipes for drugs that can kill people. It's the same system. It's the same method. It's the same software. Now, think about an LLM that's fed data about the software infrastructure of a bank, and an AI agent that you use to ask questions like: what are the potential breaches in my infrastructure, and how should I protect myself? It's a very useful agent.
You can ask that same agent directly, or trick it into answering, the question: how would an attacker breach the defenses of that bank and take it offline? The same technologies that we use for defense are the technologies that will be used on offense. You've seen that even if you put a lot of guardrails into the model, there are many ways to bypass those guardrails.
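A toy illustration of why surface-level guardrails are brittle: the sketch below is an assumed keyword blocklist, not any vendor's actual safety system. It blocks the obvious phrasing while letting an innocuous-sounding paraphrase straight through.

```python
# Toy sketch of a surface-level guardrail: a keyword blocklist in front of a model.
# This is an assumption for illustration, not how any particular vendor's guardrails
# work; it only shows why filtering on phrasing is easy to sidestep.
BLOCKED_PHRASES = ["breach the defenses", "take the bank offline"]

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt is allowed through."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

print(naive_guardrail("How would an attacker breach the defenses of the bank?"))   # False: blocked
print(naive_guardrail("List weaknesses an auditor would flag before an incident."))  # True: slips through
```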
And that's when AI is in a walled garden. Now you have open-source models, and DeepSeek is just one example of it, where AI is not in a walled garden anymore. It's in the wild, and anyone can download the model and modify it.
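As a rough sketch of how low that barrier is, the snippet below uses the Hugging Face transformers library to pull an open-weight model and run it locally. The model id is only an illustrative example, and loading a release of this size in practice needs serious hardware and possibly extra loading options.

```python
# Minimal sketch: once a model is released with open weights, anyone can fetch it
# and run it locally, and nothing prevents further fine-tuning of that local copy.
# The model id below is only an example; exact ids and loading options vary by release.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1"  # illustrative open-weight release

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "Summarize common causes of DNS outages."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```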
AI will redefine cybersecurity. It will replace the old ways cybersecurity solutions were architected: the old approach of building rule-based systems and behavior-based systems so the good guys can fix misconfigurations or patch buggy software. Those days are gone.
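For context on what "rule-based" means here, this is a hypothetical sketch of the old style of detection: a hand-written rule over logs that a person has to author, tune, and maintain for every new attack pattern.

```python
# Sketch of the "old way" the speaker describes: a hand-written, rule-based check.
# The rule and log format are made up for illustration; the point is that every new
# attack pattern needs a human to write and maintain another rule like this one.
import re

FAILED_LOGIN_RULE = re.compile(r"failed login .* from (?P<ip>\d+\.\d+\.\d+\.\d+)")

def alert_on_failed_logins(log_lines, threshold=5):
    """Flag any source IP that appears in at least `threshold` failed-login lines."""
    counts = {}
    for line in log_lines:
        match = FAILED_LOGIN_RULE.search(line)
        if match:
            ip = match.group("ip")
            counts[ip] = counts.get(ip, 0) + 1
    return [ip for ip, n in counts.items() if n >= threshold]

sample = ["failed login for admin from 203.0.113.7"] * 6
print(alert_on_failed_logins(sample))  # ['203.0.113.7']
```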
And founders will have to create companies that are AI-first and AI-native just to have a chance of building lasting, important cybersecurity companies.
And that's the reason you see existing players, major players, like our own portfolio companies. You see the level of effort that a major cybersecurity platform like Wiz or Cyera or Island puts into AI, into understanding and learning AI and applying AI to their platform. That's not an additive capability to the product, that's