Murtaza Doctor
So it actually executes.
So if you think about an AI agent, you know, all it is is there is a large language model.
It has access to a variety of tools behind the scenes.
It's triggered by a series of events, or what we like to call loops.
And it is capable of planning, executing, and actually self-correcting.
And the last part is what actually changes everything.
This is where we talk about how you can drive the loop. First you perceive, where you're gathering context: your logs, your APIs, your metrics, your telemetry.
And then you're starting to reason about it.
This is where the LLM comes in.
You use an LLM to actually evaluate all the options you have.
And then you want to act on it.
And this is where you want to call your tools, through APIs, through scripts, through workflows.
And then the most important part is you actually are learning.
You are adjusting based on your feedback.
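The perceive, reason, act, learn loop described above could be sketched roughly as follows. This is a minimal illustration, not a real agent framework; every function name here (`perceive`, `reason`, `act`, `learn`, `agent_loop`) is hypothetical.

```python
# Hypothetical sketch of the agent loop: perceive -> reason -> act -> learn.
# None of these names correspond to a real library API.

def perceive(sources):
    """Gather context: logs, API responses, metrics, telemetry."""
    return {name: fetch() for name, fetch in sources.items()}

def reason(llm, state):
    """Ask the LLM to evaluate the options given the gathered context."""
    return llm(f"Given this state, pick the next action: {state}")

def act(tools, action):
    """Invoke the chosen tool: an API call, a script, a workflow."""
    return tools[action]()

def learn(history, action, result):
    """Record feedback so later iterations can adjust."""
    history.append((action, result))

def agent_loop(llm, sources, tools, max_steps=3):
    history = []
    for _ in range(max_steps):
        context = perceive(sources)
        action = reason(llm, {"context": context, "history": history})
        result = act(tools, action)
        learn(history, action, result)
        if result.get("resolved"):
            break
    return history
```

The key point from the conversation is the last step: because each action's result feeds back into the next reasoning call, the agent can self-correct instead of blindly replaying a fixed runbook.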
And just to give you an example, Michael, we had an incident internally within my SRE teams where an agent actually detected elevated latency.
And what it actually did behind the scenes is it expanded all of the ThousandEyes synthetic tests which we actually run.
It pulled the BGP routing data, which is the Border Gateway Protocol routing data.
It fetched a variety of config diffs via MCP, the Model Context Protocol.
It correlated everything.
And not only that, it then went ahead and drafted a diagnostic note before even our SRE or any human could come on call and even open their Slack up.
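The incident workflow just described, gathering evidence from several sources, correlating it, and drafting a diagnostic note, could be sketched like this. This is purely illustrative: none of these function names correspond to real ThousandEyes, BGP, or MCP client APIs, and the correlation logic stands in for whatever the agent actually does.

```python
# Hypothetical sketch of the latency-incident diagnosis described above.
# The three callables stand in for real data-gathering integrations
# (ThousandEyes synthetic tests, BGP routing data, config diffs via MCP).

def diagnose_latency(run_synthetic_tests, fetch_bgp_routes, fetch_config_diffs):
    evidence = {
        "synthetic_tests": run_synthetic_tests(),  # expanded synthetic tests
        "bgp_routes": fetch_bgp_routes(),          # Border Gateway Protocol data
        "config_diffs": fetch_config_diffs(),      # recent config changes
    }
    # Correlate: flag any evidence source that reports an anomaly.
    suspects = [name for name, data in evidence.items() if data.get("anomaly")]
    note = (
        "Elevated latency detected. "
        f"Correlated sources: {', '.join(suspects) if suspects else 'none'}."
    )
    return {"evidence": evidence, "draft_note": note}
```

The payoff is the draft note: by the time a human opens Slack, the cross-source correlation has already been done and written up.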