Andy Ellis
So if you're considering fully agentic development, you should consider a human in the loop, if it makes sense, when those risks necessitate it.
AI-generated meta-tagging may be a thing.
So if someone's going back and looking at code later, they know who has accountability for it, whether that's a person or the AI, or can tie it back to the product.
If a product owner is gonna be using AI, make them accountable for that code regardless of whether it's AI-generated or not.
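One lightweight way that accountability tagging could work, purely a hypothetical sketch (the trailer names and email here are made up, not any real tool's convention), is commit-message trailers naming both the AI tool and the accountable human, which later tooling can parse:

```python
# Hypothetical convention: commits containing AI-generated code carry
# trailers naming the tool and the accountable human owner, e.g.
#   AI-Assisted: copilot
#   Accountable: jane@example.com
# This sketch parses such trailers out of a raw commit message.

def parse_accountability(commit_message: str) -> dict:
    """Return {trailer_name: value} for recognized accountability trailers."""
    recognized = {"AI-Assisted", "Accountable"}
    found = {}
    for line in commit_message.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key in recognized and value:
            found[key] = value
    return found

msg = """Add retry logic to the billing client

AI-Assisted: copilot
Accountable: jane@example.com
"""
print(parse_accountability(msg))
# {'AI-Assisted': 'copilot', 'Accountable': 'jane@example.com'}
```

A CI hook could then refuse merges where an AI-assistance trailer is present but no human owner is named.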
The thing that I find interesting, though, in the AppSec or ProductSec world is that SBOM analysis and SCA and all that stuff become very important, because we don't know where this code is being taken from, or what it's being motivated and inspired by.
So, like, that can be very important.
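As a toy illustration of that SCA angle, assuming a CycloneDX-style SBOM with a `components` array and an entirely made-up internal allowlist, you could flag components of unknown origin for human review:

```python
import json

# Toy SCA-style check: flag SBOM components not on a (hypothetical)
# internal allowlist, so unknown-origin dependencies get human review.
ALLOWED = {"requests", "flask"}  # made-up allowlist for illustration

sbom = json.loads("""
{
  "components": [
    {"name": "requests", "version": "2.31.0"},
    {"name": "leftpad-ai", "version": "0.0.1"}
  ]
}
""")

def unknown_components(sbom: dict) -> list:
    """Return components whose name is not on the allowlist."""
    return [c for c in sbom.get("components", []) if c["name"] not in ALLOWED]

for c in unknown_components(sbom):
    print(f"review needed: {c['name']} {c['version']}")
```

Real SCA tooling does far more (license checks, known-vulnerability lookups by purl), but the principle of gating unknown provenance is the same.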
But at the end of the day, the company's got to decide what the risk tolerance is.
Some companies may choose to ban AI code from specific codebases and specific intellectual property, while other companies may throw it wide open because they see the business value in it.
But I think the last thing to think about, and Mike Johnson and I talked about this last time on the show, is that if it's code, the cool thing about it is you can also do security as code.
We can do quality, risk, compliance, all that.
You can use AI against AI.
So why not have a trained AI security bot that's going to check all the AI work and use it against itself, right?
There's a lot of potential value here.
Excellent.
Great job on today's show.
This is always fun.
Andy, I appreciate the banter.
And let's do more of these fun rounds of What's Worse.