
Exploring Information Security - Exploring Information Security

Exploring the Rogue AI Agent Threat with Sam Chehab

23 Sep 2025

Description

Summary

In a unique live recording, Timothy De Block is joined by Sam Chehab from Postman to tackle the intersection of AI and API security. The conversation goes beyond the hype of AI-created malware to focus on a subtler yet pervasive threat: "rogue AI agents." The speakers define these as sanctioned AI tools that, when misconfigured or given improper permissions, can cause significant havoc by misbehaving and exposing sensitive data. The episode emphasizes that this risk is not new but an exacerbation of classic hygiene problems.

Key Takeaways

- Defining "rogue AI agents": Sam Chehab defines a "rogue AI agent" as a sanctioned AI tool that misbehaves due to misconfiguration, often exposing data it shouldn't have access to. He likens it to an enterprise search tool in the early 2000s that crawled an intranet and surfaced things it wasn't supposed to.
- The AI-API connection: An AI agent comprises six components, and the "tool" component is where it interacts with APIs. The speakers note that an agent's APIs are its "arms and legs" and are often where it gets into trouble.
- The importance of security hygiene: The core of the solution is to "go back to basics" with good hygiene. This includes building APIs with an OpenAPI spec, enforcing schemas, and ensuring single-purpose logins for integrations to improve traceability.
- The rise of the "citizen developer": The conversation highlights a new security vector: non-developers, or "citizen developers," in departments like HR and finance building their own agents using enterprise tools. These individuals often lack security fundamentals, and their workflows are a "ripe area for risk."
- AI's role in development: Sam and Timothy discuss how AI can augment a developer's capabilities, but a human is still needed in the process. The Veracode report notes that AI-generated code is secure only about 45% of the time, roughly on par with human-written code. The best approach is to use AI to fix specific lines of code in pre-commit, rather than having it write entire applications.

Resources & Links Mentioned

- Postman State of the API Report: This report, which discusses API trends and security, will be released on October 8th. The speakers tease a follow-up episode to dive into its findings.
- Veracode: The 2025 GenAI Code Security Report was mentioned in the discussion of AI-generated code.
- GitGuardian: The State of Secrets Sprawl report was referenced as a key resource.
- Cloudflare: Mentioned as a service for API Shield and monitoring API traffic.
- News sites: Sam Chehab recommends Security Affairs, The Hacker News, Cybernews, and Information Security Magazine for staying up to date.
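The "enforce schemas" hygiene point above can be sketched in a few lines. This is a minimal, framework-agnostic illustration (the `validate_payload` helper and `EMPLOYEE_SCHEMA` are invented for this sketch, not from Postman or the episode): an API that rejects both missing and unexpected fields gives a misconfigured agent far less room to leak or smuggle data.

```python
# Illustrative sketch of schema enforcement at an API boundary.
# The schema format (field name -> expected type) is an assumption for
# brevity; real services would typically validate against an OpenAPI spec.

def validate_payload(payload: dict, schema: dict) -> list[str]:
    """Return a list of violations; an empty list means the payload conforms."""
    errors = []
    for field, expected_type in schema.items():
        if field not in payload:
            errors.append(f"missing required field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type for {field}: expected {expected_type.__name__}")
    # Reject fields the schema doesn't declare, so an agent can't pass
    # extra data through an endpoint that was never meant to carry it.
    for field in payload:
        if field not in schema:
            errors.append(f"unexpected field: {field}")
    return errors

EMPLOYEE_SCHEMA = {"id": int, "name": str}

print(validate_payload({"id": 7, "name": "Ada"}, EMPLOYEE_SCHEMA))   # []
print(validate_payload({"id": "7", "role": "admin"}, EMPLOYEE_SCHEMA))  # three violations
```

Rejecting undeclared fields (a "closed" schema) is the design choice that matters most here: a permissive schema only checks what you expected, while a closed one also catches what you didn't.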
