Menu
Sign In Search Podcasts Charts People & Topics Add Podcast API Pricing
Podcast Image

AI Fire Daily

🎙️ EP 116: Just 250 Docs Can Hack a 13B AI Model?! & Google Shoe Try-Ons

10 Oct 2025

Description

What if I told you that a few hundred poisoned documents could break models as big as GPT-4 or Claude? 😵 Anthropic just proved it. Their new paper shows that just 250 samples can secretly backdoor any LLM, no matter the size. In today’s episode, we unpack this wild discovery, why it changes AI security forever, and what it means for the future of open-web training.We’ll talk about:How Anthropic’s team used 250 poisoned docs to make 13B-parameter models output gibberish on commandWhy bigger models don’t mean safer models and why scale can’t protect against poisonThe rise of TOUCAN, the open dataset from MIT-IBM that’s changing how AI agents learn real-world toolsThe new AI race: from Jony Ive’s “anti-iPhone” with OpenAI to Amazon’s Quick Suite for business automationKeywords: Anthropic, LLM security, data poisoning, backdoor attacks, TOUCAN dataset, OpenAI, Claude, Google Gemini, AI agentsLinks:Newsletter: Sign up for our FREE daily newsletter.Our Community: Get 3-level AI tutorials across industries.Join AI Fire Academy: 500+ advanced AI workflows ($14,500+ Value)Our Socials:Facebook Group: Join 261K+ AI buildersX (Twitter): Follow us for daily AI dropsYouTube: Watch AI walkthroughs & tutorials

Audio
Featured in this Episode

No persons identified in this episode.

Transcription

This episode hasn't been transcribed yet

Help us prioritize this episode for transcription by upvoting it.

0 upvotes
🗳️ Sign in to Upvote

Popular episodes get transcribed faster

Comments

There are no comments yet.

Please log in to write the first comment.