Menu
Sign In Search Podcasts Charts People & Topics Add Podcast API Pricing
Podcast Image

Adversary Universe Podcast

Prompted to Fail: The Security Risks Lurking in DeepSeek-Generated Code

20 Nov 2025

Description

CrowdStrike research into AI coding assistants reveals a new, subtle vulnerability surface: When DeepSeek-R1 receives prompts the Chinese Communist Party (CCP) likely considers politically sensitive, the likelihood of it producing code with severe security flaws increases by up to 50%. Stefan Stein, manager of the CrowdStrike Counter Adversary Operations Data Science team, joined Adam and Cristian for a live recording at Fal.Con 2025 to discuss how this project got started, the methodology behind the team’s research, and the significance of their findings. The research began with a simple question: What are the security risks of using DeepSeek-R1 as a coding assistant? AI coding assistants are commonly used and often have access to sensitive information. Any systemic issue can have a major and far-reaching impact.  It concluded with the discovery that the presence of certain trigger words — such as mentions of Falun Gong, Uyghurs, or Tibet — in DeepSeek-R1 prompts can have severe effects on the quality and security of the code it produces. Unlike most large language model (LLM) security research focused on jailbreaks or prompt injections, this work exposes subtle biases that can lead to real-world vulnerabilities in production systems. Tune in for a fascinating deep dive into how Stefan and his team explored the biases in DeepSeek-R1, the implications of this research, and what this means for organizations adopting AI. 

Audio
Featured in this Episode

No persons identified in this episode.

Transcription

This episode hasn't been transcribed yet

Help us prioritize this episode for transcription by upvoting it.

0 upvotes
🗳️ Sign in to Upvote

Popular episodes get transcribed faster

Comments

There are no comments yet.

Please log in to write the first comment.