Menu
Sign In Search Podcasts Charts Entities Add Podcast API Pricing
Podcast Image

Code Conversations

Prompt Injection: When Hackers Befriend Your AI

01 Apr 2025

Description

This is a technical presentation where we'll look at attacks on implementations of Large Language Models (LLMs) used for chatbots, sentiment analysis, and similar applications. Serious prompt injection vulnerabilities can be used by adversaries to completely weaponize your AI against your users.We will look at how so-called "prompt injection" attacks occur, why they work, different variations like direct and indirect injections, and then see if we can find good solutions on how to mitigate those risks. We'll also learn how LLMs are "jailbroken" to ignore their alignment and produce dangerous content.LLMs are not brand new, but we know that their use will increase drastically in the next few years, and therefore it is important to take security seriously by considering the risks involved before using AI for sensitive operations.by: Vetle HjelleRef: https://www.youtube.com/watch?v=S5MKPtRpVpY

Audio
Featured in this Episode

No persons identified in this episode.

Transcription

No transcription available yet

Help us prioritize this episode for transcription by upvoting it.

0 upvotes
🗳️ Sign in to Upvote

Popular episodes get transcribed faster

Comments

There are no comments yet.

Please log in to write the first comment.