
Data Neighbor Podcast

Ep16: AI is Breaking the Internet - for Better or Worse

04 Mar 2025

Description

AI is changing everything, including how we moderate content online. In this episode of the Data Neighbor Podcast, we sit down with Sugin Lou, a Staff Data Scientist at Cash App and former Nextdoor AI trust & safety expert, to discuss the challenges of AI content moderation, misinformation, and trust & safety in the era of LLMs. If you care about AI trust, AI policy, AI risk, and AI governance, this episode is for you.

More about this episode:

What is content moderation? How does AI impact trust & safety? From Facebook moderation to moderation bots and comment moderation, companies rely on AI-powered moderation tools to detect AI-generated content, deepfakes, misinformation, and harmful speech. But is AI moderation really working, or is it just scaling misinformation at an unprecedented rate?

AI Misinformation & Risk Management: With AI-generated content, fake AI identities, and deepfakes spreading faster than ever, AI-powered disinformation is becoming a serious issue. We explore how AI risk management, AI governance, and AI regulation are trying to catch up before AI trust is lost forever.

Trust & Safety in AI: How do platforms like Facebook, YouTube, and Nextdoor determine what content gets removed? How does the moderation process work? And what are the hidden risks of AI trust & safety failures?

Evaluating AI Models for Trust & Safety: How do companies evaluate LLMs and ensure AI-generated content isn't spreading misinformation? We discuss the latest in AI safety, LLM evaluation, and how companies like OpenAI, Google, and Anthropic are handling AI fraud, AI accountability, and AI disinformation.

Key Topics Covered:
- What is content moderation? AI's role in trust & safety
- AI moderation bots vs. human moderation
- Facebook moderation & the future of AI content filtering
- The hidden risks of AI-generated content & deepfakes
- How AI is breaking the internet, for better or worse
- AI misinformation detection & AI disinformation at scale
- AI fraud, risk assessment, and AI accountability
- How AI safety teams are responding to AI threats

With AI moderation tools, chat moderation, and content filtering AI, tech companies are trying to prevent AI-powered misinformation while balancing AI ethics, AI regulation, and free speech. But can AI content moderation actually keep up?

Connect with us!
Sugin Lou: https://www.linkedin.com/in/sugin-lou/
Hai Guan: https://www.linkedin.com/in/hai-guan-6b58a7a/
Sravya Madipalli: https://www.linkedin.com/in/sravyamadipalli/
Shane Butler: https://www.linkedin.com/in/shaneausleybutler/

#AI #ArtificialIntelligence #MachineLearning #AIContentModeration #FacebookModeration #ModerationBot #AITrust #AIAccountability #AIMisinformation #Deepfake #AIRegulation #TrustAndSafety #GenerativeAI #LLMEvaluation #AITrustAndSafety #AICompanions #AIEthics #AIModerationTools #WhatIsContentModeration


