Trust in Artificial Intelligence

In this episode of Agents of Tech, Stephen Horn and Autria Godfrey explore the rapidly evolving world of artificial intelligence and ask the pressing questions: Can we trust AI? Is it safe? AI is becoming deeply embedded in every aspect of our lives, from healthcare to transportation, but how do we ensure it aligns with ethical principles and remains trustworthy?

Featuring insights from:
Dr. Shyam Sundar, Director of the Center for Socially Responsible AI at Penn State, who discusses the role of ethics and trust in AI systems.
Dr. Duncan Eddy, Executive Director of the Stanford Center for AI Safety, who shares lessons from aerospace safety and how they apply to AI.

Join us as we examine the balance between technological advancement and safety, explore the role of regulation, and dive into the psychology of trust in AI. With perspectives on global AI trends, cultural differences in trust, and what the future holds, this is a must-watch for anyone curious about AI's impact on our society.

#ArtificialIntelligence #AISafety #AITrust #EthicalAI #FutureOfAI #MachineLearning #AIFuture #AIInnovation #TechEthics #AgentsOfTech

00:00 - Welcome to Agents of Tech
Stephen Horn and Autria Godfrey introduce the episode, broadcasting from London and Washington, D.C., and pose today's critical question: Can we trust AI?

02:15 - AI in Our Lives: Benefits and Risks
A discussion of how AI is rapidly transforming industries like healthcare, finance, education, and transportation, and how this integration raises concerns about ethics, bias, and safety.

05:30 - Ethical Implications of AI
Exploring the challenges of making AI systems socially accountable and the ethical dilemmas arising from unchecked AI development.

10:00 - Conversation with Dr. Shyam Sundar
Dr. Sundar, Director of the Center for Socially Responsible AI at Penn State, explains how AI's conversational nature affects trust and how personalization can lead to both engagement and misplaced trust.

15:45 - Cultural Differences in AI Trust
A look at how different cultures approach and trust AI systems, highlighting the global nature of AI challenges.

20:00 - Dr. Duncan Eddy on AI Safety Frameworks
Dr. Eddy, Executive Director of the Stanford Center for AI Safety, draws parallels between aerospace safety systems and AI, offering insights into incremental safety improvements and regulation.

25:30 - Can Regulation Keep Up with AI?
A discussion of global efforts like the EU AI Act and the challenges of regulating both AI development and deployment, especially in high-risk applications.

30:15 - How to Verify AI Outputs
Examining methods like adaptive stress testing and formal verification to improve AI reliability and avoid catastrophic errors in fields such as medicine and finance.

35:00 - The Future of AI Safety and Trust
Closing thoughts on how AI safety research is racing to keep pace with innovation, the importance of fostering a culture of safety, and ensuring trustworthiness as AI becomes ubiquitous.

38:00 - What's Next on Agents of Tech?
A sneak peek at the next episode, where the focus will shift to deepfakes and cybersecurity with experts from NYU and the University of Buffalo.