When ‘safe until proven otherwise’ becomes dangerous ...

Over a century ago, in 1906, the first documented death from asbestos exposure was recorded in testimony. Yet it would take until the 1920s for widespread medical evidence to emerge, and until 2006 - 100 years later - for meaningful regulation to gain momentum with the Rotterdam Convention. The culprit? A confidence gap: industry and institutions trusted the material's benefits whilst deprioritising the evidence of its harms.

New enterprise AI research has identified a strikingly similar phenomenon: 78% of organisations claim full trust in AI systems, despite only 40% having implemented governance frameworks or ethical safeguards. Yet data shows that organisations prioritising trustworthy AI see significant ROI improvements.

Asbestos created a latency trap: disease took 20-60 years to manifest, making the causal link between exposure and harm almost impossible to see in real time. AI presents a different latency trap, one where harms (bias, hallucination, systemic risk) accumulate invisibly across populations and organisational timescales, often undetectable within quarterly performance reviews.

Is waiting for proof of harm before building governance a luxury we can no longer afford?

Profiled research:

Frontier AI use case developments:
- Next-Generation Models Suggest Manufacturing Strategies for an AI Agent Society - https://amiko.consulting/en/the-ai-revolution-in-the-second-week-of-november-2025-paradigm-shifts-and-new-opportunities-looming-for-manufacturing/
- AI-Enabled Digital Twins for Public Health and Social Policy (Indonesia – Skyral) - https://edtechhub.org/2025/11/19/ai-observatory-waypoints-and-signals-issue-24/
- Semantic Digital Twins for Industrial Energy Optimisation - https://www.forbes.com/sites/feliciajackson/2025/11/04/the-rise-of-industrial-ai-from-words-to-watts/

Applied AI developments:
- Microsoft Frontier Firms and Agentic AI in Core Functions - https://www.microsoft.com/en-us/microsoft-cloud/blog/2025/07/24/ai-powered-success-with-1000-stories-of-customer-transformation-and-innovation/

Trusted AI developments:
- Guidance for Risk Management of Artificial Intelligence Systems - https://www.edps.europa.eu/system/files/2025-11/2025-11-11_ai_risks_management_guidance_en.pdf

Feature AI development:
- IDC Trust-Action Gap Study - https://www.sas.com/content/dam/sasdam/documents/20250124/data-and-ai-impact-report-the-trust-imperative.pdf

#AI #EnterpriseAI #AIValue #FrontierAI #AppliedAI #TrustedAI #AIGovernance #AISafety #ResponsibleAI #AIStressTest #Learning #History #Technology #Innovation