
AI Safety Newsletter

AISN #26: National Institutions for AI Safety

15 Nov 2023

Description

Also, Results From the UK Summit, and New Releases From OpenAI and xAI.

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

This week's key stories include:

The UK, US, and Singapore have announced national AI safety institutions.
The UK AI Safety Summit concluded with a consensus statement, the creation of an expert panel to study AI risks, and a commitment to meet again in six months.
xAI, OpenAI, and a new Chinese startup released new models this week.

UK, US, and Singapore Establish National AI Safety Institutions

Before regulating a new technology, governments often need time to gather information and consider their policy options. But during that time, the technology may diffuse through society, making it more difficult for governments to intervene. This process, termed the Collingridge Dilemma, is a fundamental challenge in technology policy. But recently [...]

---

Outline:
(00:36) UK, US, and Singapore Establish National AI Safety Institutions
(03:53) UK Summit Ends with Consensus Statement and Future Commitments
(05:39) New Models From xAI, OpenAI, and a New Chinese Startup
(09:28) Links

---

First published: November 15th, 2023

Source: https://newsletter.safe.ai/p/national-institutions-for-ai-safety

---

Want more? Check out our ML Safety Newsletter for technical safety research.

Narrated by TYPE III AUDIO.
