
AI Safety Newsletter

AISN #34: New Military AI Systems

01 May 2024

Description

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

AI Labs Fail to Uphold Safety Commitments to UK AI Safety Institute

In November, leading AI labs committed to sharing their models before deployment to be tested by the UK AI Safety Institute. But reporting from Politico shows that these commitments have fallen through. OpenAI, Anthropic, and Meta have all failed to share their models with the UK AISI before deployment. Only Google DeepMind, headquartered in London, has given pre-deployment access to UK AISI.

Anthropic released the most powerful publicly available language model, Claude 3, without any window for pre-release testing by the UK AISI. When asked for comment, Anthropic co-founder Jack Clark said, “Pre-deployment testing is a nice idea but very difficult to implement.” When asked about their concerns with pre-deployment testing [...]

---

Outline:
(00:03) AI Labs Fail to Uphold Safety Commitments to UK AI Safety Institute
(02:17) New Bipartisan AI Policy Proposals in the US Senate
(06:35) Military AI in Israel and the US
(11:44) New Online Course on AI Safety from CAIS
(12:38) Links

---

First published: May 1st, 2024
Source: https://newsletter.safe.ai/p/ai-safety-newsletter-34-new-military

---

Want more? Check out our ML Safety Newsletter for technical safety research.

Narrated by TYPE III AUDIO.
