
AI Safety Newsletter

AISN #4: AI and cybersecurity, persuasive AIs, weaponization, and Hinton talks AI risks.

02 May 2023

Description

Cybersecurity Challenges in AI Safety

Meta accidentally leaks a language model to the public. Meta's newest language model, LLaMA, was publicly leaked online against the intentions of its developers. Gradual rollout is a popular strategy for new AI models: developers open access to academic researchers and government officials before sharing models with anonymous internet users. Meta intended to follow this strategy, but within a week of sharing the model with an approved list of researchers, an unknown person who had been given access publicly posted it online.

How can AI developers selectively share their models? One inspiration could be the film industry, which places watermarks and tracking technology on “screener” copies of movies sent [...]

Outline:
(00:11) Cybersecurity Challenges in AI Safety
(02:48) Artificial Influence: An Analysis Of AI-Driven Persuasion
(05:37) Building Weapons with AI
(07:47) Assorted Links

First published: May 2nd, 2023
Source: https://newsletter.safe.ai/p/ai-safety-newsletter-4

Want more? Check out our ML Safety Newsletter for technical safety research.

Narrated by TYPE III AUDIO.
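The screener analogy suggests per-recipient fingerprinting of model copies. A minimal sketch of one such scheme is below; it is purely illustrative and not Meta's or any lab's actual method. The idea: derive a recipient-keyed ±1 pattern, add it to the weights as a tiny perturbation, and later check a leaked copy's residual against each recipient's pattern. The function names and the epsilon parameter are hypothetical.

```python
import hashlib
import numpy as np

def _recipient_pattern(recipient_id: str, shape) -> np.ndarray:
    # Deterministic +/-1 pattern keyed to the recipient's ID (illustrative).
    seed = int.from_bytes(hashlib.sha256(recipient_id.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=shape)

def fingerprint_weights(weights: np.ndarray, recipient_id: str,
                        epsilon: float = 1e-6) -> np.ndarray:
    # Embed a tiny recipient-specific perturbation into the weights.
    return weights + epsilon * _recipient_pattern(recipient_id, weights.shape)

def detect_fingerprint(original: np.ndarray, leaked: np.ndarray,
                       recipient_id: str, epsilon: float = 1e-6) -> float:
    # Correlate the leak's residual with a recipient's pattern.
    # A score near 1.0 implicates that recipient's copy; near 0.0 clears them.
    residual = (leaked - original) / epsilon
    pattern = _recipient_pattern(recipient_id, original.shape)
    return float(np.mean(residual * pattern))

if __name__ == "__main__":
    base = np.zeros(1000)  # stand-in for real model weights
    alices_copy = fingerprint_weights(base, "alice")
    print(detect_fingerprint(base, alices_copy, "alice"))  # near 1.0
    print(detect_fingerprint(base, alices_copy, "bob"))    # near 0.0
```

A real scheme would need to survive fine-tuning, quantization, and deliberate scrubbing, which this toy correlation check does not address; it only shows why recipient-keyed copies make a leak traceable in principle.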


