According to Marcus Webb, head of AI research at Morgan Stanley, the market reaction isn't simply to the lost revenue and business from one major player; it's to the uncertainty this introduces.
"Why did they really do this? Will other actors halt over similar concerns? Will the regulatory environment change? We don't know, and that spooks investors."
For Anthropic itself, the damage must be inferred.
Secondary trading froze, with analysts predicting a 50-70% haircut if trading resumes, which would put the losses at $150-250 billion.
"We don't really know," said Webb. "No one wants to be the first to bid."
The IPO is on hold indefinitely.
The chips are still falling on this one as the world debates why.
In 2023, hundreds of AI leaders, including Dario Amodei (Anthropic), Sam Altman (OpenAI), and Demis Hassabis (Google DeepMind), signed a one-sentence statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
AI is often compared to nuclear energy: powerful but potentially dangerous.
Concerns typically split into misuse of a powerful technology by ill-intentioned actors, for example a dictatorial regime, and loss of control, where the AI systems themselves go rogue.
Many AI leaders are on record acknowledging the danger of AI.
"Could be lights out for all of us," Altman has said of the worst-case scenario.
Anthropic, OpenAI, and Google DeepMind all maintain safety teams whose purpose is to keep AI systems safe.
Until now, it was possible to dismiss these efforts as "safety washing," akin to the greenwashing of companies like ExxonMobil: measures designed to placate employees, regulators, and the public.
After all, the safety efforts to date have not prevented the relentless march of AI progress.
"That's a harder story to tell when it costs you $200 billion, if not everything," says Sarah Chen of Bernstein Research.