The EU AI Office has announced an accelerated review of frontier model provisions contained in the EU AI Act and has invited Anthropic to brief the Independent Scientific Panel.
The UK AI Security Institute, operational since 2024, has offered to independently verify Anthropic's safety concerns.
A joint statement was issued by five nations – the UK, France, Germany, Canada, and South Korea – calling for emergency negotiations on frontier AI safety to establish a binding international framework for frontier AI development building on the Bletchley Declaration signed in 2023.
The statement begins: "At Bletchley, 28 nations agreed that frontier AI poses profound risks. That was a statement of concern. Today, one of the world's leading AI companies has put hundreds of billions of dollars behind that concern. It is time for the international community to match their courage with action."
And perhaps of greatest significance, China's foreign ministry has issued a carefully worded statement expressing deep concern about the risks identified by Anthropic and calling for strengthened international cooperation on safe AI development under the framework of the United Nations.
Skeptics might say that China would express this sentiment whether or not it intended to slow its own AI development, but it is consistent with China's posture at the UN debates last September.
At the UN Security Council debate, the US was the sole dissenter against international coordination around AI, with OSTP Director Michael Kratsios explicitly rejecting centralized control and global governance of AI.
China's sincerity is untested, but if the US reconsiders, it would appear that China is willing to come to the negotiating table.
Back home, one can assume the competition has been celebrating.
OpenAI CEO Sam Altman posted on X: "I commend Dario and Anthropic for acting in line with their conscience and best belief about what is best for humanity. We are committed to the same here at OpenAI. Fortunately, I have confidence in our people and approaches for creating AI beneficial for all humanity. If any Anthropic staff remain similarly hopeful, our doors are open, even for those who once left us."
A spokesperson for Google DeepMind said that while the company had not yet encountered anything that gave it Anthropic's level of concern, it took the matter seriously and was in talks with Anthropic researchers to understand the risks informing the decision.
Elon Musk, head of xAI, simply posted: "LOL, you can trust Grok."