Jennifer Huddleston
You can take it off, but it's really painful and it leaves a mark.
So I think that there are two things to think about.
There's what this means for Anthropic specifically, and what it means for the broader conversation around AI innovation and the innovators in the American AI sector.
For Anthropic specifically, this could certainly have consequences, not only when it comes to their contract with the Department of War; we also saw President Trump post on Truth Social calling for all agencies to stop using this product.
And what signal does this send to other organizations, whether contractors or other companies that are using Anthropic, both here in the U.S. and around the world?
More generally for the AI community, there's also likely to be concern about what this means the government is willing to do in terms of stepping in if it doesn't like the decisions of a particular company.
What does this also mean for America's AI leaders when other countries around the world want them to make changes so that those governments can use their products in ways that go beyond what the companies deem ethical?
I think when we talk about American companies standing up to pressure from governments, we're usually asking whether American companies will hold true to American values when faced with pressure from foreign adversaries, when the Chinese or Russian government asks them to cross bright lines that violate American values, civil rights, or civil liberties.
When Anthropic was faced with similar pressure from the U.S. government, asking it to violate its ethics and remove what it felt were necessary ethical safeguards against mass surveillance or the use of its AI in autonomous weapons, it also said no.
And that shows a great deal of courage. Standing up to a foreign adversary and accepting the business loss there is one thing, but it takes even more to stand up to your own government, knowing that there could, in this case, be very real consequences in the market.
One of the things that makes this situation unique is that it goes beyond a mere dispute over a procurement contract because of those additional threats: the threat to invoke the Defense Production Act, or to label the company a supply chain risk.
But I think it does highlight some of the underlying concerns or debates over how should we think about protecting civil liberties and civil rights when it comes to AI use by the government.
AI can be used in wonderfully productive ways in government, just as it can in the private sector, to help constituents find the services they need.
But there are also real and legitimate concerns about the way it could be abused.
Again, in this particular dispute, two of the issues were mass surveillance and autonomous weapons.
One of the places where we need clarity from policymakers is around what the government is going to do to restrain its own use of AI.
This will help both citizens and innovators feel more comfortable with the technology, both in the government context and beyond.