Nathaniel Whittemore
The reporting is that Google co-founder Sergey Brin will be directly involved with the team.
In a recent memo, Brin told DeepMind staffers, "To win the final sprint, we must urgently bridge the gap in agentic execution and turn our models into primary developers."
Now, interestingly, one nuance is that this project isn't necessarily about releasing more advanced coding models.
Instead, writes The Information, Google is now putting more emphasis on models that write code the company can use internally, a strategy shift from focusing primarily on coding models for external customers.
That requires training the models on Google's code, which is important for performance because Google's private code base differs significantly from the external code it uses to train general purpose coding models.
The article noted a recent quote from Claude Code developer Boris Cherny, who recently said that "pretty much 100%" (his words) of Anthropic's code is now written by AI.
By contrast, Google CFO Anat Ashkenazi said during their February earnings call that coding agents now write around half of Google's code.
Even if, amid the agentic coding onslaught this year, Google has once again fallen behind when it comes to the AI narrative, it seems to me that there's still a fair amount of optimism about their ability to get back into it.
Yuchen Jin writes, "It's surprising to me that Google has the world's largest internal code base yet lags behind Anthropic and OpenAI in coding and agents. I think Sergey and Founder Mode can fix it again this time."
Now, quiet as they might have felt, Google I/O is right around the corner in May, so I expect we'll be hearing a lot more from Gemini and the entire Google team in the pretty near future.
Over at Amazon, the company continues to hedge its bets with a big investment in Anthropic.
So far this year, Amazon's AI strategy has been in something of a transition phase, shifting away from homegrown models to focus on placing big bets on the leading labs.
They shuttered their AGI lab and instead pushed their chips in on OpenAI, committing to a $50 billion investment.
Now, it seems as though they want to own a slice of the entire segment, also committing to a $25 billion investment in Anthropic.
The deal consists of $5 billion of commitment now and an additional $20 billion tied to commercial milestones.
The companies frame the deal as an expansion of their existing partnership, which saw Amazon invest $8 billion over the past 18 months.
Functionally, the deal looks a lot like Anthropic paying for chips with equity.
Amazon will provide 5 gigawatts of compute using current and future generations of their in-house Trainium chips.