Jyunmi Hatcher
I'd love to know what the particular insights are, just so that my own use of Claude Code and Claude in general might be enhanced or streamlined.
Oh, that's how it does it.
Well, I need to reframe how I approach, you know, asking questions or something along those lines.
Okay, a story from my side that might not be getting a lot of coverage.
And I hope there isn't a ton of doom and gloom today.
But apparently...
There's a supply chain security issue. It's a supply chain attack on LiteLLM.
So Mercor, a $10 billion AI recruiting startup that contracts domain experts to train AI models for companies including OpenAI and Anthropic, confirmed on Tuesday that it had been breached through a supply chain attack on LiteLLM, an open source library used by AI developers worldwide.
The extortion group Lapsus claimed it obtained four terabytes of Mercor data, including source code, Slack communications, and videos of conversations between Mercor's AI systems and contractors on its platform.
So, you know, despite what day it is today, April 1st, this isn't necessarily new, right?
We've been seeing different security questions come up with AI use.
What this highlights, though, is attacks specifically on the back end of the entire LLM ecosystem, getting access to or attacking the data side of things, which I think is less what we normally hear about.
There's always some sort of security question when building systems, whether you're using an LLM, using AI to vibe code, or using a code assistant to build a new program. And that has inherent security issues.
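As a side note on the kind of supply chain attack described above: a standard defense is to pin dependencies to known cryptographic hashes so a tampered package fails an integrity check before it ever runs. The sketch below is illustrative only; the artifact bytes and pinned digest are made up, not anything from LiteLLM or Mercor, and the check mirrors what tools like pip do with `--require-hashes`.

```python
import hashlib

def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
    """Return True only if the artifact's SHA-256 matches the pinned digest."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256

# Illustrative artifact; in practice the pinned digest comes from a lock file.
artifact = b"example package contents"
pinned = hashlib.sha256(artifact).hexdigest()

print(verify_artifact(artifact, pinned))              # → True (untampered)
print(verify_artifact(b"tampered contents", pinned))  # → False (rejected)
```

The point of the check is that a compromised upstream release, like the attack vector discussed here, produces a different digest than the one recorded when the dependency was originally vetted.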
Or, well, I guess with the Anthropic story, that's a single point of failure because that came through Axios, right, Andy?
So another significant point about the story is that the attack vector is interesting here, right?