Jyunmi Hatcher
So Mercor, a $10 billion AI recruiting startup that contracts domain experts to train AI models for companies including OpenAI and Anthropic, confirmed on Tuesday that it had been breached through a supply chain attack on LiteLLM, an open source library used by AI developers worldwide.
The extortion group Lapsus claimed it obtained four terabytes of Mercor data, including source code, Slack communications, and videos of conversations between Mercor's AI systems and contractors on its platform.
So, you know, I'll note what day it is today, being April 1st, but this isn't necessarily new, right?
We've been seeing different security questions come up with AI use.
What this highlights, though, is attacks specifically on the back end of the entire LLM ecosystem, getting access to or attacking the data side of things, which is less than what we normally hear about.
Usually the security issue comes up when building systems: using an LLM or an AI code assistant to vibe code a new program, and that has its own inherent security issues.
Or, well, I guess with the Anthropic story, that's a single point of failure because that came through Axios, right, Andy?
So another significant point about the story is that the attack vector is interesting here, right?
So it's well beyond just one company, right?
So LiteLLM is an open source API gateway that lets developers connect their applications to over a hundred different large language model providers, including OpenAI, Anthropic, Google, and others, through a single interface.
It's downloaded 97 times per month on PyPI, the main Python package repository.
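To make that "single interface" idea concrete, here's a minimal, hypothetical sketch of the core thing a gateway like LiteLLM does: take one uniform call shape and route it to different provider backends based on a model-name prefix. The function and provider table below are illustrative, not LiteLLM's actual implementation.

```python
# Illustrative sketch of an LLM API gateway's core idea: one call
# signature, many providers, selected by a "provider/model" prefix.
# (Hypothetical code, not LiteLLM's real internals.)

PROVIDERS = {
    "openai": "https://api.openai.com/v1/chat/completions",
    "anthropic": "https://api.anthropic.com/v1/messages",
    "google": "https://generativelanguage.googleapis.com/v1beta",
}

def route(model: str) -> str:
    """Map a 'provider/model' string to that provider's endpoint."""
    provider, _, _name = model.partition("/")
    try:
        return PROVIDERS[provider]
    except KeyError:
        raise ValueError(f"unknown provider: {provider!r}")

# The same call shape then works against any backend, e.g.:
#   completion(model="openai/gpt-4o", messages=[...])
#   completion(model="anthropic/claude-sonnet", messages=[...])
```

The supply chain angle follows directly from this design: because every request to every provider flows through the one gateway library, compromising that library compromises traffic to all of them at once.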
And the hacking group called Team PCP compromised the Trivy vulnerability scanner through a misconfigured GitHub Actions workflow. So what this means is they're also finding exploits within GitHub. And if you're not familiar, GitHub is the number one platform for repositories of software projects of every type. So identifying that there are exploits there as well is another concerning issue about the larger supply chain of software, LLMs, and AI development.
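For context on what a "misconfigured GitHub Actions workflow" can look like, here is a hypothetical sketch of one well-known risky pattern (illustrative only, not Trivy's actual workflow): a privileged trigger combined with checking out untrusted pull-request code, which lets attacker-controlled scripts run with access to the repository's secrets.

```yaml
# Hypothetical example of a classic GitHub Actions misconfiguration
# pattern (not the actual compromised workflow): pull_request_target
# runs with the repository's secrets, even for PRs from forks.
name: ci
on: pull_request_target
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Checks out the untrusted PR code into a privileged context
          ref: ${{ github.event.pull_request.head.sha }}
      - run: make test   # attacker-controlled scripts can now read secrets
```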
So a bit of a concerning story there.