Stefano Ermon
but it's not too different.
Like essentially we're able to serve the models in an OpenAI compatible way.
We support tool calls.
And so people are already using our models in, you know, a variety of agentic frameworks, including some of the open source ones.
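Serving a model "in an OpenAI compatible way" with tool-call support means any client that speaks the OpenAI chat-completions wire format can talk to it by swapping the base URL. Here's a minimal sketch of what such a request body might look like; the endpoint URL, the model name, and the `read_file` tool are all illustrative assumptions, not documented values from the conversation.

```python
import json

# Hypothetical OpenAI-compatible endpoint and model identifier (assumptions).
BASE_URL = "https://api.example.com/v1"
MODEL = "mercury"

def build_chat_request(user_message: str) -> dict:
    """Build the JSON body for a /chat/completions call that exposes one tool.

    An agentic framework would POST this to f"{BASE_URL}/chat/completions"
    and then execute whichever tool call the model returns.
    """
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    # Hypothetical tool an agent framework might register.
                    "name": "read_file",
                    "description": "Read a file from the workspace.",
                    "parameters": {
                        "type": "object",
                        "properties": {"path": {"type": "string"}},
                        "required": ["path"],
                    },
                },
            }
        ],
    }

payload = build_chat_request("Open README.md and summarize it.")
print(json.dumps(payload, indent=2))
```

Because the request shape matches the OpenAI schema, existing agent frameworks and wrappers can point at a different provider without code changes beyond the base URL and model name.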
I think people have figured out ways to also use it in Claude Code.
I think there are some wrappers that allow you to use other models.
That works.
Have you done that actually?
Have you played with OpenClaw?
I haven't, but some of my team members have and they've used Mercury, so I know for sure it works.
That's cool.
Yeah, completely agree.
I think at that point, speed of interaction with the environment becomes the key bottleneck.
You really want to be able to use the tools that you have access to and collect the information needed to solve the task.
And the faster you interact, the faster you can take actions, collect feedback, reason, decide what to do next.
The better the experience, the better the model is going to work.
And so speed, I think we're seeing it from other labs too, is something people are pushing on more and more.
Okay.