Eno Reyes
Now, there's also the option of bringing your own model and bringing your own keys.
So you can choose any model.
And so people can use really any inference endpoint that supports the three major inference endpoint standards, right?
Yeah.
You can even run a model on your own device, right?
vLLM or Ollama.
So you can have a fully local coding agent experience with Droid if you have a powerful enough computer.
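As a rough illustration of why this works: vLLM and Ollama can both serve an OpenAI-compatible chat-completions endpoint, so pointing an agent at a local model is mostly a matter of swapping the base URL and model name. The URLs and model names below are placeholders, not Droid's actual configuration.

```python
# Sketch: a hosted endpoint and a fully local one (vLLM or Ollama)
# share the same OpenAI-style request shape; only the base URL and
# model name change. All names here are illustrative.

def chat_request(base_url: str, model: str, prompt: str) -> dict:
    """Build the URL and JSON body for an OpenAI-style chat completion."""
    return {
        "url": f"{base_url}/v1/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# Same request shape, different endpoints:
hosted = chat_request("https://api.example.com", "big-model", "Fix this bug")
local = chat_request("http://localhost:11434", "local-coder-model", "Fix this bug")
```

Because the request shape is identical, an agent that speaks this standard can target a local server without any other changes.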
Yeah.
Now...
You also can customize beyond just, you know, I'm using this one model in this one session.
And you can do things like say, I would like to plan, or basically build.
And we have something called spec mode, which is basically specification mode.
And you can plan or specify with one model, but then execute with another.
And so what that lets you do is it lets you say, I want to use an expensive or powerful model to do X task, and then a cheap model to actually execute, because most of the effort is in the planning.
And I think people take advantage of this most when they might have one model for code review.
They might have one model for security review.
Their daily driver is a third model.
And then when they really need to pull out the juice, they'll use the most expensive model.
So that flexibility is super important because most model providers basically give you one, maybe two options at most.
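The setup described above, one model per task role with a daily driver as the default, can be sketched as a simple routing table. The role names and model names are placeholders, not Factory's real configuration keys.

```python
# Sketch of per-role model routing: plan/spec with a strong model,
# execute with a cheap one, separate models for code review and
# security review, and a daily-driver default for everything else.
# All model names are illustrative placeholders.

ROLE_MODELS = {
    "plan": "expensive-reasoning-model",
    "execute": "cheap-fast-model",
    "code_review": "review-model",
    "security_review": "security-scan-model",
}

DEFAULT_MODEL = "daily-driver-model"

def model_for(role: str) -> str:
    """Pick the configured model for a task role, falling back to the daily driver."""
    return ROLE_MODELS.get(role, DEFAULT_MODEL)
```

So a planning step routes to the powerful model while the bulk of the execution runs on the cheap one, which is where the cost savings come from.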
So I'll speak on the latter first: will we build our own models?