Carmine Paolino
And you get all kinds of other little nuggets of pleasure when you use RubyLLM, because now it's one interface and you can use it with whatever APIs are out there.
So whether you're using Anthropic or OpenAI or OpenRouter or Gemini or Vertex AI, or any of the 11 supported providers that we have at the moment, you don't have to know anything about the underlying details of the actual chat API.
And we do all of the translation ourselves.
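To make that idea concrete, here is a minimal, self-contained sketch of a one-interface, many-providers design. This is not RubyLLM's actual internals; every name here is invented for illustration:

```ruby
# Sketch: a single chat entry point dispatching to per-provider
# adapters through a registry. Callers never touch provider details.

PROVIDERS = {}

def register_provider(name, adapter)
  PROVIDERS[name] = adapter
end

# Two fake adapters standing in for real provider integrations.
register_provider(:openai,    ->(messages) { { model: "gpt-style",    input: messages } })
register_provider(:anthropic, ->(messages) { { model: "claude-style", input: messages } })

# The single public interface: same call shape, any provider.
def chat(provider, messages)
  PROVIDERS.fetch(provider).call(messages)
end

request = chat(:openai, [{ role: "user", content: "Hello" }])
```

The point of the sketch is only the shape: one `chat` call, and the provider choice is a lookup rather than a different code path.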
You know, it's basically made of two parts, right?
There is the RubyLLM Ruby interface, right?
So the messages are Ruby objects, the attachments are Ruby objects, all of that is clear, plain Ruby.
And then we have the providers and the providers have an adapter interface.
So you can develop your own provider really easily.
And they are basically made up of two types of functions: the parse functions and the render functions.
The parse functions parse whatever the provider outputs, and the render functions render whatever we have in the RubyLLM objects into the provider's format.
We do that translation every single time we communicate with a provider.
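As a rough illustration of that parse/render split (a sketch with invented names, not the gem's real adapter interface): render turns plain message objects into a provider's wire format, and parse turns the provider's response back into a plain object.

```ruby
# Sketch of an adapter's two halves: render (our objects -> provider
# payload) and parse (provider payload -> our objects). Names invented.

Message = Struct.new(:role, :content)

module FakeProviderAdapter
  # Render plain Ruby message objects into this provider's request format.
  def self.render(messages)
    { "messages" => messages.map { |m| { "role" => m.role, "text" => m.content } } }
  end

  # Parse this provider's response format back into a plain Ruby object.
  def self.parse(response)
    Message.new("assistant", response.fetch("output_text"))
  end
end

history = [Message.new("user", "Hello")]
payload = FakeProviderAdapter.render(history)
reply   = FakeProviderAdapter.parse({ "output_text" => "Hi there!" })
```

Each provider's adapter owns both directions of the translation, so the rest of the library only ever sees the plain objects.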
And that allows us to have this little magic thing that I like to mention all the time, which is that you can even change the provider and the model during a conversation.
Because all of the conversation is actually saved in plain old Ruby objects in RubyLLM.
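Because the history lives in provider-agnostic objects, switching providers mid-conversation is just re-rendering the same history through a different adapter. A toy sketch, with invented names and formats:

```ruby
# Sketch: one provider-agnostic history, rendered for two different
# (fake) provider wire formats. All names here are hypothetical.

ChatMessage = Struct.new(:role, :content)

OPENAI_STYLE    = ->(msgs) { msgs.map { |m| { role: m.role, content: m.content } } }
ANTHROPIC_STYLE = ->(msgs) { msgs.map { |m| { speaker: m.role, text: m.content } } }

history = [
  ChatMessage.new("user", "Hello"),
  ChatMessage.new("assistant", "Hi!")
]

# Same conversation, either provider: nothing about the history
# itself has to change when you switch.
openai_payload    = OPENAI_STYLE.call(history)
anthropic_payload = ANTHROPIC_STYLE.call(history)
```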
That's how most APIs actually work right now.
There's only one API that doesn't work like that right now, which is the Responses API by OpenAI.