Ray Fernando
There are additional options at the bottom here, which is really nice. You can have the response read out loud to you, so if you're someone with dyslexia, or you simply prefer audio, that option is there for you. You can also continue the response; sometimes if there's too much information, the model still needs to keep going.
So you hit continue and it'll just keep going, or you can regenerate the response. Those are some of the basics there. So this is the output of this model, and I'm fairly impressed that a 7 billion parameter model running locally on my machine took that entire transcript and did this kind of analysis. I'd say it's pretty close to the bigger model.
In terms of detail, it's not as detailed as the other one, if we take a look. The previous one came out with this nice, big blog-post-style write-up. But it's pretty good, and it's running locally; I could run this on a plane. So to get started, again, it's just Open WebUI.
There is a getting started guide; it's literally a couple of steps to run. Make sure you have Docker installed. Then Ollama will show you all the different models: if you go to the models page, you'll see what's popular and trending right now, and that will help you get started as well. We also talked about Fireworks AI.
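As a sketch of those couple of steps: the Open WebUI docs describe a single Docker command, and a model comes down with one `ollama pull`. Treat the flags, image tag, and model name below as assumptions to verify against the current docs; this is setup, not something to copy blindly.

```shell
# Run Open WebUI in Docker (pattern from the Open WebUI README; verify flags
# against the current docs). Serves the UI on http://localhost:3000 and lets
# the container reach an Ollama server running on the host.
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

# Pull a ~7B model with Ollama (the model name here is just an example):
ollama pull mistral
```

Once the container is up, the models you've pulled with Ollama show up automatically in the Open WebUI model picker.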
So that's Fireworks. It's a good resource to go take a look at and pull a model from. If you want to add that model to Open WebUI, you do the same thing here: go to your user menu, then the admin panel, then Settings, and from Settings, hit Connections.
What you'll do is hit the little plus to add a connection. You have to put in the base URL and the API key. The base URL for Fireworks is specifically api.fireworks.ai/inference/v1. In the example documents, you'll see /chat/completions and so on appended.
You don't need those, because with the OpenAI-compatible API convention you just put in everything up to /v1. Then you'll generate an API key for that model over in Fireworks AI: go to your name in Fireworks, then go to API Keys, and once you're there, just hit Create API Key and it'll pop up.
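The "everything up to /v1" point can be sketched in the shell: the docs show the full /chat/completions endpoint, and trimming that suffix gives the base URL Open WebUI wants. The model ID in the commented request is a placeholder, and the actual request needs a real key exported as FIREWORKS_API_KEY.

```shell
# The docs show the full endpoint; Open WebUI only wants everything up to /v1.
FULL_ENDPOINT="https://api.fireworks.ai/inference/v1/chat/completions"
BASE_URL="${FULL_ENDPOINT%/chat/completions}"
echo "$BASE_URL"    # prints https://api.fireworks.ai/inference/v1

# With a real key exported as FIREWORKS_API_KEY, a quick sanity check would be:
# curl "$BASE_URL/chat/completions" \
#   -H "Authorization: Bearer $FIREWORKS_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d '{"model": "<model-id>", "messages": [{"role": "user", "content": "Hi"}]}'
```

That base URL plus the key is all the Connections dialog needs; Open WebUI appends the /chat/completions path itself.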
And that's the key that you want to put in there. It's similar with GroqCloud: you just hit Create API Key. Once you go to console.groq.com, there's an API Keys section; hit Create API Key, and a dialog will pop up with the key. And that endpoint will look something like this over here.
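Assuming this is GroqCloud (the OpenAI-compatible base URL below is from their docs, but double-check it in the console), the same pattern applies: base URL up to /v1, key in a Bearer header. The model ID is again a placeholder.

```shell
# Groq's OpenAI-compatible base URL (verify against the console.groq.com docs):
BASE_URL="https://api.groq.com/openai/v1"
echo "$BASE_URL"    # prints https://api.groq.com/openai/v1

# As with Fireworks, the key from the console goes in a Bearer header:
# curl "$BASE_URL/chat/completions" \
#   -H "Authorization: Bearer $GROQ_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d '{"model": "<model-id>", "messages": [{"role": "user", "content": "Hi"}]}'
```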