
Two Voice Devs

Episode 190 - Google Gemma's Tortoise and Hare Adventure

11 Apr 2024

Description

Embark on a wild race with Gemma as we explore the exciting (and sometimes slow) world of running Google's open-source large language model! We'll test drive different methods, from the leisurely pace of Ollama on a local machine to the speedier Groq platform. Join us as we compare these approaches, analyzing performance, costs, and ease of use for developers working with LLMs. Will the tortoise or the hare win this race?

Learn more:
* Model card: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
* Ollama: https://ollama.com/
* LangChain.js with Ollama: https://js.langchain.com/docs/integrations/llms/ollama
* Groq: https://groq.com/

Timestamps:
0:00:00 - Introduction
0:03:05 - Getting to Know Gemma: Exploring the Model Card
0:05:30 - Vertex AI Endpoint: Fast Deployment, But at What Cost?
0:13:40 - Ollama: The Tortoise of Local LLM Hosting
0:17:40 - LangChain Integration: Adding Functionality to Ollama
0:21:44 - Groq: The Hare of LLM Hardware
0:26:06 - Comparing Approaches: Speed vs. Cost vs. Control
0:27:35 - Future of Open LLMs and Google Cloud Next

#GemmaSprint

This project was supported, in part, by Cloud Credits from Google.

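As one illustration of the "LangChain.js with Ollama" approach linked above, here is a minimal sketch (an editor's example, not the hosts' code). It assumes a local Ollama server on its default port with the gemma model already pulled (ollama pull gemma) and the @langchain/community package installed; import paths and option names can shift between LangChain.js releases, so treat it as a starting point.

// Minimal sketch: querying a locally hosted Gemma model via LangChain.js + Ollama.
// Assumes Ollama is running locally and `ollama pull gemma` has completed.
import { Ollama } from "@langchain/community/llms/ollama";

const llm = new Ollama({
  baseUrl: "http://localhost:11434", // default Ollama server address
  model: "gemma",                    // Gemma model tag pulled into Ollama
});

// Send a single prompt and print the completion.
const answer = await llm.invoke(
  "In one sentence: does the tortoise or the hare win the race?"
);
console.log(answer);

Swapping the local endpoint for a hosted one (for example a Vertex AI endpoint or Groq's API) largely means changing the model class and credentials rather than the application logic, which is the speed-versus-cost-versus-control trade-off the episode compares.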

