
Data Driven

Ronen Dar on GPU Orchestration for Building ML Models

05 Feb 2024

Description

In this episode, our own Andy Leonard and Frank La Vigne sit down with Ronen Dar, co-founder and CTO of Run AI, to explore the world of artificial intelligence and GPU orchestration for machine learning models.

Ronen shares insights into the challenges of utilizing GPUs in AI research and how Run AI's platform addresses these issues by optimizing GPU usage and providing tools for easier, faster model training and deployment. The conversation delves into the concept of fractional GPU usage, which allows multiple workloads to run on a single GPU, making expensive GPUs more accessible and cost-effective for organizations.

Links
- Run AI: https://www.run.ai/
- Acquired Podcast episode on Nvidia: https://www.acquired.fm/episodes/nvidia-the-machine-learning-company-2006-2022

Show Notes
- 04:40 GPU technology enabled for cloud AI workloads.
- 07:00 Run AI enables sharing expensive GPU resources for all.
- 11:59 As enterprise AI matures, organizations become more savvy.
- 15:35 Deep learning, GPUs for speed, CPUs as backup.
- 16:54 LLMs running on GPUs, exploding in the market.
- 23:29 NVIDIA created CUDA to simplify GPU use.
- 26:21 NVIDIA's success lies in accessible technology.
- 28:25 Solving "GPU hugging" with quotas and sharing.
- 31:15 Team leads manage GPU quotas for researchers.
- 35:51 Rapid changes in business and innovation.
- 40:34 Passionate problem-solver with diverse tech background.
- 43:38 Thanks for tuning in; subscribe and review.
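The fractional-GPU idea discussed in the episode — packing several small workloads onto one physical card — can be sketched as a toy first-fit allocator. This is purely illustrative (the class and method names are hypothetical); Run AI's actual scheduler also handles memory isolation, quotas, and preemption on Kubernetes.

```python
class FractionalGpuAllocator:
    """Toy first-fit allocator for fractional GPU workloads.

    Illustrative sketch only, not Run AI's API: a real orchestrator
    must also enforce memory limits, quotas, and preemption.
    """

    def __init__(self, num_gpus):
        # remaining free fraction on each physical GPU (1.0 = fully free)
        self.free = [1.0] * num_gpus

    def allocate(self, fraction):
        """Place a workload needing `fraction` of one GPU.

        Returns the GPU index it was packed onto, or None if every
        GPU is too full (the workload would queue).
        """
        for gpu, spare in enumerate(self.free):
            if spare + 1e-9 >= fraction:  # tolerance for float rounding
                self.free[gpu] -= fraction
                return gpu
        return None


alloc = FractionalGpuAllocator(num_gpus=2)
print(alloc.allocate(0.5))   # -> 0 (GPU 0)
print(alloc.allocate(0.5))   # -> 0 (two workloads share one card)
print(alloc.allocate(0.75))  # -> 1 (GPU 0 is full, falls to GPU 1)
print(alloc.allocate(0.5))   # -> None (no room; workload queues)
```

The point of the sketch is the economics Ronen describes: two half-GPU jobs occupy one card instead of two, so an organization needs fewer expensive GPUs for the same workload.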
