
Eye On A.I.

#275 Nandan Nayampally: How Baya Systems is Fixing the Biggest Bottleneck in AI Chips (Data Flow)

31 Jul 2025

Description

What if the biggest challenge in AI isn't how fast chips can compute, but how quickly data can move? In this episode of Eye on AI, Nandan Nayampally, Chief Commercial Officer at Baya Systems, shares how the next era of computing is being shaped by smarter architecture, not just raw processing power. With experience leading teams at ARM, Amazon Alexa, and BrainChip, Nandan brings a rare perspective on how modern chip design is evolving.

We dive into the world of chiplets, network-on-chip (NoC) technology, silicon photonics, and neuromorphic computing. Nandan explains why the traditional path of scaling transistors is no longer enough, and how Baya Systems is solving the real bottlenecks in AI hardware through efficient data movement and modular design. From punch cards to AGI, this conversation maps the full arc of computing innovation.

If you want to understand how to build hardware for the future of AI, this episode is a must-listen. Subscribe to Eye on AI for more conversations on the future of artificial intelligence and system design.

Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI

(00:00) Why AI's Bottleneck Is Data Movement
(01:26) Nandan's Background and Semiconductor Career
(03:06) What Baya Systems Does: Network-on-Chip + Software
(08:40) A Brief History of Computing: From Punch Cards to AGI
(11:47) Silicon Photonics and the Evolution of Data Transfer
(20:04) How Baya Is Solving Real AI Hardware Challenges
(22:13) Understanding CPUs, GPUs, and NPUs in AI Workloads
(24:09) Building Efficient Chips: Cost, Speed, and Customization
(27:17) Performance, Power, and Area (PPA) in Chip Design
(30:55) Partnering to Build Next-Gen Photonic and Copper Systems
(32:29) Why Moore's Law Has Slowed and What Comes Next
(34:49) Wafer-Scale vs Traditional Die: Where Baya Fits In
(36:10) Chiplet Stacking and Composability Explained
(39:44) The Future of On-Chip Networking
(41:10) Neuromorphic Computing: Energy-Efficient AI
(43:02) Edge AI, Small Models, and Structured State Spaces

