Tim Davis
Yeah.
Yeah, so I can talk a little bit about our platform.
So how we started was we recognized at Google that one of the biggest challenges is that most software that people interact with today is built on top of what hardware manufacturers build for their chips.
So at the end of the day, you write a program, it needs to map to the silicon that it's executing on.
But the challenge is every hardware manufacturer comes out with its own stack.
So, you know, NVIDIA has CUDA, AMD has ROCm.
I mean, you could go through all the different silicon providers.
So where we started was to say: is there a world where we could build an independent software stack that enables us to bring up hardware without needing to rely on the vendors' libraries and infrastructure?
And so that was sort of the big challenge.
So one of the things we built, and I say this lovingly, my co-founder Chris Lattner doesn't need much incentive to build a new programming language.
He loves it; in many ways, this has been his life's work.
So what we realized, though, was the real question: was there a way to build an abstraction where we could essentially write programs that became portable by nature?
And there's a lot to that.
I could go into more detail.
But fundamentally, the concept is, could you write software, particularly AI software, very low-level operations, that would enable you to go across different types of hardware easily?
Kind of a translator, sort of?
Yeah, except it's a brand new programming language.
So we actually rely on some technology we built at Google called MLIR, which, and this might go too low-level, stands for Multi-Level Intermediate Representation.
So it's about building a representation of a program that can go across different types of silicon.
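The idea described here can be sketched in miniature. This is a toy illustration, not Modular's or MLIR's actual design: a program is captured once as hardware-agnostic ops, and small per-target "lowering" tables are the only vendor-specific part. All names in the lowering tables are real vendor library routines used purely as illustrative strings.

```python
from dataclasses import dataclass

@dataclass
class Op:
    name: str       # abstract operation, e.g. "matmul"
    operands: tuple # symbolic operand names

# One abstract program, written once, independent of any silicon.
program = [Op("matmul", ("A", "B")), Op("relu", ("C",))]

# Per-target lowering tables: the only hardware-specific piece.
LOWERINGS = {
    "cuda": {"matmul": "cublasGemmEx",
             "relu": "cudnnActivationForward"},
    "rocm": {"matmul": "rocblas_gemm_ex",
             "relu": "miopenActivationForward"},
}

def lower(program, target):
    """Translate abstract ops into target-specific library calls."""
    table = LOWERINGS[target]
    return [table[op.name] for op in program]

print(lower(program, "cuda"))  # ['cublasGemmEx', 'cudnnActivationForward']
print(lower(program, "rocm"))  # ['rocblas_gemm_ex', 'miopenActivationForward']
```

The point of a multi-level representation is that the same abstract program feeds every backend; supporting a new chip means adding a lowering, not rewriting the software.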