George Sivulka
I think the one thing that people in my position will always tell you is that the cost of intelligence will go to zero. Since Hebbia started, the cost of inference over a fixed number of parameters has decreased by seven orders of magnitude in four years. And so I genuinely believe that scaling compute is a no-brainer.
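A quick sanity check on that scale: seven orders of magnitude over four years works out to roughly a 56x cost reduction per year at a constant rate. The numbers below are illustrative arithmetic only, not Hebbia's actual pricing data:

```python
# Illustrative arithmetic: what a 10^7x cost drop over 4 years implies annually.
total_reduction = 10 ** 7   # seven orders of magnitude
years = 4

# Constant annual factor f such that f ** years == total_reduction
annual_factor = total_reduction ** (1 / years)
print(f"~{annual_factor:.0f}x cheaper per year")  # ~56x

# Example trajectory: a call costing $1.00 at the start
cost = 1.00
for year in range(1, years + 1):
    cost /= annual_factor
    print(f"after year {year}: ${cost:.7f}")
```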
And yes, we run more large language model calls than anyone might even say would ever be necessary. But we have the best accuracy in the business. We can answer much more complex problems. We're driving real value for enterprises. And every single quarter, as our margins improve, I actually think we're not spending money fast enough.
Ultimately, the model layer will become commoditized. I think this is not a hot take anymore; I've been saying it for a few years. I think a lot of value will accrue at the hardware layer. And we could talk about what that means for NVIDIA, especially as NVIDIA has a stranglehold on training, but not as much of a stranglehold on inference.
And so you might actually see other chip makers' chips start to be used in a more meaningful way, because CUDA is what all ML scientists were trained on in their PhDs, but for inference it doesn't really matter what you're using. And I think it will be the infrastructure layer, and then actually the application or agent layer, that will accrue the most value.
I think it might. There are probably fewer players and more entrenched players in cloud. And ultimately, I think those players honestly have kind of an OPEC-like oligopoly where they can control pricing. I just think that ultimately cloud is actually more complex than training larger and larger models.
Absolutely. Whoever has the best models will continue to attract the right amount of investment. The thing that's different about clouds, though, is that the cost of switching is much higher. So to refine my earlier point, I can switch models readily. I think there are even entire businesses now.
There will be an entire industry of being able to switch models from OpenAI to Anthropic when OpenAI goes down. But to switch clouds is, for any substantially sized startup, like a $10 million to $20 million investment just to switch. It's almost never worth it. It's much, much stickier. Whereas here, it's a very simple API key. It's very simple to switch models.
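The model-switching layer he describes can be sketched as a simple failover router: try the primary provider, and on an outage fall through to the next one. This is a minimal illustration; `call_openai` and `call_anthropic` are hypothetical stand-ins for the real OpenAI and Anthropic SDK clients, which have their own APIs and authentication:

```python
# Minimal failover-router sketch: try providers in order, return the first success.
# call_openai / call_anthropic are hypothetical stand-ins, not real SDK calls.

class ProviderDown(Exception):
    pass

def call_openai(prompt: str) -> str:
    # Stand-in: pretend the primary provider is having an outage.
    raise ProviderDown("OpenAI unavailable")

def call_anthropic(prompt: str) -> str:
    # Stand-in for the fallback provider.
    return f"[anthropic] answer to: {prompt}"

def route(prompt: str, providers=(call_openai, call_anthropic)) -> str:
    """Try each provider in order; return the first successful response."""
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except ProviderDown as err:
            last_error = err  # record the failure and try the next provider
    raise RuntimeError(f"all providers failed: {last_error}")

print(route("summarize this filing"))  # falls through to the Anthropic stand-in
```

In practice the hard part of switching is prompt and evaluation compatibility across models, not the transport; the API-key swap itself is the easy part, which is his point.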