Dylan Patel
The more progress AI makes, or especially the higher the derivative of AI progress is, the better, because NVIDIA is in the best place. The higher the derivative, the sooner the market gets bigger and keeps expanding. And NVIDIA is the only one that does everything reliably right now.
Who historically has been a large NVIDIA customer.
I want to jump in. How much was the scale? There have been some numbers; some people with a higher-level economics understanding say that as you go from $1 billion of smuggling to $10 billion, you're hiding certain levels of economic activity.
And that's the most reasonable thing to me: there's going to be some level where it's so obvious that it's easier to find this economic activity.
Chips are the highest value per kilogram, probably by far. I have another question for you, Dylan. Do you track model API access internationally? How easy is it for Chinese companies to use hosted model APIs from the U.S.?
Distillation is standard practice in industry. If you're at a closed lab where you care closely about terms of service and IP, you distill from your own models. If you're a researcher and you're not building any products, you distill from OpenAI.
We've talked a lot about training language models. They are trained on text. In post-training, you're trying to train on very high-quality text whose features you want the model to match, or, if you're using RL, you're letting the model find its own thing. But for supervised fine-tuning and for preference data, you need completions that the model learns to imitate.
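The idea described above, sampling completions from a stronger model and fine-tuning a student to imitate them, can be sketched in miniature. This is a toy illustration, not a real LLM pipeline: both "models" are hypothetical bigram tables, and the student's "supervised fine-tuning" is just maximum-likelihood counting on teacher-generated sequences.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical teacher: a fixed bigram distribution P(next | current).
TEACHER = {
    "<s>": {"the": 0.7, "a": 0.3},
    "the": {"cat": 0.6, "dog": 0.4},
    "a":   {"cat": 0.5, "dog": 0.5},
    "cat": {"</s>": 1.0},
    "dog": {"</s>": 1.0},
}

def sample_completion(model):
    """Sample one token sequence from a bigram model, up to </s>."""
    tok, out = "<s>", []
    while tok != "</s>":
        tok = random.choices(list(model[tok]), weights=list(model[tok].values()))[0]
        out.append(tok)
    return out

# "Distillation data": completions generated by the teacher.
dataset = [sample_completion(TEACHER) for _ in range(2000)]

# Student "SFT": fit bigram probabilities by counting teacher outputs.
counts = defaultdict(lambda: defaultdict(int))
for seq in dataset:
    prev = "<s>"
    for tok in seq:
        counts[prev][tok] += 1
        prev = tok

student = {
    prev: {tok: c / sum(nxts.values()) for tok, c in nxts.items()}
    for prev, nxts in counts.items()
}
```

With enough samples, the student's probabilities converge toward the teacher's (e.g. `student["<s>"]["the"]` lands near 0.7), which is the essence of imitation-based distillation: the student never sees the teacher's weights, only its completions.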