Alex Heath
I guess on the regulation piece, as it relates to AI, you've been very vocal about what's happening in the EU. And you recently signed an open letter, and I believe it was basically saying that you guys just don't have clarity on how consent for training is supposed to work.
And I'm wondering what you think needs to happen there for things to move forward, because, like, Meta AI is not available in Europe, and the new Llama models are not. Is that something you see getting resolved at all? And what would it take?
But do you understand the concern people have about training data and how it's used? This idea that their data is being used for these models, they're not getting compensated, and the models are creating a lot of value. And I know you're giving away Llama, but you've got Meta AI. I understand the frustration that people have about that.
I think it's a naturally bad feeling to be like, oh, my data is now being used in a new way that I have no control or compensation over. Do you sympathize with that?
What does clarity look like to you there?
But you don't see a scenario where creators get, like, directly compensated for the use of their content?
To bring this full circle to where we started: as you're building augmented reality glasses, and given what you've learned about the societal implications of what you've built over the last decade, how are you thinking about this as it relates to glasses at scale? Because you're literally going to be augmenting reality, which is a responsibility.