Guillaume Verdon
I think that anxiety is good. I think, like I said, I want the free market to create aligned AIs that are reliable. And I think that's what he's trying to do with XAI. So I'm all for it. What I am against is sort of...
Stopping, let's say, the open source ecosystem from thriving by, let's say, in the executive order, claiming that open source LLMs are dual-use technologies and should be government controlled, and then everybody needs to register their GPU and their big matrices with the government.
I think that extra friction will dissuade a lot of hackers from contributing, hackers that could later become the researchers that make key discoveries that push us forward, including discoveries for AI safety. And so I think I just want to maintain ubiquity of opportunity to contribute to AI and to own a piece of the future.
It can't just be legislated behind some wall where only a few players get to play the game.
I think, again, if there was no one working on it, I would be a proponent of it. Again, our goal is to sort of bring balance. And obviously, a sense of urgency is a useful tool to make progress. It hacks our dopaminergic systems and gives us energy to work late into the night. I think also... having a higher purpose you're contributing to, right?
At the end of the day, it's like, what am I contributing to? I'm contributing to the growth of this beautiful machine so that we can seek to the stars. That's really inspiring. That's also a sort of neuro hack.
Yeah, I just think we have to be careful because, you know, safety is just the perfect cover for sort of centralization of power and, eventually, covering up corruption. I'm not saying it's corrupted now, but it could be down the line. And really, if you let the argument run, there's no amount of centralization of control that will be enough to ensure your safety.
There's always more nines of P(safety) that you can gain, you know, 99.9999% safe. Maybe you want another nine: oh, please give us full access to everything you do, full surveillance. And frankly, some proponents of AI safety have proposed things like having a global panopticon, right? Where you have centralized perception of everything going on.