Dylan Patel
I mean, it's like Karpathy tweeted: English is the hottest programming language, and that English is defined by a bunch of companies that are primarily in San Francisco.
Yeah, they're cultural backdoors. The thing that amplifies the relevance of culture with language models is that we are used to this mode of interacting with people in back-and-forth conversation. And we now have a very powerful computer system that slots into a social context we're used to, which makes people very... we don't know the extent to which people can be impacted by that.
Anthropic has research on this where they show that if you put certain phrases into the pre-training data, you can then elicit different behavior when you're actually using the model, because they've poisoned the pre-training data. As of now, I don't think anybody in a production system is trying to do anything like this. I think it's mostly...
Anthropic is doing very direct work on this, and mostly it's subtle things. We don't know how these models are going to generate tokens, what information they're going to represent, or what complex representations they have.
I mean, we've already seen this with recommendation systems.
There's no reason that in some number of years you couldn't train a language model to maximize time spent on a chat app. Like, right now they aren't trained that way.
Their time per session is like two hours. Yeah. Character AI very likely could be optimizing for this. The way this data is collected right now is naive: you're presented a few options and you choose one. But that's not the only way these models are going to be trained. It's naive stuff, like talk to an anime girl.