Gwern Branwen
Podcast Appearances
Yeah, I think at this point it mostly is just money that's stopping me.
I probably should bite the bullet and just move anyway.
But I'm a miser at heart, and I hate thinking of how many months of writing runway I'd have to give up for each month in San Francisco.
If someone wanted to give me, I don't know, 50K to 100K a year to move to SF and continue writing full-time like I do now, I'd take it in a heartbeat.
Until then, I'm still trying to psych myself up into a move.
I'm going to say that if you exclude capability from that, AI models are already much more diverse cognitively than humans are.
I think different LLMs think in very distinct ways that you can tell right away from a sample of them.
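A minimal sketch of how that distinguishability claim could be tested; the sample texts and the "model_a"/"model_b" labels are entirely hypothetical placeholders, and a real test would use actual completions collected from each model for the same prompts.

```python
# Crude stylometry sketch: a TF-IDF + logistic-regression classifier that
# guesses which model wrote a sample. The texts and "model_a"/"model_b"
# labels below are hypothetical placeholders, not real model outputs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

samples = [
    ("Certainly! Here is a concise summary of the key points.", "model_a"),
    ("Sure thing, let me break that down step by step for you.", "model_b"),
    ("Certainly! Below is an overview covering the main ideas.", "model_a"),
    ("Okay, so basically the idea here is pretty simple.",       "model_b"),
]
texts, labels = zip(*samples)

# Word and bigram frequencies are enough to capture house-style tics
# (stock openers, hedging phrases, formatting habits).
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(list(texts), list(labels))

print(clf.predict(["Certainly! Here is what you need to know."]))
```

If different LLMs really do have recognizably distinct styles, even a classifier this simple separates them; if they have converged, its accuracy collapses toward chance.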
So an LLM operates nothing like a GAN.
A GAN is also totally different from a VAE.
They have totally different latent spaces, especially at the lower end, among small or bad models.
They have wildly different artifacts and errors in a way that we just wouldn't see with humans.
I think humans are really quite similar in writing and attitude compared to the absurd outputs of these different kinds of models.
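To make the GAN-versus-VAE contrast concrete, here is a toy sketch, using hypothetical miniature models rather than any specific published architecture, that feeds the same latent codes to a GAN-style generator and a VAE-style decoder; the two map identical codes into outputs with different ranges and statistics, which is the sense in which their latent spaces are not interchangeable.

```python
# Toy illustration (hypothetical miniature models): decode the SAME latent
# vectors with a GAN-style generator and a VAE-style decoder to show that
# the two latent spaces are not interchangeable.
import torch
import torch.nn as nn

LATENT_DIM = 16
IMG_PIXELS = 28 * 28

# GAN-style generator: maps N(0, I) noise straight to pixels via tanh.
gan_generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, IMG_PIXELS), nn.Tanh(),      # outputs in [-1, 1]
)

# VAE-style decoder: maps latents to Bernoulli means via sigmoid.
vae_decoder = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, IMG_PIXELS), nn.Sigmoid(),   # outputs in [0, 1]
)

torch.manual_seed(0)
z = torch.randn(8, LATENT_DIM)   # identical latent codes fed to both models

gan_out = gan_generator(z)
vae_out = vae_decoder(z)

# Even before training, the output ranges and statistics differ; after
# training on the same data, the typical artifacts diverge much further.
print("GAN output range:", gan_out.min().item(), gan_out.max().item())
print("VAE output range:", vae_out.min().item(), vae_out.max().item())
```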
Really?
Yeah, but I mean, this is all very heavily tuned, right?
So now you're restricting it to relatively recent LLMs, with everyone riding on each other's coattails, if not training on the exact same data.
So I think this is a situation much closer to them being identical twins.
If I'm not restricting myself to just LLMs and I compare the wide diversity of, say, image generation models that we've had, they often work in totally different ways, right?
Some of them seem as similar to each other as ants do to beavers.
I think within LLMs, I would agree that there has been a massive loss of diversity.
Things used to be way more diverse among LLMs.