Will Douglas Heaven
And not only is that kind of weird conceptually; we've made things that everyone is now using that nobody really understands.
But if you want to fix their flaws, if you want to stop them bullshitting, if you want to stop them being unsafe and generating things that we don't want them to,
then, yeah, we need a better handle on what's going on inside them.
So that's the motivation for why people are now studying these things.
It's kind of back to front, really.
Usually in the history of technology, we have a really good understanding of something, and then we go and make it, right?
You need to understand something before you make it.
Exactly.
Something that's commonly said is that the theory of modern AI lags behind the engineering know-how.
The fact that we can make these things and the fact that they work so well is astonishing, but it's engineering best practice.
You can imagine it as an edifice that has been built up by generations of researchers, each coming along and adding to it. So if you're sitting down to build a new model, you know from the history of the people who have gone before what works and what doesn't.
We know how to do it, but we don't know the why so much.
There's lots of approaches to that why.
One of them is sort of trying to peer inside these models as they work.
And that was why, to give a sense of the size, I said that if you laid one of these models out, it would cover all of San Francisco. There's this sense of them being sort of vast aliens.
I'm not the first person to sort of compare them to aliens.
I don't know if listeners know of a guy called Geoffrey Hinton, but he was one of the pioneers, one of the godfathers, so-called, of AI.
And he sort of hit public consciousness more a few years ago when he suddenly decided, actually, I'm scared of this stuff that I've built.