Yeah.
And very involved in this effort.
And so I'm less sympathetic...
You just said they're wrong.
...to these other approaches, especially because our recent work has been so successful.
Yeah.
Before I forget it, though, I do have one thread on feature universality that you might want to include.
So there are some really interesting behavioral evolutionary biology experiments on whether humans should learn a true representation of the world or not.
You could imagine a world in which we saw all venomous animals as flashing neon pink, a world in which we'd survive better.
And so it would make sense for us to not have a realistic representation of the world.
And there's some work where they'll simulate simple agents and see if the representations they learn map onto the tools they can use and the inputs they receive.
And it turns out that if you have these agents perform more than a certain number of tasks with a given set of basic tools and objects in the world, they learn a ground-truth representation: there are so many possible use cases for these base objects that you actually want to learn what the object really is, not some cheap visual heuristic.
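(A minimal sketch of that kind of experiment, in toy form rather than the actual study being described: objects have ground-truth latent features, each task is a different random function of those features, and a linear probe checks whether an encoder trained on more tasks recovers a more faithful representation. Every name, dimension, and design choice here is illustrative.)

```python
# Hypothetical toy version of the multi-task representation experiment.
import torch
import torch.nn as nn

torch.manual_seed(0)

N_OBJECTS, TRUE_DIM, OBS_DIM, HIDDEN = 256, 8, 32, 16

true_feats = torch.randn(N_OBJECTS, TRUE_DIM)   # ground-truth object properties
obs_map = torch.randn(TRUE_DIM, OBS_DIM)        # fixed "sensory" projection
observations = true_feats @ obs_map             # what the agent actually perceives

def train_encoder(n_tasks: int, steps: int = 2000) -> nn.Module:
    """Train one encoder to solve n_tasks random regression tasks at once."""
    task_heads = torch.randn(n_tasks, TRUE_DIM)  # each task depends on the true features
    targets = true_feats @ task_heads.T          # (N_OBJECTS, n_tasks)
    encoder = nn.Sequential(
        nn.Linear(OBS_DIM, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, HIDDEN)
    )
    readout = nn.Linear(HIDDEN, n_tasks)
    opt = torch.optim.Adam([*encoder.parameters(), *readout.parameters()], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(readout(encoder(observations)), targets)
        loss.backward()
        opt.step()
    return encoder

def probe_ground_truth(encoder: nn.Module) -> float:
    """R^2 of a closed-form linear probe from representations to true features."""
    with torch.no_grad():
        reps = encoder(observations)
        sol = torch.linalg.lstsq(reps, true_feats).solution
        resid = reps @ sol - true_feats
        return (1 - resid.var() / true_feats.var()).item()

for n_tasks in (1, 4, 16, 64):
    enc = train_encoder(n_tasks)
    print(f"{n_tasks:3d} tasks -> ground-truth probe R^2 = {probe_ground_truth(enc):.2f}")
```

With one or two tasks, the encoder can get away with a low-dimensional shortcut; as tasks multiply, only the true feature space supports them all, which is the intuition in the passage above.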
And so, to the extent that all living organisms are trying to actively predict what comes next and form a really accurate world model (and we haven't talked at all about, for instance, the free energy principle or predictive coding), it wouldn't surprise me, or I'm optimistic, that we are learning genuine features of the world that are good for modeling it.
And our language models will do the same, especially because we're training them on human data and human text.
I think that's kind of why I bring this up, as the optimistic take.
Predicting the internet is very different from what we're doing though, right?
Like the models are way better at predicting next tokens than we are.
They're trained on so much garbage.