Sean Carroll
We have to sort of have sub-goals along the way that give us a pretty good approximate trajectory.
There's always this question about Bayesian reasoning, because the whole picture is that you have some prior probabilities, you get some data, you calculate a likelihood function, and you update your priors.
But so then where did the priors come from?
Is that question involved here?
Like, where do human beings actually get their rough feelings about the plausibility of different propositions?
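A minimal sketch of the update loop being described here, in Python; the coin-bias hypotheses and the seven-heads-in-ten-flips data are made-up illustrative numbers, not anything from the conversation:

```python
import numpy as np

# The loop described above: prior -> data -> likelihood -> updated prior.
# Hypotheses: possible biases of a coin (an illustrative choice).
biases = np.linspace(0.01, 0.99, 99)

# The prior itself: this is exactly the part whose origin is being asked
# about. Here we simply assume a uniform prior over the biases.
prior = np.ones_like(biases) / len(biases)

# Some data: say 7 heads in 10 flips (made-up numbers).
heads, flips = 7, 10

# Likelihood of that data under each hypothesis (binomial, up to a constant).
likelihood = biases**heads * (1 - biases)**(flips - heads)

# Update: the posterior is prior times likelihood, renormalized.
posterior = prior * likelihood
posterior /= posterior.sum()

print("most probable bias after updating:", biases[np.argmax(posterior)])
```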
So we're not blank slates, right?
I mean, I guess various thinkers from Kant to Noam Chomsky have said that like, yeah, we're born with some ideas in our heads.
And presumably we're a lot better now in the 21st century at teasing out which ideas we're actually born with and which we pick up along the way.
Yeah.
And is there a thought that we should design our AIs similarly?
I mean, it seems like the lesson from connectionist approaches to AI that have led to large language models, et cetera, is that human beings have done all that work.
We can just let the AIs be blank slates and train them on a huge amount of data.
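A minimal sketch of that blank-slate recipe, assuming a PyTorch-style setup; the toy next-token model, the sizes, and the random stand-in corpus are all illustrative, not any real LLM's training code:

```python
import torch
import torch.nn as nn

# "Blank slate": the model starts from random weights, and everything it
# ends up knowing comes from one task: predict the next token in a corpus.
vocab_size, dim = 1000, 64
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for "a huge amount of data": random token ids instead of text
# humans actually wrote, just so the loop runs.
tokens = torch.randint(0, vocab_size, (10_000,))

for step in range(100):
    i = torch.randint(0, len(tokens) - 1, (32,))  # random positions
    logits = model(tokens[i])                     # guess the next token
    loss = loss_fn(logits, tokens[i + 1])         # score against the truth
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```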
And so the way that this works is... Giving it a head start, in a sense.
So my limited knowledge of this stuff goes back to AlphaGo and AlphaZero, the Go- and chess-playing programs.
And I'm told that those programs did better if they never were exposed to human chess players and Go players and just learned it themselves.
So is there a worry, analogously, with your version, where you can sort of do a little bit of a shortcut to give the models a head start by initializing them in a certain way?
Will that make them less creative in some way?
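A minimal sketch of the two initializations being contrasted, again in a PyTorch-style setup; the tiny policy network and the faked "pretrained" weights are hypothetical stand-ins:

```python
import torch
import torch.nn as nn

# A tiny policy network; the architecture is a hypothetical stand-in.
def make_policy(n_inputs: int = 64, n_moves: int = 128) -> nn.Module:
    return nn.Sequential(
        nn.Linear(n_inputs, 256),
        nn.ReLU(),
        nn.Linear(256, n_moves),
    )

# Option 1: tabula rasa, AlphaZero-style. Random weights, no human data;
# all structure would have to come from self-play.
blank_slate = make_policy()

# Option 2: the "head start". Initialize from weights that already encode
# some prior knowledge. Here we fake the checkpoint by copying another
# network's weights; in practice it might come from human game records.
pretrained_weights = make_policy().state_dict()
head_start = make_policy()
head_start.load_state_dict(pretrained_weights)
```

The worry voiced in the question is whether the second initialization, by baking in human-derived priors, narrows what the system can later discover compared with learning everything from scratch.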
Well, I want to hear more about that little parenthesis you just said.
I mean, I always presumed that the internal machinations of the LLMs that output a human sounding sentence were very, very different than what goes on in an actual human brain.