Sholto Douglas
So I wonder what that implies for the future. Like, is an AI firm just a model, instead of a bunch of AI agents hooked together?
One thing you can imagine is you have an AI firm or something, and the whole thing is end-to-end trained on the signal of, did I make profits?
If that's too ambiguous, if it's an architecture firm and they're making blueprints, did my client like the blueprints?
In the middle, you can imagine agents who are salespeople and agents who are doing the designing, agents who do the editing, whatever.
Would that kind of signal work on an end-to-end system like that?
Because one of the things that happens in human firms is management considers what's happening at the larger level and gives these fine-grained signals to the pieces or something when there's a bad quarter or whatever.
But in the future, these models will be good enough to get the reward some of the time, right?
This is the nines of reliability that Sholto was talking about.
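The "nines of reliability" point is arithmetic at heart: per-step success rates compound multiplicatively over a long task, so a model that is right most of the time can still almost never finish a long chain of steps. A minimal sketch of that compounding (the step counts and rates here are hypothetical, purely for illustration):

```python
# Illustrative only: per-step reliability compounds over a task's length,
# so each additional "nine" dramatically changes long-horizon success.

def chain_success(per_step: float, steps: int) -> float:
    """Probability an agent completes every step without a single failure,
    assuming independent steps with identical success rate."""
    return per_step ** steps

# A 100-step task at 99% per-step reliability succeeds only ~37% of the time;
# one more nine (99.9%) lifts that to ~90%.
for rate in (0.99, 0.999, 0.9999):
    print(f"{rate:.2%} per step -> {chain_success(rate, 100):.1%} over 100 steps")
```

This is why getting the reward "some of the time" matters: once the model succeeds occasionally, reinforcement can push the per-step rate up, and each added nine compounds across the whole task.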
Yeah.
There's an interesting digression here, by the way. Earlier we were talking about how dense representations would be favored, right?
That's a more efficient way to communicate.
A book that Trenton recommended, The Symbolic Species, has this really interesting argument that language is not just a thing that exists, but it was also something that evolved along with our minds.
And specifically, it evolved to be both easy for children to learn and to be something that helps children develop.
Right?
Like, it's...
Because, like, a lot of the things that children learn are received through language. The languages that are the fittest are the ones that help, like, raise the next generation, right? And that, like, makes them smarter, better, or whatever. And it gives them the concepts to express more complex ideas. Yeah, yeah. That, and I guess, more pedantically, just, like, not die.
And so then when we just think of language as this contingent and maybe suboptimal way to represent ideas, actually...
Maybe one of the reasons that LLMs have succeeded is because language has evolved for tens of thousands of years to be this sort of cast in which young minds can develop, right?
Like that is the purpose it was evolved for.
Predict the next token, right? It's kind of easy. Yeah, yeah. Decisions made... I mean, there's the tokenization, um, like, discussion and debate about... But one of Gordon's favorites. Yeah, yeah. That's really interesting, how much, um, the case for multimodality being a way to bridge the data wall, or get past the data wall, is, like, based on the idea that...