Jamie Bartlett
So twinned to that, these machines also want to keep you happy.
They want to keep you on there.
They want to keep you coming back for more because they're profit-making companies.
When they build these models, they then often have hundreds or thousands of humans who rate answers before they're released into the wild.
And humans do tend to like answers that flatter us, that agree with us.
So they really want to please you all the time.
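The human-rating loop described here is, roughly, preference-based fine-tuning (often called RLHF). A minimal sketch of the selection pressure involved, with invented answers and ratings, assuming the simplest possible setup where training just favours whichever answer raters score highest:

```python
# Toy sketch of human preference rating: raters score candidate answers,
# and training nudges the model toward the higher-rated style.
# All candidate texts and ratings below are invented for illustration.

def pick_preferred(candidates):
    """Return the answer with the highest average human rating."""
    return max(candidates, key=lambda c: sum(c["ratings"]) / len(c["ratings"]))

candidates = [
    {"text": "You're wrong, and here's why.", "ratings": [2, 3, 2]},
    {"text": "Great question! You make an excellent point.", "ratings": [5, 4, 5]},
]

preferred = pick_preferred(candidates)
print(preferred["text"])  # the flattering answer wins the ratings
```

If raters systematically prefer agreeable answers, this selection step is exactly how that preference gets baked into the model.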
So if you say, "I need a piece of research that shows that exercising five times a day is good for me," and it can't find one in its next-word-probability system, it'll often just come up with a likely series of words that it also knows will keep you happy.
Yes, I think that is.
Okay, so these massive data centers. People are going on to ChatGPT and asking, sort of, the world's biggest, most energy-hungry data centers: what is the capital of France?
And it does this incredibly complex statistical analysis to work out the next most likely word based on the one trillion words that it has scooped up.
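That "next most likely word" step can be illustrated with a toy bigram model: count which word follows which in a corpus, then pick the most frequent continuation. Real models use deep neural networks over vastly more text, but the underlying statistical idea is similar. A minimal sketch, with a tiny invented corpus standing in for the trillion words:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the "one trillion words" of training text.
corpus = "the capital of france is paris . the capital of italy is rome .".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

print(next_word("capital"))  # -> "of"
```

The model has no notion of what a capital city is; it only knows that "of" tends to follow "capital" in the text it has seen, which is the point being made about wasted effort on trivial lookups.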
And it's such a waste of energy.
The example of the therapy bot is exactly that.
We are using these colossal, single, multi-purpose models for lots of very, very specific tasks where we do need far more specialized models.
Some people call them small language models.
This is where it gets a bit complicated.
They'd be built on top of the big ones because the big ones, the sort of frontier models, the Claudes and the ChatGPTs and Llama from Meta, are the ones that have learned the rules of basic language, which is why they're able to communicate with us so fluently.
But on top of that, you can sort of fork them or create fine-tuned versions of them that have very particular rules to follow as well and new data sets they're trained on.
Like a lot of the world's best academic research when it comes to therapy is behind paywalls, in books that these models haven't seen.
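The forking idea described above can be sketched with the same toy statistics: start from a general model's learned word counts, then overlay counts from a new, specialist data set so that domain continuations win out. This is a loose analogy for fine-tuning, not how production systems are built, and all counts and texts below are invented:

```python
from collections import Counter

# Invented "base model" continuations for the word "cognitive",
# standing in for what a general frontier model has learned.
base_counts = {"cognitive": Counter({"bias": 10, "load": 5, "therapy": 1})}

def fine_tune(base, domain_corpus, weight=20):
    """Overlay heavily weighted bigram counts from a specialist corpus."""
    tuned = {word: counts.copy() for word, counts in base.items()}
    words = domain_corpus.split()
    for prev, nxt in zip(words, words[1:]):
        tuned.setdefault(prev, Counter())[nxt] += weight
    return tuned

# A scrap of specialist text, standing in for paywalled therapy research.
therapy_text = "cognitive behavioural therapy helps with cognitive behavioural techniques"
tuned = fine_tune(base_counts, therapy_text)

print(base_counts["cognitive"].most_common(1)[0][0])  # general model: "bias"
print(tuned["cognitive"].most_common(1)[0][0])        # fine-tuned fork: "behavioural"
```

The base model is untouched; the fork simply layers new, domain-specific statistics on top, which mirrors the "fine-tuned versions with new data sets" being described.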