Steven Byrnes
In this post, I'm trying to put forward a narrow, pedagogical point, one that comes up mainly when I'm arguing that LLMs have limitations that human learning does not.
For example here, here, here.
See the bottom of the post for a list of subtexts that you should not read into this post, including "therefore LLMs are dumb" or "therefore LLMs can't possibly scale to superintelligence."
Some intuitions on how to think about "real" continual learning
Consider an algorithm for training a reinforcement learning (RL) agent, like the Atari-playing Deep Q-Network (2013) or AlphaZero (2017), or think of within-lifetime learning in the human brain, which, I claim, is in the general class of model-based reinforcement learning, broadly construed.
These are all real-deal, full-fledged learning algorithms.
There's an algorithm for choosing the next action right now, and there are one or more update rules for permanently changing some adjustable parameters (a.k.a. weights) in the model such that its actions and/or predictions will be better in the future.
And indeed, the longer you run them, the more competent they get.
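The act-now / update-weights split above can be sketched as a minimal tabular Q-learning loop. This is my illustrative sketch, not the DQN or AlphaZero implementation: the toy chain environment, the state and action names, and the hyperparameters are all invented for the example (DQN replaces the table with a neural network, but the two-part structure is the same).

```python
import random

random.seed(0)

ACTIONS = ["L", "R"]
GOAL = 3  # reaching state 3 ends the episode with reward 1

def step(state, action):
    # Toy chain environment (invented for illustration): states 0..3,
    # move right toward the goal or left back toward 0.
    next_state = min(state + 1, GOAL) if action == "R" else max(state - 1, 0)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

def act(Q, state, epsilon=0.3):
    # The "choose the next action right now" part: mostly exploit the
    # current value estimates, occasionally explore at random.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def update(Q, s, a, r, s2, alpha=0.5, gamma=0.9):
    # The "permanently change some adjustable parameters" part: nudge the
    # stored value toward (reward + discounted best next-state value).
    best_next = max(Q.get((s2, b), 0.0) for b in ACTIONS)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

Q = {}
for _ in range(200):  # each episode starts at state 0 and runs to the goal
    s = 0
    while s != GOAL:
        a = act(Q, s)
        s2, r = step(s, a)
        update(Q, s, a, r, s2)
        s = s2
```

The longer the loop runs, the better the value estimates get: after a few hundred episodes, the learned value of moving right at state 0 exceeds the value of moving left, with no expert examples to copy anywhere in sight.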
When we think of continual learning, I suggest that those are good central examples to keep in mind.
Here are some aspects to note.
Knowledge versus information.
These systems allow for continual acquisition of knowledge, not just information.
This continual learning can install wholly new ways of conceptualizing and navigating the world, not just keeping track of what's going on.
Huge capacity for open-ended learning.
These examples all have huge capacity for continual learning, indeed enough that they can start from random initialization and continually learn all the way to expert-level competence.
Likewise, new continual learning can build on previous continual learning in an ever-growing tower.
Ability to figure things out that aren't already on display in the environment.
For example, an Atari-playing RL agent will get better and better at playing an Atari game, even without having any expert examples to copy.
Likewise, billions of humans over thousands of years invented language, math, science, and a whole $100T global economy from scratch, all by ourselves, without angels dropping new training data from the heavens.