Steven Byrnes

LessWrong (Curated & Popular)
"You can’t imitation-learn how to continual-learn" by Steven Byrnes

In this post, I'm trying to put forward a narrow, pedagogical point, one that comes up mainly when I argue that LLMs have limitations that human learning does not.

For example here, here, here.

See the bottom of the post for a list of subtexts that you should not read into this post, including "therefore LLMs are dumb" or "therefore LLMs can't possibly scale to superintelligence."

Some intuitions on how to think about real continual learning.

Consider an algorithm for training a reinforcement learning (RL) agent, like the Atari-playing Deep Q-Network (2013) or AlphaZero (2017); or think of within-lifetime learning in the human brain, which, I claim, is in the general class of model-based reinforcement learning, broadly construed.

These are all real-deal, full-fledged learning algorithms.

There's an algorithm for choosing the next action right now, and there are one or more update rules for permanently changing some adjustable parameters (a.k.a. "weights") in the model, such that its actions and/or predictions will be better in the future.
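These two pieces can be sketched concretely. Below is a minimal tabular Q-learning loop on a toy five-state chain; the environment, reward, and hyperparameters are illustrative assumptions, not from the post. `act` is the piece that chooses the next action right now, and `update` is the rule that permanently changes the stored parameters so future behavior improves.

```python
import random

random.seed(0)  # reproducibility of this sketch

N_STATES = 5          # states 0..4; entering state 4 ends the episode with reward 1
ACTIONS = (-1, +1)    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # illustrative hyperparameters

# The adjustable parameters ("weights"): one value per state-action pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def act(s):
    """Piece 1: choose the next action right now (epsilon-greedy over Q)."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

def update(s, a, r, s_next):
    """Piece 2: permanently change the parameters so that future
    actions/predictions are better (one-step Q-learning update)."""
    target = r + GAMMA * max(Q[(s_next, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

def run_episode():
    s = 0
    while s != N_STATES - 1:
        a = act(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        update(s, a, r, s_next)
        s = s_next

for _ in range(300):
    run_episode()
```

After a few hundred episodes the greedy policy from every state is to step right toward the rewarded terminal state, even though the agent started from all-zero parameters. The same two-piece structure (an action chooser plus an update rule) is what the full-fledged systems above share, just with function approximators in place of the table.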

And indeed, the longer you run them, the more competent they get.

When we think of continual learning, I suggest that those are good central examples to keep in mind.

Here are some aspects to note.

Knowledge versus information.

These systems allow for continual acquisition of knowledge, not just information.

This continual learning can install wholly new ways of conceptualizing and navigating the world, not just keeping track of what's going on.

Huge capacity for open-ended learning.

These examples all have huge capacity for continual learning, indeed enough that they can start from random initialization and continually learn all the way to expert-level competence.

Likewise, new continual learning can build on previous continual learning in an ever-growing tower.

Ability to figure things out that aren't already on display in the environment.

For example, an Atari-playing RL agent will get better and better at playing an Atari game, even without having any expert examples to copy.

Likewise, billions of humans over thousands of years invented language, math, science, and a whole $100T global economy from scratch, all by ourselves, without angels dropping new training data from the heavens.
