Dr. Terry Sejnowski
It's true. Yeah, it's true. By the way, there are two kinds of dreams. Very interesting. So if you wake someone up during REM sleep, you get very vivid dreams. Changing dreams, they're always different and changing. But if you wake someone up during slow-wave sleep, you often get a dream report, but it's a kind of dream that keeps repeating over and over again every night.
And it's a very heavy emotional content.
Yeah.
Yeah, probably slow-wave sleep, yeah.
Well, so the NIH has something called the Pioneer Award. And what they're looking for are big ideas that could have a huge impact, right? So I put one in recently, and the title is "Temporal Context in Brains and Transformers."
AI, right? The key to ChatGPT is the fact that there's this new architecture. It's a deep learning architecture, a feedforward network, but it's called a transformer. And it has certain parts in it that are unique. There's one called self-attention. And it's a way of doing what is called temporal context. What it does is it connects words that are far apart.
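A minimal sketch of the self-attention step described here (illustrative, not code from the talk): each word's query is compared against every other word's key, so two words that sit far apart in the sequence can still be linked in a single step. The matrices `w_q`, `w_k`, and `w_v` stand in for the learned projections.

```python
# Minimal self-attention sketch (illustrative, not from the talk).
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) word embeddings; w_q, w_k, w_v: learned projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])          # similarity of every word to every other word
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ v                               # each output mixes information from the whole sequence

# Toy usage: random embeddings for a 6-word "sentence".
rng = np.random.default_rng(0)
seq_len, d = 6, 8
x = rng.normal(size=(seq_len, d))
w_q, w_k, w_v = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)        # (6, 8)
```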
You give it a sequence of words and it can tell you the association. Like if I use the word "this," you have to figure out what it referred to in the last sentence. Well, there are three or four nouns it could have referred to. But from context, you can figure out which one it is. And you can learn that association.
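A hedged sketch of how one might poke at this in practice, assuming the Hugging Face `transformers` library and the bert-base-uncased checkpoint (my choices, not the speaker's): the attention weights on an ambiguous word like "it" show which earlier words the model links it to.

```python
# Sketch: inspect which earlier words an ambiguous pronoun attends to.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

sentence = "The trophy did not fit in the suitcase because it was too big."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
it_pos = tokens.index("it")

# Average over heads in the last layer; row it_pos says how strongly "it"
# attends to every other token in the sentence.
weights = outputs.attentions[-1][0].mean(dim=0)[it_pos]
for tok, w in sorted(zip(tokens, weights.tolist()), key=lambda p: -p[1])[:5]:
    print(f"{tok:>10s}  {w:.3f}")
```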
Yes. I think that's an example, but it turns out that every word is ambiguous. It has like three or four meanings. And so you have to figure that out from context. In other words, there are words that live together and that come up often. And you can learn that just by predicting the next word in a sentence. That's how a transformer is trained.
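To make that training signal concrete, here is a minimal sketch assuming PyTorch; the embedding-plus-linear "model" is a toy stand-in for a real transformer. The network sees each word and pays a cross-entropy penalty for mispredicting the next one.

```python
# Toy next-word-prediction training step (stand-in model, not a real transformer).
import torch
import torch.nn as nn

vocab_size, d_model = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, d_model), nn.Linear(d_model, vocab_size))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (1, 16))   # a toy "sentence" of word ids
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict word t+1 from word t

logits = model(inputs)                                        # (batch, seq-1, vocab)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
optimizer.step()
print(f"next-word prediction loss: {loss.item():.3f}")
```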