
Joscha Bach

👤 Speaker
1434 total appearances

Appearances Over Time

Podcast Appearances

Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

And we use reasoning strategies that use some axiomatic consistency by which we can identify those strategies and thoughts and sub-universes that are viable and that can expand our thinking. So if you look at the language models, they have clear limitations right now. One of them is they're not coupled to the world in real time in the way in which our nervous systems are.

Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

So it's difficult for them to observe themselves in the universe and to observe what kind of universe they're in. Second, they don't do real-time learning. They basically get trained only with algorithms that rely on the data being available in batches, so training can be parallelized and run efficiently on the network and so on. And real-time learning would so far be very slow and inefficient.


Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

That's clearly something that our nervous systems can do to some degree. And there is a problem with these models staying coherent. And I suspect that all these problems are solvable without a technological revolution. We don't need fundamentally new algorithms to change that.


Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

For instance, you can enlarge the context window and thereby basically create a working memory in which you retain everything that happens during the day. And if that is not sufficient, you add a database and write some clever mechanisms that the system learns to use to swap stuff in and out of its prompt context.
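The mechanism Bach describes here resembles a bounded working memory backed by an external store. A minimal sketch, assuming a hypothetical `ContextMemory` class (all names are illustrative, not anything from the episode or any real system):

```python
# Sketch of a fixed-size "prompt context" (working memory) backed by an
# external database: items evicted from the context are swapped out to
# the store, and can be swapped back in on demand.

from collections import deque

class ContextMemory:
    def __init__(self, window_size):
        self.window_size = window_size  # max items kept in the prompt context
        self.context = deque()          # the model's working memory
        self.database = {}              # external long-term store

    def observe(self, key, item):
        """Add a new item; evict the oldest to the database when full."""
        self.context.append((key, item))
        while len(self.context) > self.window_size:
            old_key, old_item = self.context.popleft()
            self.database[old_key] = old_item   # swap out to the store

    def recall(self, key):
        """Swap an item from the database back into the context."""
        if key in self.database:
            self.observe(key, self.database.pop(key))

    def prompt(self):
        """Items currently visible to the model."""
        return [item for _, item in self.context]
```

In a real system the "clever mechanisms" would be learned retrieval policies rather than explicit `recall` calls; this only illustrates the swap-in/swap-out bookkeeping.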


Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

And if that is not sufficient, if your database is full in the evening, overnight you just train. The system is going to sleep and dream, and is going to train the stuff from its database into the larger model by fine-tuning it, building additional layers and so on. And then the next day it starts in the morning with a fresh database and fresh eyes, and has integrated all this stuff.
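The day/night cycle sketched above can be written down schematically. This is a toy illustration, with `fine_tune` as a stand-in for an actual training step (nothing here is a real training API):

```python
# Toy day/night consolidation cycle: experiences accumulate in a
# database during the "day"; "overnight" they are consolidated into the
# model (fine-tuning stands in for dreaming), and the database is
# cleared so the next day starts fresh.

def fine_tune(model, examples):
    # Placeholder consolidation step: merge examples into the model's
    # parameter dict; a real system would run gradient updates instead.
    model.update(examples)
    return model

def day_night_cycle(model, database, new_experiences):
    database.update(new_experiences)     # day: accumulate experiences
    model = fine_tune(model, database)   # night: train the database into the model
    database.clear()                     # morning: fresh, empty database
    return model, database

model, db = {}, {}
model, db = day_night_cycle(model, db, {"fact:sky": "blue"})
```

After the cycle, the experience lives in the model rather than the database, mirroring the consolidation-during-sleep analogy.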


Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

You know, when you talk to people and you have strong disagreements about something, which means that in their mind they have a faulty belief, or you have a faulty belief, with a lot of dependencies on it, very often you will not reach agreement in one session. You need to sleep on this once or multiple times before you have integrated all the necessary changes in your mind.


Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

So maybe it's already somewhat similar.


Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

Yeah. And of course, we can combine the language model with models that are coupled to reality in real time, and build multimodal models that bridge between vision models and language models and so on. So there is no reason to believe that language models will necessarily run into some problem that prevents them from becoming generally intelligent. But I don't know that.