Sean Carroll
We want to get the right answer when we add numbers together.
We want to logic our way through puzzles that we are given.
We want to reach rational, reasonable conclusions.
So how do you sort of fit together, on the one hand, the pristine rules of logic and reasoning to which we aspire as thinking reasonable creatures, and on the other hand, the reality of our minds and our brains and our embodied intelligence?
which has, number one, a whole bunch of different things that it was selected for over the course of biological time, and number two, all sorts of constraints in terms of energy and fuel and time and things like that.
If you had a brain, a human kind of brain, that was able to do arbitrarily good arithmetic, that might make it worse at other things that were more important for survival.
So the idea—this is a very hard problem for cognitive scientists—the idea of coming up with the laws of thought, the laws that an ideally rational creature working under certain constraints would follow—
Right.
So the quest for these laws of thought is what we're talking about today with Tom Griffiths, who's a cognitive scientist.
He has a new book coming out called, guess what, The Laws of Thought: The Quest for a Mathematical Theory of the Mind.
As I learned in the conversation, he has another book coming out also right now, which is sort of the technical version of this in some sense, or at least a companion to it.
So The Laws of Thought is meant for everybody.
The other book is with co-authors Falk Lieder and Frederick Callaway, and it's called The Rational Use of Cognitive Resources: A New Approach to Understanding Irrational Behavior, Modeling Human Cognition.
Part of what you learn by thinking about these things is that certain ways in which human beings act irrationally, when compared to the perfect laws of logic we might aspire to, actually have good reasons behind them, right?
There are reasons why we have certain inclinations, certain biases and so forth.
What does all that mean for the world of programming artificial intelligence?
Should we try to make the AIs think just like human beings?