Joshua Greene
My name is Joshua Greene, and I'm a professor in the Department of Psychology and Center for Brain Science at Harvard University, and I use he/him pronouns.
Gosh, I got here as a professor in 2006, so we're coming up on 19 years.
And then I was a wee undergrad here in the 90s as mostly a philosophy student.
Just nerd, you know, I think I was just really into it.
And I never had a sort of form of cool that went with being a philosopher.
What got you into it?
I mean, you could go way, way back asking lots and lots of questions as a kid.
I remember kind of being unsatisfied with a lot of the answers I was given in Hebrew school.
I was raised in a, you know, fairly Reform, secular Jewish community and family. I always used to argue, and someone was like, well, we've got to put this to good use, and suggested that I do debate.
So I started doing debate when I was like 12 and I was a pretty young 12 year old.
So I was like this little, little argumentative twerp with my, don't ask me why, yellow pants that I wore.
I think I didn't realize that was funky.
That was just the pants that I had.
And, you know, someone called me Mr. Banana Pants at age 12 as a debater.
That's harsh.
But I got interested in the questions.
And a lot of the questions were really about sort of fundamental social trade-offs.
You know, the rights of the individual versus the greater good kind of came up over and over again.
And this person said, you know, in cross-examination, which is very formal, it's like,
So do you agree that you're saying that it's better to always do the thing that will promote the greater good?
And I'm like, yes, yes, yes.
And then she said, OK, well, suppose there was a doctor who had some patients and five of these patients were missing organs of various kinds.
And then in comes a healthy person with two nice, clean, ready to go kidneys and a liver.
And you could take
the organs out of this one person and distribute them to these other five people and assume that the operation would work, would it be okay for the doctor to sacrifice that one person to save the other five patients?
And I was like, you know, and I lost that debate round.
But even worse, I kind of lost my
guiding philosophy that was always my go-to, right?
And that really stuck with me.
And this introduced me to the trolley problem, which was really the underlying sort of philosophical exploration of these sacrificial dilemmas where you can kill one person to save five people.
And what was beautiful about that was it had a really nice tight comparison, right?
This is now famous, but at the time, no one outside of philosophy had heard of these things.
So the trolley, for the people who don't know, like for the eight people out there who've never heard of this,
The trolley is headed towards five people and you can hit a switch and turn it onto another track where it will run over one person.
Most people say that that's acceptable.
Then there's the footbridge case: you're standing on a footbridge over the tracks, and you can push a person off so that they land in the trolley's path.
They get hit by the trolley, but it stops the trolley from running over the five.
Is that okay?
I felt, and most people think, that that's wrong.
And I thought, ah, this is like the perfect fruit fly.
Because here you've got the biggest divide in Western moral philosophies between the utilitarians like John Stuart Mill and Jeremy Bentham, who are saying morality is ultimately about producing good consequences.
And the kind of Kantians who say, no, morality is fundamentally about people's rights and our duties to respect those rights, and certain lines that must not be crossed.
And in the original switch case, where you can turn the trolley away from the five and onto the one, to the extent you agree with most people that it's okay to hit the switch, that fits very well with the utilitarian perspective, or consequentialist is how philosophers often say it.
But the footbridge case seems like a real vindication for Kant and the Kantians: no, even when you can promote the greater good, even if we grant that all of this will work, etc., it still seems wrong, right?
And I felt that and I was like, what is going on there?
And then I got more into the psychology and ultimately into the neuroscience behind that switch footbridge distinction in our heads.
And that kind of is what turned me from being just a regular philosopher into being a philosopher slash experimental psychologist slash cognitive neuroscientist.
So there are some philosophers whose ideas really matter a lot.
And the living philosopher who's been most important to me is the philosopher Peter Singer, who kind of blew my mind back in those early days when I was thinking about these types of moral dilemmas.
Just a quick background.
So Peter Singer...
And his ideas have literally saved millions of people's lives.
So philosophy is kind of high stakes, hit or miss.
The people who make a difference make an enormous difference.
And part of why I expanded into science is I felt like if I have any shot at making a difference as a philosopher, I'm in a better position to do it as a philosopher scientist who can look at what's going on in our heads and say, this is what's happening.
And when you understand that, does that change our thinking about what's really right or wrong?
Well, so Peter Singer is known for a few things.
I mean, one of them is essentially being the philosophical grandfather of the animal rights movement.
So he wrote a book called Animal Liberation that came out in the 70s.
And it starts out in an interesting way.
He says, you know, people say, oh, you're writing a book about animal rights.
You must really love animals.
And he says, no, it's not about what I love or don't love or want to cuddle up with or play fetch with.
It's about whether or not animals suffer.
which was a point that Jeremy Bentham, the original utilitarian, made back in the 18th century.
And he sort of made the case that our practices, especially with factory farming, just can't be morally justified.
The other big thing that he did, and this is the thing that has probably had the most direct influence on my work, is his famous drowning child argument.
So you may have heard some version of this.
You're walking along and there's a pond and
and there is a child who is drowning in the pond, and you can wade in and save this child, but you're gonna ruin your fancy new shoes or suit or whatever it is, and it'll cost you some amount of money to replace them.
And if you ask people, is it okay to let the child drown because you don't wanna ruin your clothes, most people would say, no, that's terrible, that's monstrous.
Okay, and Peter Singer says, good, I agree.
And then he says, but there are children on the other side of the world who are drowning in poverty, who are badly in need of food and medicine, and for the price of the clothes that you're wearing,
You or you combined with a small number of other people can save someone's life.
Well, shit.
So if you have an obligation to wade into the pond and save the child at some expense to yourself, why don't you have a comparable obligation to save people nearby or on the other side of the world whose lives are in grave danger due to their circumstances?
And a lot of people spent a lot of time trying to argue why Singer was wrong, but I was convinced that he was right, even if it goes against the grain of human nature.
So Singer essentially made the argument that we in the affluent world should be doing much more for people
in desperate need. And typically your money goes farthest overseas; you can provide a treatment that rids a child of devastating parasitic intestinal worms for less than a dollar, right?
And for $100, you can do it 100 times.
There's effectively no limit, right?
And that argument really stuck with me and has motivated a lot of the other work.
Absolutely.
I mean, take the case of animal rights.
I mean, when Singer wrote Animal Liberation, it was just a tiny fraction of the population that was vegetarian or vegan for moral reasons, right?
Especially in the West, among people who weren't already part of a religious tradition, let's say, that had that kind of norm.
Now there's nothing remarkable at all about meeting someone who's a vegetarian or these days even a vegan because they don't want to participate in killing animals and making them suffer.
And then the other thing is in terms of people in the affluent world using their money effectively to alleviate as much suffering as possible, which mostly means overseas, that movement really took off.
And billions have been raised very explicitly under this philosophical banner.
It's been a little complicated recently, so this is what I'm referring to as the effective altruism movement.
So, all of the above. The sort of breakthrough experiment that I did while I was a philosophy PhD student, this was done with my mentor at the time, Jonathan Cohen, who's still at Princeton.
I had the thought that what's going on in the footbridge case is there's a kind of an emotional response to the thought of sort of pushing this person and harming them in this very sort of direct and intentional way.
And that you could see that response in the brain.
So if you...
put people in the scanner and you have them consider dilemmas like the switch case and dilemmas like the footbridge case, you'd see more activity related to a kind of emotional response in the footbridge case.
The stronger that response, the more people would say, no, you can't push the guy off the footbridge or whatever it is.
And we found something broadly consistent with that in that first neuroimaging study, which was published in 2001.
The cool thing about brain imaging is that you can look and see what's going on, but it's a very noisy signal and you don't have experimental control, right?
You can change what people are reading or are asked to think about, but you can't turn on or off some part of the brain.
Whereas a brain lesion, that part of the brain is permanently turned off, right?
And so the actual studies or the work that made me think to make that prediction about brain imaging was work with patients like the famous case of Phineas Gage.
Listen to this.
So Phineas Gage is the 19th century railroad foreman who was working on the railroad all the live long day in Vermont and got an iron spike through the front of his head.
And as a result, was fundamentally changed.
I mean, you might think that someone who had that kind of injury wouldn't be able to speak, wouldn't ever be able to do another math problem.
But his rational faculties and language faculties and just general thinking ability remained intact.
But his emotions and his decision making were very much damaged.
And the way researchers who studied people like this put it, in particular in a book called Descartes' Error by Antonio Damasio, which I read as an undergrad, is that these people know the words, but they don't hear the music.
They don't feel the music, that they don't have the emotional response.
When I read that book, I literally jumped up and down on my bed when I got to that passage.
I was like, this is what's going on in the footbridge case. It's what's missing in these patients that have damage to the ventromedial prefrontal cortex, the part of your brain above your eyes, behind the middle of your forehead.
But eventually, people, including Damasio's group,
tested patients like that, and it was exactly as our results predicted.
That is, the patients with this kind of brain damage were much more likely to say that it's okay to push the guy off the footbridge.
And then people studied other types of patients.
So you have patients with damage to a part of the brain called the basolateral amygdala, which is involved in goal-directed planning.
And those people will never say that it's okay, or very, very rarely say that it's okay, to push the guy off the footbridge.
And you see similar responses in patients who have damage to a part of the brain called the hippocampus, which is involved in kind of envisioning a scenario and deciding how to act on it based on the details of what's going on.
People found that different types of drugs matter too: you give people an anti-anxiety drug and they become more okay with the utilitarian response.
And you give people a depression drug that has a sort of reverse effect early on, so that it actually heightens the emotional response, and those people are more likely to say that it's wrong.
And we've done further studies that have sort of teased apart the different circuits.
So now we have a decent kind of understanding of the basic neural circuitry involved in the sort of yes response and the no response to cases like the footbridge case.
Okay, well, then what is it exactly about the footbridge case?
And years ago, we did studies that suggested there were kind of two different things going on.
One is what you might call the pushing.
So personal force.
So if you ask people, is it okay to push somebody off the footbridge?
Like 30% of people will say yes.
And then you can do a version where, let's say, the person is standing over a trapdoor and you can hit a switch that will drop them through the footbridge onto the tracks, right?
In that initial study, like 60% of people said that was okay, right?
So something about pushing, and it doesn't matter if you push with your hands or push with a pole.
So it's not about the touching, it's about the pushing, right?
That's part of it.
And then the other part of it
is this distinction between harming intentionally versus as a side effect.
And this is something that goes all the way back to a theological doctrine from St. Thomas Aquinas.
And that's been used in the Catholic Church, for example, to distinguish between a surgical procedure that is an abortion versus one that's designed to save the life of the mother but would end up terminating the fetus's life.
Like, are you trying to do the thing that's harmful?
So basically, is it a side effect or not?
And we find that that matters also.
For example, if you're running across a narrow footbridge to get to a switch that's going to, you know, you can hit it and save the people and you're going to incidentally bump somebody off of that footbridge to their death, more people will say that's okay.
And that's a direct personal bump, but it's incidental.
The harm is a side effect.
And the personal force effect, the pushing versus hitting a switch, that was found everywhere in the world.
Really?
So basically, it tells us what violence is in our conception, right?
A violent action is really an action that has three things, beyond causing harm, which is in the background.
One of them is something I didn't mention before: it's active as opposed to passive.
Hmm.
So this would be the difference between making someone go over the footbridge versus them being about to fall and you not stopping it, right?
So active, intentional, not a side effect, and fairly direct.
Like those three things, that's kind of what makes up the core of our sense of this is a violent action.
And bringing this back to Peter Singer...
Part of why is that we're letting people die all the time, people whose lives we could save, and it doesn't feel like an act of violence because we're not going in there and killing them.
We're allowing circumstances to kill them.
So it's passive rather than active.
It's not our intention.
We're not achieving some specific goal by doing this.
And there's no physical directness there.
So it kind of explains why things...
that can be incredibly damaging, don't set off our alarm bells.
They don't have that paradigmatic feeling of like punching somebody in the face or pushing somebody off of a footbridge.
You know, often when I tell people, like, if you push with your hands, then it seems wrong, but if you drop somebody through a trapdoor with a switch, then it doesn't, people kind of laugh, right?
And that is a normative philosophical laugh.
What you're laughing at is, you're thinking, that doesn't make a lot of sense, right?
And the way I sometimes put it is, if someone called you from a footbridge and was like,
Alie, there's a train and it's coming.
There are five people and I might be able to save them.
Should I do it?
I'd have to kill someone.
And then you wouldn't say, well, that depends.
Would you have to push this person with your hands or could you do it more indirectly?
Like that shouldn't matter.
Yeah.
So that's a kind of bug.
And likewise, when it comes to something that's maybe easier to defend, like, you know,
caring about the people who are immediately in front of us, like the child who's drowning in front of us, or even more so people with whom we have a personal relationship,
We can understand that in more evolutionary terms.
We evolved to be cooperative creatures.
The group that is willing to pull its fellow tribe mates out of the raging river, that group's going to survive much better.
Our moral, emotional dispositions are designed for this group teamwork, but they're not designed to save the lives of strangers on the other side of the world.
It wasn't even possible for most of human history.
And the goal from an evolutionary point of view is for you and the members of that group to survive and spread your genes, right?
It's not about making the world better in some objective sense.
So the thought is that we can understand what we react to and what we don't.
And from at least a certain detached perspective, we can say, you know, it seems like we might overreact to certain types of harms.
Like let's say physician-assisted suicide.
Mm-hmm.
where someone is in miserable shape and they're never going to recover and they're just in a lot of pain and they feel like their life has no dignity and they feel like it's time to go, right?
And interestingly, recently it was revealed that Daniel Kahneman, the father of sort of heuristics and biases and behavioral decision-making,
chose to go to Switzerland to end his life.
And I think it's not an accident that someone who studies decision-making would make that kind of choice because he understands his, what he'd call system one sort of intuitions and his system two reasoning, and is generally a system two kind of guy.
And then perhaps, I think even more importantly,
The fact that we are not moved by the suffering of people on the other side of the world in the same way that we're moved by someone who's drowning right in front of us, we should view that as under-alarming, our alarm bells failing to go off when they should.
And then of course, there are things like
racism and tribalism more generally, and speciesism, where we don't care as much about people who we think of as different from us, as not part of our us, right?
Or we don't trust them as much, or we're ready to believe lies about them much more easily, right?
And this is related to my main project these days, which is about bridging divides.
But I think understanding where our moral feelings come from can give us insight.
But to me, you can't understand the origins of all of this and the way they work and think, oh, we should just follow our intuitions.
They're always right.
It's a direct line to the moral truth.
Instead, we need to step back and think.
And maybe where that puts us is
a world in which we care more about other people, care more about other species, are willing to make certain sacrifices when necessary, but also listen to our hearts when they tell us that we're possibly doing something wrong.
But I think that self-knowledge is incredibly useful.
Well, we've created a donation platform that's supposed to help with this.
So, the trolley problem feels like an impossible dilemma.
There is no happy solution to the footbridge case.
Either you're letting more people die, four more people dead than necessary, or you're committing what feels like an act of murder.
And unless you sort of change the situation, you're stuck with that.
There is a similar kind of dilemma getting more into Peter Singer's zone, which is about where you give and how you give.
Most people, myself included, want to give from the heart.
They want to give to things that they feel personally connected to.
And if you love animals, that might mean giving to the local animal shelter.
Or if your grandmother died of breast cancer, you might want to give to a breast cancer charity.
And that makes sense, right?
And I want to support my local schools and food bank and things like this.
But the charities that actually do the most good are almost always not the ones that are closest to our heart.
And the difference between a typical charity, a typical good charity, and the charities that are most effective is enormous.
The difference between a really effective charity and a typical charity, it's like a redwood versus a shrub.
It's like a hundred times, or in some cases, like a thousand times.
Okay, like in a developed country like the US, paying to train a guide dog for someone who's blind or visually impaired costs like $50,000.
A surgery in other parts of the world that can prevent people from going blind due to a disease called trachoma can cost less than $100.
That can be something like 500 to 1,000 times difference in what you get from your money.
Now, this is not to say that we shouldn't support and care about blind people here, but
Surely we should take advantage of that opportunity if, you know, for the cost of training one guide dog, we can prevent 500 people or 1,000 people from going blind in the first place.
So huge differences.
Yeah.
It hurts to think about.
We have the solution because this is what I do.
My wife and I, we give to local charities and things like that that just we feel a personal connection to.
And then we do things like deworming treatments and vaccinating newborns in Nigeria and things like that.
And so I said, well, why don't we just ask people, instead of saying you should be giving to more effective charities instead of what you do, why don't you do both, right?
So we started running these experiments.
And sort of the basic setup for our first experiment is in one condition, it's the typical choice.
That is, you can pick your favorite charity or this charity that's recommended by experts.
So let's say it's the Deworm the World Initiative, where, you know, for a dollar, you can give a kid a deworming treatment, right?
And what we found is that, you know, most people like 80% of people or more would choose their personal favorite over the expert effectiveness recommendation.
That's the control condition.
And then in the experimental condition, we give people three choices.
You can give it all to your personal favorite.
You can give it all to the deworming charity that the experts recommend, or do a 50-50 split.
And what we found was that over half the people did the 50-50 split.
So more money ended up going to the highly effective charity when you gave people the option to split than when you forced them to choose.
And we did a bunch of experiments to try to understand the psychology of it.
And the gist of it is that when you give from the heart, it's not about how much you give.
The difference between giving $50 to the local animal shelter or $100, it feels more or less the same.
So if you give $50 instead of 100, then you've got another 50 bucks.
And what you can do with that 50 bucks is scratch a different itch, which is the itch to give smart and impactfully rather than, you know, only doing the thing you have the personal connection with.
So that's sort of the heart-versus-head psychology there.
And then we said, okay, well, what if we offer people an incentive and, you know, we'll add money on top.
And unsurprisingly, people did this even more.
It was like another 75 or 55% boost when we added money on top.
And then there's the question of, okay, where's that money going to come from?
So then we said, well, what if after people have made their choices, we say, hey, would you take the money that you were going to give to that super effective charity that you had just learned about and instead put it into a matching fund for other people?
We found that a lot of people would do that.
So we're like, huh, the math seems to add up.
Let's give this a try.
So Lucius and his developer friend, Fabio Kuhn, created this website called Giving Multiplier.
And if you're home, you can Google Giving Multiplier.
You'll see it come up.
And it allows you to do what we do in this experiment.
But the gist is you pick your favorite charity, any registered 501c3 in the US.
There's a little search field there.
And you say, OK, I'm going to give to
you know, whatever it is, my local animal shelter, which I know the name of, right?
And then you say, okay, here are the 10 charities that we are supporting that are super effective.
And it's things like distributing malaria nets or other malaria treatments or the deworming charity that I mentioned.
and other things.
And then we have this cool little slider thing.
And depending on how you allocate your money between the one that you chose and the one that's from our list, we add money on top.
And we add more money on top the more you give to the highly effective charity.
And if you have a code for that, then you get a little bit higher matching funds.
And I will shamelessly say that for listeners of this podcast, if you put in ologies as your code, then you get a higher matching rate.
Oh, that's amazing.
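[Editor's note: to make the matching mechanic described above concrete, here is a small Python sketch of how an allocation-plus-matching scheme of this general shape might work. The function name, the rate numbers, and the linear matching rule are hypothetical illustrations, not Giving Multiplier's actual formula.]

```python
def allocate(donation, effective_share, base_rate=0.10, max_rate=0.50):
    """Split a donation between a personal-favorite charity and an
    expert-recommended effective charity, adding hypothetical matching
    funds that grow with the share sent to the effective charity.

    effective_share: fraction (0.0 to 1.0) going to the effective charity,
    i.e. the slider position. Rates here are made up for illustration.
    """
    assert 0.0 <= effective_share <= 1.0
    favorite = donation * (1 - effective_share)
    effective = donation * effective_share
    # Matching rate scales linearly with the effective-charity share,
    # so giving more to the effective charity earns a bigger top-up.
    rate = base_rate + (max_rate - base_rate) * effective_share
    match = effective * rate
    return {"favorite": favorite,
            "effective": effective + match,
            "match_added": match}

# A 50-50 split of $100 at these illustrative rates.
result = allocate(100, 0.5)
```

Under these made-up rates, a 50-50 split of $100 sends $50 to the favorite charity and about $65 to the effective one, including $15 of matching, which is the incentive structure the experiments describe: splitting is never punished, and leaning toward the effective charity is rewarded.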
So we launched this in 2020.
And, you know, we would have been happy if we raised a little bit more than a bake sale.
But it went really well.
And long and short of it is we've been doing this for four years.
We've raised over $4 million.
Oh, my God.
And over $2 million of that has gone to super-duper effective charities.
So we're talking about, just from malaria nets alone, saving dozens of people's lives.
thousands of children who've gotten deworming treatments as a result of this, hundreds of thousands of dollars in direct cash transfers that people in poor countries can use to put a better roof over their head or start a business and things like that.
And this cycle has been going now for four years and saved lots of lives and raised millions of dollars.
And it started with that little experiment and really started with kind of the inspiration from Peter Singer, the philosopher, whose ideas really matter.
I would not complain if that's not too self-serving.
Yeah, that would be great.
If you want to put money in the matching fund, that would be awesome.
Yeah.
And what you can do is make a tiny donation now, like put in 10 bucks, try it out, see how it works.
And then when it's holiday time and you're ready to like really do your good deeds for the year, we'll get back in touch with you and we'll have some good offers.
Please.
So let's talk about what is religion?
Why does it even exist?
And the earliest things that we might recognize as religions, what they did was they often provided explanations for things that we didn't understand.
Why does this bright ball of fire go around the world once every 24 hours and things like that?
And also kind of answered questions like, well, what happens to people when they die?
And sometimes I hear voices.
Is that people talking to me from some other realm?
And it's kind of explaining the unexplainable.
And then there was this kind of transition.
And the story I'm telling here, I should say, comes from sort of brilliant analyses by researchers like Ara Norenzayan and Joe Henrich and others.
So this is not just coming from me.
there was this kind of invention of big gods, right?
And what they mean by big gods is gods that know everything about what people do and what they're really thinking and why they do it and care that humans treat each other well.
And what that combination does is it provides a kind of guarantee of cooperation because if I'm a Muslim and you're a Muslim, even if we've never met before, if we both know that we are, you know, faithful,
then I know that I can trust you and you can trust me because we both know what'll happen to us if we disappoint God in the way that we behave.
And so religion has been a way to scale up cooperation and build trust among strangers, right?
And this had an enormous effect on people's ability to trade with other people and exchange technology.
So it is a social technology, right?
And what religions do, it's a double-edged sword.
It makes people more cooperative within the group.
But the religion is out there competing for resources and souls or whatever it is with other groups that are either religious or not.
And so...
Religion can bind people together, but it can also divide people at the level of groups.
Now, there are some religions that have tried to move toward a more universalist perspective, the most straightforwardly so being the Unitarian Universalist movement, which to a lot of people doesn't even feel like a, quote, real religion, because it doesn't have that kind of strong metaphysics and us-ness of other religions, right?
So religion is, I think, fundamentally about cooperation.
within its scope, which can be either very narrow or fairly big, and in some cases, even cover the whole world.
So it's a cultural invention, and it's a set of things that influence us emotionally, all of those rituals, all of those prayers, all of those parties, all of those dances, all of that stuff
binds people together and makes them feel like a cohesive cooperative unit, but often at the cost of making other people feel more distant, right?
So for those of us who want to see a sort of maximally wide and inclusive world, then religion is both an opportunity and a challenge.
So what I would say is morality first is not like a thing in the brain.
Like on Star Trek: The Next Generation. I'm showing my age here, but maybe the kids know this.
Oh, I love that one.
Oh my gosh, it's the best.
During the pandemic, my family, we watched all seven seasons just straight through.
It was great; that was our religion, basically.
And so you have Commander Data, and he has his ethics module that was added to him so that he wouldn't be like his evil twin brother, Lore.
But morality for humans is not a module, right?
It's really our sort of whole social, emotional intelligence complex.
What you see as kind of naturally arising out of human experience is certain basic cooperative tendencies.
That's a lesson you probably learn as a toddler if you don't turn out to be a psychopath, right?
That like toddlers are pretty violent.
Like if they were eight feet tall, we'd be in trouble.
And then you learn like you're not allowed to behave that way and you internalize that, right?
So certain basic feelings about physical violence, lying, stealing, the stuff that like this group ain't gonna work unless we have certain boundaries that we've emotionally internalized.
And then a sense of who's in and who's out with varying degrees.
Who do you owe things to?
When you have food, who do you have to share it with?
Is it no one?
Is it just your immediate family?
Is it everyone in the village, right?
So it's more like language in a sense.
Like, is language innate or not?
Well, no one comes out of the womb speaking Mandarin Chinese in the way that a gazelle might come out of the womb and be able to walk pretty soon, right?
You have to be exposed to Mandarin in order to learn it.
But you can speak Mandarin to a chimp all day long and they're not going to learn it, right?
So what we have is an innate capacity to acquire morality.
But you don't just acquire morality like it's one thing.
You acquire different flavors, different versions of it in the same way that humans have genetic adaptations that enable us to acquire language.
But language is not innate in the sense that we pop out talking.
We have to be exposed to it.
So it's a genetic predisposition to learn.
Right.
This has been a hot topic for a while.
Okay.
And there have been different versions of this.
People first went to trolleyology when they started thinking about self-driving cars.
And then there are people who kind of said, that's never gonna happen.
When are you ever headed towards five people and then you can turn it on to one person, right?
And I think people were sort of taking the problem a little literally, but the more realistic version of this are things like this.
So you're driving along on a two lane road and there's a cyclist in front of you going, you know, pretty slow by car driving standards.
Right.
And you could maybe swerve around them, but there's traffic coming the other direction.
When is it okay to swerve?
How close can you get to that cyclist?
How much time do you feel like you have to give yourself?
How far away does that oncoming truck have to be, right?
Everyone who drives, you can drive nicely, you can drive like an aggressive jerk.
No one can avoid that question, right?
And what it really means to drive like a jerk is you are taking too many risks
especially with other people's wellbeing, but maybe also with yourself.
So autonomous driving trolley problems are not these stark choices where there are exactly two options.
So it's a more fluid kind of thing, but it's the same underlying tensions, right?
And one of the key tensions here is between the wellbeing of the individuals in the car versus those who might be outside the car.
There was this kind of now infamous episode where a Mercedes-Benz executive was asked, will these new autonomous Mercedes that you're developing...
Will they privilege the riders?
And the executive said, well, yes, because, you know, at least, you know, you can save the people in the car.
So you should save the people in the car.
Right.
But then people pushed back and were like, oh, so you've got these, you know, basically bigoted cars that are going to only care about the people inside them.
And then Mercedes said, no, no, no, no, no.
That's not what we mean.
No car should ever make any value judgments at all.
which of course is actually impossible, but good PR, right?
But critically, someone who's inside the car might be much more protected than a pedestrian or a cyclist.
So cars have to deal with this stuff.
Now, it's pretty clear that we're not going to be able to solve these problems with
a hard and fast set of rules.
So like if you're giving the car, let's say, simulations and it does different things, let's say it swerves around that cyclist and it doesn't hit the cyclist, but almost does.
Is that a win or a lose?
Is that something you want to reinforce with your machine learning algorithm?
Or is that something you want to dissuade?
So there are value judgments that are made in training.
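To make that last point concrete, here is a purely illustrative toy: a reward function for scoring one simulated maneuver, like swerving around that cyclist. The function name, the clearance threshold, and the penalty weights are all made up for illustration; they are exactly the kind of "value judgments" a training team would have to choose, not real autonomous-vehicle parameters.

```python
# Illustrative only: a toy reward function for one simulated driving maneuver.
# The safe_margin cutoff and penalty weights are hypothetical value judgments,
# not parameters from any real autonomous-driving system.

def rollout_reward(collided: bool, min_clearance_m: float, progress_m: float) -> float:
    """Score one simulated maneuver, e.g. swerving around a cyclist."""
    if collided:
        return -1000.0                 # hitting someone is heavily penalized
    reward = progress_m                # reward for making forward progress
    safe_margin = 1.5                  # chosen cutoff: how close is "too close"?
    if min_clearance_m < safe_margin:
        # A near miss is penalized in proportion to how tight it was,
        # so the algorithm is dissuaded from barely-safe swerves.
        reward -= 50.0 * (safe_margin - min_clearance_m)
    return reward
```

Whether that near miss counts as a win or a loss depends entirely on where `safe_margin` sits and how heavy the penalty is, which is the value judgment being made in training.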
Yeah, so this is not research that I've done, but I can tell you, people have looked at lots of different
neurodivergent conditions, some of which would go under the heading of psychopathology and others not.
You mentioned psychopathy.
This is something that's been studied, and what you find is that people who have been diagnosed with psychopathy are more likely to say that it's okay to push the guy off the footbridge,
that they're more likely to give that sort of utilitarian response.
But we don't think it's because they care more about the greater good.
I think it's because they don't have that emotional response that goes, ah, don't push, you know, don't hurt people, don't commit acts of violence, right?
As a parallel to that, this is work done as an undergrad thesis by a student named Xin Shang, who was interested in this stuff.
And she thought, hmm, I wonder how Buddhist monks would respond.
Because there were sort of some teachings that would suggest that Buddhist monks might actually kind of make more of a utilitarian response.
But if you ask most people, would a Buddhist monk push someone off the footbridge?
They'd say, no, of course not.
Buddhist monks are very pious, good people, right?
So she went to the mountain city of Lhasa and interviewed, I think, 48 Buddhist monks.
And something over 80% of them said that it would be okay to push the guy off the footbridge.
And when you ask them why, many of them cited this sutra, this teaching, which describes a sort of advanced, sort of almost enlightened being who...
was in a situation where there was a murderer who was going to cause a lot of harm.
And the only way to stop them was to kill them.
And the person killed them, not out of malice or hatred, but to prevent this harm.
And actually with the expectation that it would be karmically harmful for himself.
But because he did it with that pure intention of promoting the greater good, then that was sort of part of his path to being a bodhisattva and enlightened being on earth.
And that's what we found with Buddhist monks.
So you've got psychopaths and Buddhist monks, both...
giving the utilitarian judgment.
And what that tells you is that the same response can be given for very different reasons.
For the psychopaths, it's because they didn't have the emotional voice in their head saying, don't do that.
And for the Buddhist monks, they have that voice, but they can also cultivate a kind of compassion.
They say, but what about the other five people?
And I hear both voices, but at least you're saving more lives.
And so they would say it's acceptable.
They did say about this sutra that this sutra is like not for the little kids.
Like you don't want people going around thinking that they can commit murder if they think it's for the greater good.
So you need to kind of have those guardrails.
Other people have studied other types of conditions.
So there's a condition called alexithymia, which involves people not having good access to their own emotional states.
And those people are more likely to give a utilitarian response.
I mentioned earlier patients with different types of brain damage.
So patients with damage to the basolateral amygdala or the hippocampus are more likely to say that it's wrong to push the guy off the footbridge, et cetera.
I will say anecdotally that among people who take a strong utilitarian stance
in their own lives, there's a higher incidence of people who are on the autism spectrum.
And some people have sort of posthumously diagnosed Jeremy Bentham, the founder of utilitarianism, as having autism spectrum disorder or being neurodivergent in that way, as you might say.
And the thought is that the utilitarian calculation is available to reasoning.
One notable thing about Bentham, he was one of the first people in the Western philosophical tradition to advocate for what we now call gay rights.
He wrote a paper in the late 18th century, so late 1700s,
arguing that, you know, from the principle of utility, from the idea that, like, is this actually causing any harm, that maybe there's nothing wrong with two men, you know, having a sexual relationship.
And that was...
like insane, you know, culturally at his time, right?
But he applied his principle in an impartial way.
And in my view, sort of jumped ahead two centuries in moral thinking.
Let's suppose he was neurodivergent.
I would imagine that that would contribute to his capacity to see that because if you're very tuned into the social world and what other people will think of you,
you might be less willing to reach that conclusion and put pen to paper with that.
Whereas if you're a bit sort of socially detached, but you've got a good thinking reasoning brain, then you might get there.
So it's a nice case where at least it's possible that his neurodivergence, if that's in fact what he had, was a kind of philosophical strength.
Well, so, I mean, there's no agreement about this.
Yeah, yeah, I know, but... I don't like the word utilitarian.
I prefer to call myself a deep pragmatist, which I think better captures sort of my philosophical orientation.
But no, this is highly controversial.
And in fact, part of the reason why I and other people have spent so much time on these trolley dilemmas is because these are objections to utilitarianism.
That I kind of had the thought, yeah, greater good, that makes sense.
But then we had the debater asking me, is it okay to kill one person and give their organs to five other people?
Those are very salient objections.
And I wanted to understand the objections.
A kind of unfortunate side effect of all of this is that I was studying the Footbridge case
Because it's utilitarianism at its least appealing, right?
And then this work really took off.
And then people started associating utilitarianism with pushing people off of footbridges, which is really kind of like addressing the most salient objections to it, but not the heart of it, right?
So I want people to associate utilitarianism with providing opportunities in healthcare to poor people around the world and making sure that we're not torturing animals unnecessarily and things like that.
To me and to Peter Singer, that's what it's really about.
But so much focus...
partly due to me, has been on these kind of horror cases, but it was done out of a kind of intellectual integrity.
That is, if this is going to be your philosophy, you need to defend it at its least appealing.
And so that's how we got there, right?
So an unfortunate sort of cultural narrative side effect of doing all of this, but I hope at least your listeners will understand that it really is about making the world better and not about shoving people in front of speeding trolleys.
I've played.
In fact, we have a copy of it in my lab where I'm sitting.
Yeah, we've done this at lab parties and other things.
It's better than you'd think.
Yeah.
Yeah, it's pretty fun.
Oh, I thought it was great.
They did a brilliant, amazing job with, you know, just demonstrating it.
And I actually did a little bit of consulting for them for a later season, but not for the original Trolley episode.
No, I love the way that they dramatized it.
Although it's interesting, and this is sort of part of the pop culture meme of Trolley is really just the Switch case.
And it's often just kind of a platform for like, in the board game, like, it's just, do you care more about this or more about that?
Would you kill your dog in order to save 10 people?
What about Hitler?
But as a cognitive scientist and neuroscientist, the really interesting thing is the contrast between the switch case and the footbridge case.
But they kind of managed to make the switch case very footbridgey by kind of really sort of dramatizing the horror of running somebody over.
And then my other favorite thing about that is there's a movie theater in the background on the street.
And the movie that's showing is called Bend It Like Bentham, which I thought was absolutely brilliant.
So people have looked at this with the trolley stuff.
And my recollection is that there's a trend where people who are more politically conservative are more likely to say that it's wrong to push the guy off the footbridge, et cetera.
But it's not a very strong case.
I think the much more salient thing is that there's much more here that we have in common than divides us.
That...
Your typical conservative and your typical liberal are going to be feeling the same internal tension about these things, thinking it's better for five people to be alive and one dead than the reverse.
And it sure does feel wrong to push somebody off of a footbridge.
Like that's universal, right?
And then there's like a little bit of a trend.
And maybe it's because people who are more conservative or more religious, they're more likely to trust their intuitions and less likely to kind of question it.
Although I do love the story about the monks.
Yeah.
So this is the big one.
And we've got the paper we've been working on for five years that's just coming out in Nature Human Behaviour.
We're very excited about this and really sort of building this stuff out.
Okay.
So we got to back up a little bit.
You mentioned my book, Moral Tribes.
That came out about 10 years ago.
This was me trying to put together all the philosophy that I've been thinking about and all the science.
It was a successful book, but it didn't spark a global philosophical revolution.
Bummer.
I was like, well, what?
Maybe I had the wrong theory of change.
I started asking myself, instead of trying to
unite the world's tribes or reduce tensions between different peoples by getting everybody to agree on a philosophical outlook, maybe we can work on people's thoughts and feelings in a more direct kind of way.
On the biological side, everything around us is ultimately about mutually beneficial cooperation.
I mean like molecules come together to form cells.
Cells come together to form colonies and multicellular organisms.
And organisms have organs that cooperate.
And individuals cooperate with each other in small groups, in tribes, in chiefdoms, in nations, occasionally in United Nations.
Every level, from molecules up to nations in the UN, it's about parts coming together that can accomplish more together than they can accomplish separately, and that's why they work.
Now, it's not all unicorns and rainbows; the units are competing with each other at each level.
So it's cooperation and competition at increasing levels of complexity.
That, in a nutshell, is the story of life on Earth.
And so, from a biological point of view, the way to bring people together is to have them be on the same team.
Social scientists, and I'm partly a social scientist in addition to being a philosopher, reached the same conclusion.
And this goes back to Gordon Allport, who sat in the building that I sit in now in the 1950s and developed what's called the contact hypothesis, which is the idea that if you want to break down barriers between races, between people of different religions, they need to
be in touch with each other, and it needs to be in a cooperative sort of way.
That's not exactly how he put it.
His contemporaries, Sherif and Sherif, explicitly talked about kids at a summer camp who were either put on separate teams and made to fight with each other, or had to team up to pull the truck out of the mud, and came to the same conclusion there, sort of compatible with this idea.
So I was like, okay, if the biologists and the social scientists have all known that cooperation, teamwork, like this is the heart of it, why haven't we solved this yet?
And there have been lots of
historical cases where this point has been made.
So when people in World War II were first talking about integrating the US military, a lot of people said, you will never get white people and black people to fight in the same unit.
And they had people making these dire predictions of like people turning on each other in the ranks, right?
And instead what they found was that these integrated units worked great.
And it really changed people's racial attitudes.
Because when you put people on the same team and their lives are at stake and there's a job to get done, they not only do the job, but they become like brothers, right?
Or sisters, as the case may be.
and in sports and things like that.
And in some sense, every modern city where people from different religions and backgrounds and races come together and work together is testament to this idea that people's attitudes shift through working together, through cooperation, either sort of tacitly or very directly, like in the same job.
So it's like, okay, so why haven't we solved this in some more systematic way?
So we got to thinking like,
What do you have to do?
Well, you need something that works and we think mutually beneficial cooperation, teamwork is the key.
It's gotta be done in a way that's scalable.
And today that really means digital, right?
And it needs to be something that people are motivated to do, which you can think of as fun, right?
And we said, okay, to me, the center of that Venn diagram is a quiz game.
And so we created this quiz game.
Our first work was on Republicans and Democrats, where we would say, okay, we're going to pair up a Republican and a Democrat together.
We have them answer these quiz questions, and they are connected by chat and they're in the same boat.
They have the same score.
They both win money together or lose money together, depending on their answers.
They have to agree on an answer and submit it.
Right.
And they get the money if they're right and they lose it if they're wrong.
And if they give different answers, then they're automatically wrong.
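The scoring rule just described can be sketched in a few lines. This is a minimal illustration of the game's logic as he states it, with a made-up function name and stake amount; it is not code from the actual Tango game.

```python
# A minimal sketch of the quiz game's scoring rule as described: the pair
# shares one score, must agree on an answer, and wins or loses together.
# The function name and stake amount are hypothetical, for illustration only.

def score_round(answer_a: str, answer_b: str, correct: str, stake: float = 1.0) -> float:
    """Return the change to the pair's shared score for one question."""
    if answer_a != answer_b:
        return -stake      # different answers are automatically wrong
    if answer_a == correct:
        return +stake      # agree and right: both partners win together
    return -stake          # agree and wrong: both partners lose together
```

The design choice matters: because disagreement is scored as wrong, the only winning strategy is to talk it through and converge, which is what makes the game cooperative rather than competitive.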
And so we wanted questions that would really promote teamwork.
So we had sort of two types of questions like this.
So one type is kind of cultural questions where one side is more likely to know the answer than the other, right?
So I don't know.
I'll try you, Alie.
What's the name of the family on the show Duck Dynasty?
Do you know?
Yeah.
Yeah.
It's Robertson.
Not the Duggars.
Yeah, sorry.
I didn't mean to jump the gun there.
But a lot of Republicans, they're more likely to know the answer to that question.
We sort of figured out what are the foods and movies and TV shows that Republicans are more likely to watch or know about, and the ones liberals are, right?
Questions like that.
And then we have questions that are kind of more political in nature.
You ask about things like rates of crime among immigrants.
Conservatives are more likely to think that it's very high.
Democrats are more likely to think, or liberals are more likely to think it's low.
And in that case, the liberals are right.
And so we have these questions where everybody gets to be right and everybody gets to be wrong, and not everybody answers always in keeping with their stereotypes.
But on average, people play.
They have the experience of, I know some things, and there are some things I don't know, and my partner knows some things that I don't know.
They report having higher levels of respect for the other side.
They say both liberals and conservatives can make valid points, and they're more open to leaders who support political compromise.
And some of these effects...
We test people the next day, the next week, the next month, four months later.
Some of these effects we see lasting four months from playing the game once.
And in all the ones we've done so far, we find that when people do this even for 20 minutes, first of all, people enjoy it and we see positive effects at least immediately.
And we're starting to do work with employees at businesses.
We're starting to do work with Jews and Arabs in Israel.
We're building a game for Hindus and Muslims in India and for people of Catholic and Protestant descent in Northern Ireland.
Wherever there's an us versus them kind of conflict or tension,
put people on the same team and let them have that cooperative experience.
And we are now working on bringing this out into the world.
So if you go to letstango.org, you can sign up, you can put your email in there and we'll let you know when there's a game that you can join.
And I hope we'll get to the point where we can have games going on all the time.
You know, the research suggests that this is a way to bring people together and there's no limit.
I hope that a year or two from now we'll have had millions of people have this positive experience of cooperating with people with whom they disagree politically.
Also, anyone who wants to throw money at this, please let me know.
We need to build this out.
I think it would be a huge change in our politics.
There will be people who have conservative worldviews and liberal worldviews and people who are religious and people who are not.
That's always going to be there.
But what doesn't have to be there is the sense that
those other people are untrustworthy and unworthy of respect and that we can't share power with them because of the terrible things they'll do if they're given the chance, right?
And it's that conspiratorial sense that they're an untrustworthy other undermining me and my people and what this country is supposed to be.
That's what's so toxic.
And we can have our disagreements about policy issues, deep, important, powerful disagreements, but not have that
And that's what I hope that the game, when it gets out there, will do.
Strangely, not that much changed after January 6th or January 20th of this year.
We've been a divided country for a while.
I mean, this is not new, this sense that we have different conceptions of who counts as us.
So what's happened in recent years is different.
And you do see it in the numbers, like polarization and distrust, it's up.
And the good news is that the game, it's not an overnight fix, but it moves the needle.
People play the game once, we see effects four months later.
And there's no reason why you can't play it lots of times.
Impatience, you know?
Okay.
I feel like...
It just takes so long to learn things and to solve problems.
Academics are great at slowing things down.
I mean, we have institutional review boards.
We might have the answers, not me personally, but collectively, we might have the answers.
And what if we don't get there in time?
What if the complete collapse of American democracy or the nuclear war or whatever it is comes before we figure out not just the basic science but, as I said, the engineering problem?
Like we have to figure out
how to build these things.
I feel like to me, it's just, gosh, how can we solve these problems as fast as possible?
That's what kind of just drives me nuts.
I mean, it's just an unbelievable privilege to work with the people that I work with.
The paper that we're about to publish, this is with a graduate student named Lucas Woodley, who's next door, and who is just a joy to work with.
And Evan DeFilippis was the grad student who started this work.
He's amazing.
And Shankar Ravi is our tech guy on this, and he's been great.
So working with these people and the people of the Global Development Incubator
Andrew Stern is the head of it, and Therese Semple Smith has been working on this for a long time.
I don't want to go down the whole list, but like, it's great.
And also just like the privilege of being able to spend my life, like what I get to do for my job is to try to, as a scientist, sort of unlock the mysteries of the mind.
And then as a kind of applied scientist, try to bring those lessons out into the world.
I mean, I just feel so lucky that I get to do this.
And I just like, I just don't want to miss any, any opportunity, you know?
So yeah, so I'm just incredibly lucky that way.
So if people are interested in this stuff, if you want to give in a way that supports your favorite charities but also can have huge impact, then go to Giving Multiplier and use Ologies as your code, and you get extra matching funds.
And if you want to participate in...
helping unite the United States and sort of foster mutual trust and respect, then please go to letstango.org.
You can sign up to play games and try to bring along people who are different from you so that we get all of America playing this game.
That's perfect.
That's perfect.
Thank you.
Thank you.
This is such a joy.