Earlier this week, Chris Urmson, chief of Google’s self-driving cars project, made a pretty big mistake for someone so high up at Google: he dismissed philosophers and the trolley problem as irrelevant to self-driving cars.
Now, people dismissing philosophers as irrelevant isn’t terribly unusual (see: Pinker, deGrasse Tyson, Dawkins, et al). But it is a bit unusual to see Google make such a novice mistake, especially via a representative so highly (and publicly) placed in the company. Speaking at the Volpe National Transportation Systems Center in Cambridge, Massachusetts, Urmson said:
It’s a fun problem for philosophers to think about, but in real time, humans don’t do that. There’s some kind of reaction that happens. It may be the one that they look back on and say I was proud of, or it may just be what happened in the moment.
It’s pretty rare to see someone so clearly make a mistake like this, but I’m not one to make light of gift horses. So yes, technically, Urmson is right: no one (not even those useless philosophers) thinks that your mother is carefully evaluating all options when she slams on her brakes and throws her arm across your chest, pinning you to your seat, even though you’re 32 years old. No one actually thinks a driver is reasoning, “hmm, that moose is headed right for me, but there are a bunch of kids waiting at the bus stop where I’d normally veer, so should I go ahead and hit that moose and avoid the kids, or should I swerve towards the kids and hope I can miss them all?” and then weighing out costs, risks, benefits, and so on. That’s just silly.
Furthermore, as Ernesto Guevara noted on Twitter, while Urmson is quick to dismiss philosophy and ethical decision-making, he goes on to discuss the engineered decisions on who a Google car will try to avoid hitting: vulnerable road users (think pedestrians and cyclists), then other vehicles on the road, and lastly things that don’t move. Hmm. A hierarchy of decisions about who a car will or will not hit… sounds an awful lot like moral decision-making to me.
Which brings us to what the trolley problem is: it’s an acknowledgment of the fact that people make these decisions, not an example of how people make these decisions. By trying to figure out how people react to choosing to hit one person vs five, or pushing a person to their death vs pulling a lever,¹ and other permutations of the thought experiment,² philosophers are trying to untangle the instantaneous “some kind of reaction” that Urmson so vaguely dismisses.
Yes, drivers just react in the moment, but the reactions a driver makes—and how we think about them when faced with thought experiments—are reflections of values, culture, and society. Trying to discern the nuances of those values, and how we arrive at them, is at the heart of the trolley problem. For those building self-driving cars, it seems like a pretty good idea to understand why a mother might throw her arm over her child and swerve to the right (in order to protect her child and take the impact of an oncoming car herself), while a teenager swerving to avoid a dog that darts into the street might end up hitting or endangering people playing on the sidewalk instead.
That Urmson thinks philosophers believe people make these kinds of decisions on the fly—and that this is what the trolley problem is about—highlights his lack of familiarity with not just the trolley problem but applied philosophy in general. That should worry anyone watching not only the development of self-driving cars, but Google’s continued branching out into all aspects of our increasingly wired lives.
Is the trolley problem always about life and death decisions?
What I recall about those few instances of the protective out-flung arm is that I was simultaneously braking, with only my left hand on the wheel. Had I needed to swerve (I don’t remember if I did), swerving to my right would have been more difficult and dependent upon the position of my hand on the wheel. I do know I was reacting, not thinking through a set of options.
In Neal Stephenson’s newest novel, Seveneves, he writes about a number of choices people make and the values that underlie them. I recommend it and would love to talk with you about one set in particular, involving child bearing in his speculative fiction narrative, where he seems to have overlooked a bunch of risk factors.
In general the trolley problem is set up around gruesome death scenarios—“the fat man” is always a fun one to run through undergraduates*—but there are other, somewhat overplayed thought experiments that hope to tease out other choices. For example, most people think that kidnapping and chloroforming a woman, then hooking her up to a famous violinist who needs another person to function as living dialysis, is a bad no good thing to do to the woman, and that she should be able to unhook herself if she doesn’t want to support and sustain that famous violinist for any period of time. Yet we find that many folks who identify as pro-choice will say it’s wrong to unhook yourself—they place emphasis on the fact that the violinist is a sentient, self-aware being—while other people who identify as anti-choice will say the woman should be able to unhook herself as a matter of personal liberty, even though the situation is analogous to abortion.
You might have to sweet talk me to get me to voluntarily read any more Stephenson (I’ve dented more than one wall throwing his books in frustration), but I’m always up for a chat—you know my (gmail) address! 🙂
*The Fat Man generally starts out by saying that instead of being able to flip a lever when you’re next to the tracks, you’re above the trolley tracks. The first version stays the same: you can push a lever to move the trolley away from the five victims towards the single, often infant, victim. The second version changes things: you have a single track that you’re above, and a fat man is next to you. The trolley is racing along towards a person, often a baby. The only way to stop the trolley is to put something really big and heavy in front of it. There’s a lever next to you that will open a trap door and drop the fat man in front of the trolley, killing him but saving the baby. Do you push the lever? The next scenario removes the lever but suggests pushing the fat man off the overpass and onto the tracks. Etc and so forth.
Did you mean to say “pro-choice” in your violinist dialysis example above? I’m confused if so.
I did mean pro-choice, but I poorly worded the sentence. I’ve rephrased things to hopefully make it clearer. (We do find that pro-choice people can be anti-disconnecting from the violinist because of the agency and full human being-ness of the violinist, and we find that anti-choice people often place more emphasis on the personal rights violation of the woman who was kidnapped, and too bad so sad for the violinist.)