Google demonstrates people dismiss philosophy when they don’t understand it
Earlier this week, Chris Urmson, chief of Google's self-driving car project, made a pretty big mistake for someone so high up at Google: he dismissed philosophers and the trolley problem as irrelevant to self-driving cars.
Now, people dismissing philosophers as irrelevant isn't terribly unusual (see: Pinker, deGrasse Tyson, Dawkins, et al.). But it is a bit unusual to see Google make such a novice mistake, especially through a representative so highly (and publicly) placed in the company. Speaking at the Volpe National Transportation Systems Center in Cambridge, Massachusetts, Urmson said:
It’s a fun problem for philosophers to think about, but in real time, humans don’t do that. There’s some kind of reaction that happens. It may be the one that they look back on and say I was proud of, or it may just be what happened in the moment.
It's pretty rare to see someone so clearly make a mistake like this, but I'm not one to look a gift horse in the mouth. So yes, technically, Urmson is right: no one (not even those useless philosophers) thinks that your mother is carefully evaluating all options when she slams on her brakes and throws her arm across your chest, pinning you to your seat, even though you're 32 years old. No one actually thinks a driver is thinking "hmm, that moose is headed right for me, but there are a bunch of kids waiting at the bus stop where I'd normally veer, so should I go ahead and hit that moose and avoid the kids, or should I swerve towards the kids and hope I can miss them all?" and then weighing out costs, risks, benefits, and so on. That's just silly.
Furthermore, as Ernesto Guevara noted on Twitter, while Urmson is quick to dismiss philosophy and ethical decision-making, he goes on to describe the engineered decisions about whom a Google car will try to avoid hitting: vulnerable road users first (think pedestrians and cyclists), then other vehicles on the road, and lastly things that don't move. Hmm. A hierarchy of decisions about who a car will or will not hit… sounds an awful lot like moral decision-making to me.
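That ordering is itself a value judgment baked into software. As a rough illustration only (the class names, the trajectory interface, and the tie-breaking rule below are my own assumptions, not anything Google has published), a naïve version of such a hierarchy might look like this:

```python
# A minimal sketch of the kind of priority ordering Urmson describes:
# avoid vulnerable road users first, then other vehicles, then static objects.
# All names and structure here are illustrative assumptions, not Google's code.

from enum import IntEnum

class ObstacleClass(IntEnum):
    # Lower value = more serious to hit
    VULNERABLE_ROAD_USER = 0   # pedestrians, cyclists
    VEHICLE = 1                # other cars, trucks
    STATIC_OBJECT = 2          # things that don't move

def choose_trajectory(candidate_trajectories):
    """Pick the trajectory whose worst potential collision is least severe.

    `candidate_trajectories` is assumed to be a list of
    (trajectory, obstacles_hit) pairs, where obstacles_hit is a list of
    ObstacleClass values that trajectory would collide with.
    """
    def worst_collision(obstacles_hit):
        # No collision at all is better than any collision.
        return min(obstacles_hit) if obstacles_hit else float("inf")

    # Prefer trajectories that avoid collisions entirely; otherwise prefer
    # the one whose most serious collision sits lowest in the hierarchy.
    return max(candidate_trajectories, key=lambda tc: worst_collision(tc[1]))
```

Even in a toy sketch like this, someone had to decide that a cyclist outranks a parked car, and that a collision with "things that don't move" is the acceptable fallback. Those are exactly the kinds of judgments the trolley problem exists to examine.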
Which brings us to what the trolley problem actually is: an acknowledgment that people make these decisions, not an example of how people make them. By trying to figure out how people react to choosing to hit one person vs five, or pushing a person to their death vs pulling a lever1, and other permutations of the thought experiment,2 philosophers are trying to detangle the instantaneous "some kind of reaction" that Urmson so vaguely dismisses.
Yes, drivers just react in the moment, but the reactions a driver makes, and how we think about them when faced with thought experiments, are reflections of values, culture, and society. Trying to discern the nuances of those values, and how we arrive at them, is at the heart of the trolley problem. For those who are building self-driving cars, it seems like a pretty good idea to understand why a mother might throw her arm over her child and swerve to the right (protecting her child and taking the impact of an oncoming car herself), while a teenager might swerve to avoid a dog that darts into the street and end up hitting, or endangering, the people playing on the sidewalk instead.
That Urmson thinks philosophers believe people make these kinds of decisions on the fly, and that this is what the trolley problem is about, highlights his lack of familiarity not just with the trolley problem but with applied philosophy as a whole. That should worry anyone watching not only the development of self-driving cars, but Google's continued branching out into all aspects of our increasingly wired lives.