Google demonstrates people dismiss philosophy when they don’t understand it


This is pretty much how I generally feel about the trolley problem, and it makes me cranky to end up defending it so often. (Comic by xkcd.)


Earlier this week, Chris Urmson, chief of Google’s self-driving cars project, made a pretty big mistake for someone so high up at Google: he dismissed philosophers and the trolley problem as irrelevant to self-driving cars.

Now, people dismissing philosophers as irrelevant isn’t terribly unusual (see: Pinker, deGrasse Tyson, Dawkins, et al). But it is a bit unusual to see Google make such a novice mistake, especially via a representative so highly (and publicly) placed in the company. Speaking at the Volpe National Transportation Systems Center in Cambridge, Massachusetts, Urmson said:

It’s a fun problem for philosophers to think about, but in real time, humans don’t do that. There’s some kind of reaction that happens. It may be the one that they look back on and say I was proud of, or it may just be what happened in the moment.

It’s pretty rare to see someone so clearly make a mistake like this, but I’m not one to look a gift horse in the mouth. So yes, technically, Urmson is right: no one (not even those useless philosophers) thinks that your mother is carefully evaluating all options when she slams on her brakes and throws her arm across your chest, pinning you to your seat, even though you’re 32 years old. No one actually thinks a driver is thinking “hmm, that moose is headed right for me, but there are a bunch of kids waiting at the bus stop where I’d normally veer, so should I go ahead and hit that moose and avoid the kids, or should I swerve towards the kids and hope I can miss them all?” and then weighing out costs, risks, benefits, and so on. That’s just silly.

Furthermore, as Ernesto Guevara noted on Twitter, while Urmson is quick to dismiss philosophy and ethical decision-making, he goes on to discuss the engineered decisions on who a Google car will try to avoid hitting: vulnerable road users (think pedestrians and cyclists), then other vehicles on the road, and lastly things that don’t move. Hmm. A hierarchy of decisions about who a car will or will not hit… sounds an awful lot like moral decision-making to me.
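To make the point concrete, here’s a minimal, purely hypothetical sketch (in Python) of what such an engineered avoidance hierarchy looks like once someone actually has to write it down. The category names, the ordering, and the tie-breaking are my assumptions based only on Urmson’s description, not anything Google has published.

```python
# A purely hypothetical sketch of the avoidance hierarchy Urmson describes.
# None of this is Google's actual code; the category names and their ordering
# are assumptions drawn only from his public remarks.

# Lower number = try hardest not to hit.
AVOIDANCE_PRIORITY = {
    "vulnerable_road_user": 0,  # pedestrians, cyclists
    "vehicle": 1,               # other cars on the road
    "static_object": 2,         # things that don't move
}

def choose_collision_target(obstacles):
    """If a collision is unavoidable, return the obstacle the planner would
    'prefer' to hit: the one in the lowest-priority-to-avoid category."""
    return max(obstacles, key=lambda o: AVOIDANCE_PRIORITY[o["category"]])

# Forced to choose between a cyclist and a parked car, this ranking
# sends the car into the parked car.
print(choose_collision_target([
    {"id": "cyclist-1", "category": "vulnerable_road_user"},
    {"id": "parked-car-1", "category": "static_object"},
]))
```

Even in a toy version like this, somebody had to choose that ordering; deciding who gets hit when everything goes wrong is exactly the territory the trolley problem maps.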

Which brings us to what the trolley problem is: an acknowledgment of the fact that people make these decisions, not an example of how people make them. By trying to figure out how people react to choosing to hit one person vs five, or pushing a person to their death vs pulling a lever1, and other permutations of the thought experiment,2 philosophers are trying to detangle the instantaneous “some kind of reaction” that Urmson so vaguely dismisses.

Yes, drivers just react in the moment, but the reactions a driver makes–and how we think about them when faced with thought experiments–are reflections of values, culture, and society. Trying to discern the nuances of those values, and how we arrive at them, is at the heart of the trolley problem. For those who are building self-driving cars, it seems like a pretty good idea to understand why a mother might throw her arm over her child and swerve to the right (to protect her child and take the impact of an oncoming car herself), while a teenager might swerve to avoid a dog that darts into the street and end up hitting or endangering the people playing on the sidewalk instead.

That Urmson thinks philosophers believe people make these kinds of decisions on the fly, and that this is what the trolley problem is about, highlights his unfamiliarity not just with the trolley problem but with applied philosophy as a whole–which should worry anyone watching not only the development of self-driving cars, but Google’s continued branching out into all aspects of our increasingly wired lives.


An Ebolanoia Anniversary–Or, The Emperor’s [Lack of] Disclosures

It’s the Ebolanoia anniversary! Over at Slate, Tara C. Smith takes us through a quick walk down memory lane, and the utterly outsized reactions and political theatre America went through a year ago: quarantines and threats and Daesh-licking doorknob villains, oh my.

One thing still sticks in my craw: the utterly ludicrous suggestion from respected epidemiologist Michael T. Osterholm that we were all just afraid to talk about Ebola becoming airborne, but that it was a real threat. Even though multiple well-respected virologists and Ebola experts immediately corrected Osterholm’s panic piece, the panic piece is what took on a life of its own, with other news outlets repeating him word-for-word–and few people questioning why such a respected epidemiologist would even propose such an outlandish thing, let alone in the pages of a New York Times op-ed rather than in a respected, peer-reviewed publication.

While it pains me to point this out, because Osterholm was quite complimentary of my anthrax- and NSABB-related posts, someone has to play the fool and point out the emperor has no clothes. Or in this case, the emperor has a pretty glaring conflict of interest, neatly laid out for all to see if they just take a look:
[Screenshot: CIDRAP’s underwriters page, captured October 6, 2015]

Do you see it?
[Screenshot: the same underwriters page, with 3M highlighted]

3M, a “leading underwriter” of CIDRAP, where Osterholm is (and has been) director, is also a leading manufacturer of N95 masks. The sort of mask used for personal protective equipment if you’re treating a patient with an airborne infectious disease. The sort of mask that is typically advocated by those who have more than a little paranoia when it comes to disease in general.1 The sort of mask all over this National Institute for Occupational Safety and Health website.

Look, I completely understand the need to fund journalism, and as a whole I really enjoy CIDRAP’s reporting.2 Having been in publishing and journalism for over a decade at this point, I know how necessary outside money is, and just about everyone knows there’s no love lost between me and the University of Minnesota in general. But when you accept funding from outside sources, you have to start thinking about how that funding influences what you think, support, advocate for, and write about. We know that it doesn’t take much to subtly, subconsciously, or consciously influence opinions, and major funding from a maker of the very masks that would block airborne Ebola? That’s a pretty big conflict of interest that should be disclosed in any “but what about mutations” panic discussions in the public sphere.3


Addendum
It’s been brought to my attention that Osterholm et al.’s mBio opinion piece, which I didn’t directly refer to here but waved a whole bunch of shade at, was amended in April of this year to “address” perceptions of conflict of interest. Unfortunately for CIDRAP and Osterholm et al., this attempt at damage control is pretty piss-poor. Their objection to being called out on the 3M conflict of interest boils down to what we’ve heard in other situations: the money goes into a giant pot at the university, we don’t know which of their dollars affect us, and besides, it’s unrestricted and they have no say!

Well. Except that if, per CIDRAP’s donation page, only 2% of their funding comes from the University proper, and they know who gives what to such a specific degree that they can list The Benson Foundation as a principal underwriter and 3M as a leading underwriter, then you can’t really say that “it just all goes into a pot and we don’t know which particular dollars 3M touched.” Because what you do know is that if 3M hadn’t touched a significant chunk of the money in that pot, it wouldn’t be there.

You, as an individual, know whether you have $30 or $100 in your wallet, and you definitely know if $70 of that $100 came from a particular place. Trying to claim that a business that runs on donated money has no operational knowledge of where that money comes from is insulting to basically everyone’s intelligence.

The mBio amendment also attempted to claim that since the piece doesn’t talk about respirators, respirators can’t possibly be relevant to a piece about fears of an airborne mutation. I leave this to the audience: do you think respirators are relevant, at all, to protection from airborne disease, even if not directly mentioned in an opinion piece? Hmm.

Look, it’s a common misunderstanding that noting a conflict of interest is akin to admitting guilt or bribery or corruption. It doesn’t have to be like this, and this perception exists in large part because so many people try to pass off their COI as no big deal. But the literature has shown, time and again, that it is a big deal, and that no one is immune to the influence that things as little as pens or as big as unrestricted checks can have on perceptions. If you-the-scientist want us-the-reader to give weight to your opinion paper saying that, say, Ebola might mutate to become airborne and ZOMG, then perhaps you-the-scientist should give weight to the multiple peer-reviewed papers that say your center’s funding presents a conflict of interest that requires disclosure.


Teens Think Up Ethically Questionable STI Detection Method

Because there’s nothing the media likes more than a good teen genius story (except maybe catching a politician with their pants down, figuratively or literally), the primary non-Supreme-Court-ruling-on-Obamacare story making the rounds right now is the S.T.EYE condom, developed by three teenage boys from the Isaac Newton Academy in London. The S.T.EYE changes colour* when it comes in contact with a sexually transmitted infection (hence the name), which is being heralded as ground-breaking, revolutionary, disruptive technology–of course, since this was a TeenTech entry and winner. One of the young inventors, Daanyaal Ali, 14, says that they created the S.T.EYE because they wanted “to make something that made detecting harmful STIs safer than ever before, so that people can take immediate action in the privacy of their own homes without the often-scary procedures at the doctors.” (As an aside, I find it fascinating that the teenagers want to reduce the embarrassment of going to clinics, and emphasize the “privacy of your own home” line so frequently used by DTC advocates, without stopping to consider a major hurdle: if you’re putting the condom on to have sex with someone(s) else, then you’re not alone, and we’re right back to the embarrassment–only this time it’s in the privacy of your own home while you’re naked in front of someone you want to have sex with. Which kind of sounds like it should be the beginning of an 80s movie starring Jon Cryer.)

This is an ambitious goal, and it’s laudable that teenagers are behind the idea; a validated, direct-to-consumer, at-home STI test that is inexpensive and accurate would be a great addition to public health. I’m not sure anyone can disagree with that. The problem here is in implementation: the teens envision their test being wrapped up in a condom, which means that at least one person’s STI test will be revealed after the act, rather than before. While you may think “great, post-exposure notice,” it’s not that simple. What happens, for example, to informed consent? You’re talking about revealing whether or not someone has an STI not just to themselves but to their partner(s)-at-the-moment; is there something that clearly identifies these as S.T.EYE condoms? Will it be impossible to miss that this condom your partner is providing (or that you provide a partner) will glow in the presence of an STI, regardless of who is infected? Will all parties need to sign a contract honoring the privacy of all participants before opening the condom? Will it come with a EULA-disguised-as-informed-consent? Remember, informed consent can’t be coercive–is right before having sex the bestest time in the world to assure non-coerced consent?

And other difficulties: how do you know who “triggered” the glow? Presumably it glows the same “uh-oh” colour regardless of which side of the condom the exposure occurred on. What if there are multiple partners? Are they considering impregnating dental dams with the same technology to include lesbians, or is this only for penis-based sex? Will the antibodies in the condom react to antigens in saliva? Can you still use lube? But, with the exception of exactly who this condom is being created for, those are largely technical issues, and they’ve been well-deconstructed elsewhere.

Autonomy to make decisions, the ability to consent after being given full information, privacy of medical information: these are all pretty basic medical ethics 101 concepts, and they’ve been ignored here. I certainly don’t fault a trio of teenagers for that–but I can and do fault the teachers who encouraged them to pursue this line of thought, as well as the people who awarded it as innovative (not to mention the non-critical journalists breathlessly reporting it).

Innovation is the lifeblood of the technology industry; I understand that. My father had his own technology business when I was a kid. I was part of the technology innovation industry for a decade, and played a role in disruptive medical technologies.1 I was raised in Silicon Valley before it was Silicon Valley; I do understand, and I have no real interest or desire to stifle creativity, innovation, or even disruption. The problem is that in the pursuit of “can” at all costs, “should” is being left in the dust. Should we develop condoms that glow on exposure to STIs? Maybe not. Why? Well, how about the scenario where someone is killed for exposing someone to an STI? How about the person who is too embarrassed or ashamed to get help and kills themselves? What about the person who is shamed across their community, online, for having an STI, with a glowing condom as “proof”? And why do I even have to reach for such dramatic examples of “maybe we shouldn’t do this,” when “how do you manage consent” is such a present and problematic issue?

Of course, this isn’t just about innovation–it’s also about disruption. And the disruption here isn’t the idea of colour-changing-upon-STI-detection condoms, but who is having the idea: teenagers. These teens are envisioning medical devices that convey diagnostic information, and they’re doing so outside the normal channels we expect medical devices to be developed in. That means the typical standards that are in place for medical device development (specificity, standards, quality control, labeling requirements, documented risks, etc.) haven’t been addressed (or likely even thought about). To quote Nick, “In the world of distributed technology, these things are increasingly up to the individual, and we need to start adopting an ‘ethics of design’ into our disruptive tech scene.” In other words, this project shouldn’t have made it to a technology competition without serious consideration given to plausibility, specificity, testing standards, and the other things that, presumably, the condom manufacturing company that has partnered with these teen boys will now take on.

It’s fantastic that there are challenges like TeenTech to encourage teenagers to pursue STEM-related careers. However, we need to lay the foundations for good research by teaching all aspects of research and development, including ethics, and to make sure that our enthusiasm for encouragement doesn’t overshadow the necessity of ethical oversight.


*26 June 2015: Just a quick clarification, based on Maggie’s nice comment below: I want to stress that what the teens presented at TeenTech is a concept, not an actual prototype or working model. I do think that this stage is definitely where the “ethics of design” needs to be built in, but presumably–hopefully–that dialog will keep happening as the teenagers partner with a company to see what happens in science after you have that initial “oh hey” idea.

Apple Updates HealthKit’s Ethics Requirements–But Don’t Celebrate Just Yet

In the on-going drama of Apple’s ResearchKit and its lack of conforming to modern expectations regarding human subjects research, Apple has updated the guidelines for apps “using the HealthKit framework or conducting human subject research for health purposes, such as through the use of ResearchKit,”1 requiring “approval from an independent ethics review board.” At first blush, this seems great–one of the bigger problems raised when Apple debuted Health/ResearchKit in March was that there didn’t appear to be any nod to or concession towards the necessity of ethical oversight of human subjects research, a conversation that’s been growing louder over the years, especially as Silicon Valley has become more interested in the potential “killer app” money behind health care products.

Unfortunately, a closer read of the actual guidelines shows that they still leave a lot to be desired, and Apple really needs to bring in someone familiar with medical ethics and health policy to help not only with the language of its guidelines for apps, but also with reviewing any app that wants to utilize the HealthKit framework or use ResearchKit for health-related research.

The revised guidelines can be read here; a snapshot of section 27, HealthKit and Human Subject Research, taken on April 30, 2015, can be seen to the right (click to embiggen). The particular language regarding ethics review boards is at the very end:

27.10 Apps conducting health-related human subject research must secure approval from an independent ethics review board. Proof of such approval must be provided upon request.

Obviously, the first and largest problem here is that proof of ethics board approval isn’t required up front; it merely needs to be available upon request. But a tumble of questions spills forth from that:

  • Who will have the capability to request to see this paperwork?
  • Can end users say, “I want to see the ethics board approval”?
  • What is going to trigger Apple wanting to see this paperwork?
  • Who’s going to make sure that there was actually approval, rather than just submission? It’s not like it’s unusual for companies to try to fly under regulatory radar and sell products or services that haven’t been approved for their specific use (see: 23andMe, LuSys Labs).
  • Who at Apple is qualified to know that the ethics approval was granted by a legitimate, registered institutional review board (IRB)? (Does Apple even know how to check this information?)
  • Is Apple’s use of “independent ethics review board” an acknowledgement of outside-the-US names (where “Research Ethics Committee” or “Independent Ethics Committee” are more frequently used), or is this a way to dodge the requirement of use of an IRB, which does have specific and legal meaning within the USA?
  • What level of paperwork is Apple expecting app submitters to have for IRB approval? (Will they need to show the full paperwork filed? Will Apple be policing that paperwork to make sure it was what was necessary for the app’s purpose? Will they require meeting minutes? A one-page sign-off from an IRB?)
  • Precisely what qualifies an ethics review board as “independent”?
  • Uh, what is “health-related” research, anyhow?
  • If the ethics review board says “this isn’t something that needs our approval, so here’s a waiver,” will Apple accept that as “approval”? (Because technically, that’s not approval.)

And of course, separate from this is the fact that currently, research (at least within America) only requires IRB oversight if money for that research is coming from the federal government. While yes, it’s true that all legitimate academic journals will require that the research was approved by an IRB and followed the conventions of the Declaration of Helsinki, not everyone is doing research with an eye towards publication in a peer-reviewed journal. This means that anyone doing HealthKit or ResearchKit work who is not embedded within an academic institution with access to an on-site IRB will have to pay a for-profit IRB to review the app design and research goals – will Apple be looking for proof of payment? (And of course, that assumes that Apple will consider a university IRB “independent.” I’m relatively sure Carl Elliott would have some choice words about that particular assumption.)

All in all, this–the entirety of section 27, to be frank–reads as Apple scrambling, post-debut, to mollify the science journalists and media-savvy ethicists who have been honest and critical about Apple’s failures to understand even the most basic aspects of protecting the subject in human subjects research. It doesn’t indicate that Apple understands what is actually required of those doing human subjects research, only that Apple’s lawyers seem to be aware that there is serious potential for a lawsuit here, and are thus trying to figure out how best to cover their corporate asses.