Richardson & Almeling on the CDC’s Pre-pregnancy & FASD “Guidelines”

Although it’s not the first thing you learn in ethics, the idea that you’re not going to be popular probably should be; it really does make life a lot easier. After all, a large part of the job of the ethicist is to be unpopular:

  • no, you can’t modify that flu virus so that it’s more contagious and more deadly than the lovechild of smallpox and the Spanish flu;
  • yes, it’s okay that this person wants to die;
  • no, you can’t just put fecal microbes in open brain wounds;
  • sorry, no, the science doesn’t support your claim;
  • who will the car hit;
  • you fired everyone NOW;
  • does the benefit justify the risk; and so on.

You get the idea.

So I wasn’t terribly surprised to face the typical backlash when I noted just how unscientific, shaming, stigmatizing, and plain wrong the CDC’s recent “treat every woman1 as pre-pregnant”2 declaration was: the declaration that no person with a uterus should drink unless 100% certain there’s no uterus-crasher in residence. But it’s always nice when folks who have the respected PhD after their name (and are at Ivy League universities) join the chorus.

I recommend reading Richardson and Almeling’s op-ed in its entirety, but here are the choice pieces:

The CDC’s overly broad advisory damages its credibility as a source of clear, balanced advice about health risks. A risk may be “real,” but it may not be large or well substantiated. The CDC claims that “drinking any alcohol at any stage of pregnancy can cause a range of disabilities for a woman’s child.” Yet a balanced review of the scientific evidence does not support such unequivocal claims. In fact, medical research suggests just the opposite. For example, the Danish National Birth Cohort Lifestyle During Pregnancy Study demonstrated that moderate drinking during pregnancy carries no long-term risks.

First, the CDC needs to be clear that the science on the risk of alcohol during pregnancy is far from settled. Any advice about reproduction should respect the autonomy and intelligence of women by presenting evidence in its full context. Public health officials should provide perspective about the size of the effects relative to other common risk factors. And they should be straightforward in describing the evidentiary base for health advisories.

The CDC can regain credibility in this realm by providing information to women and men that details the relative risks of various behaviors, as well as the state of scientific debate regarding the evidence supporting these assessments.

The CDC’s mission is to identify and address clear and present dangers to the public health. As such, their credibility is literally a matter of life and death … Issuing guidelines with all the nuance of a sledgehammer only damages the public’s trust in federal health recommendations.

There are possible risks to drinking while pregnant, and women should be told what those risks are. But they’re not clear-cut, they’re not well understood, and there’s no guarantee that abstaining from alcohol means a baby won’t be diagnosed with Fetal Alcohol Spectrum Disorder; like many disorders, it’s a diagnosis of exclusion, and the criteria for diagnosis do not require confirmation of alcohol consumption during pregnancy (in fact, at least one paper in Pediatrics suggests that if a woman has a child diagnosed with FASD and says she abstained during pregnancy, she must be lying about her drinking habits).

But there are a lot of risks to women while pregnant, and unless you’re advocating that women be wrapped in bubblewrap and never let outside of a padded room while pregnant (which in itself is probably a risk for something), then pregnancy, like life itself, is about balancing risks, benefits, and rewards. To make an accurate risk/benefit analysis, women first need to know what the science says, not be handed a sledgehammer of paternalistic, unscientific fearmongering.


One Key Question: Why “Would You Like to Become Pregnant in the Next Year” is a Bad Idea

Note: I wrote this last year when the One Key Question initiative in Oregon was being discussed, and pitched it to an appropriate publication. Unfortunately, the editor of that publication somewhat maliciously strung me along and sat on it until it was no longer timely, and it’s been sitting in my sads folder since. With the recent CDC recommitment to the notion of pre-pregnancy, I decided this should at least be published on my blog.


A “simple, routine question” advocated by the Oregon Foundation for Reproductive Health is a great way to alienate and further disenfranchise women who are childfree.

A new piece on Slate discusses one of the most alienating ideas I’ve read in a while, and that’s saying something: I wrote about the Hobby Lobby SCOTUS decision last week. In a nutshell, it argues that for effective and proactive reproductive health care, primary care physicians should ask a woman, at every visit, if she would like to become pregnant in the next year.

On the surface, the One Key Question Initiative, by the Oregon Foundation for Reproductive Health, may seem like a good idea. Many women have access to a primary care provider, but do not see OB-GYNs with any regularity. And of course, discussing reproductive and contraception options with a patient should be a basic part of yearly, preventive, or wellness exams. The problem is not in discussing reproductive and contraception options, but in the framing of the question: would you like to become pregnant in the next year?

If I heard this from my doctor at every visit, I would change doctors. I expect my doctor to listen to me, and expect my doctor, after the first time I explain that I am childless by choice, to respect my decision. Asking me, repeatedly, if I would like to become pregnant in any time frame ignores my stated preference and decision. It falls into the cultural stereotype that women must want children, and that if they’re asked enough, if they get old enough, if they just meet the right man, they’ll change their minds.

Statistics indicate that I’m not alone in my desire to not have children. In fact, a third of women in the “acceptable childbearing age” bracket of 20-44 don’t have children,1 and 20 percent of women won’t have children.2 Many of these women are involuntarily childless, either for medical reasons or circumstance, but a recent survey by DeVries Global suggests that as many as 36 percent of those who are childless are voluntarily childless.3 As such, medical appointments should not be used as an opportunity to emphasize the stigma of the choice not to have children.

And make no mistake: there is still a significant stigma attached to choosing not to have children. (One of my favorite paper titles ever is “Women without Children: A Contradiction in Terms?”) Women are judged for not having children: they are selfish, immature, refusing to grow up. The crazy cat lady has become a modern boogieman to scare women with. Headlines scream “The Trend of Not Having Children is Just Plain Selfish” (The National Post), and women are assured it’ll be different when it’s your child and that they will regret their choice to remain child-free. Some of these beliefs are so deeply ingrained into the culture that women under the age of 30 have a difficult time finding doctors who will tie their tubes, thanks to a persistent, paternalistic attitude that doctors know better than women about their reproductive desires, which Slate itself covered in depth in 2012.4

We’ve had this conversation before, when 2006 federal guidelines resulted in women of reproductive age being labeled “pre-pregnant” and treated as if they could fall pregnant at any moment. As bioethicist Rebecca Kukla noted, the idea of pre-pregnancy literally treats the non-pregnant body as on its way to pregnancy, with non-pregnancy seen as a fleeting and temporary state; it also reinterprets primary care for women into reproductive care.5 The One Key Question Initiative brings us right back to the pre-pregnancy focus on what some people have dubbed “bikini medicine” – all attention on a woman’s reproductive organs first and foremost – creating a strong pro-natalist, coercive discourse about women’s healthcare, and shifting the focus to future outcomes (pregnancy and children) rather than the immediate patient at the appointment.

This is not to say that the ultimate goal of the One Key Question Initiative, to “ensure that more pregnancies are wanted, planned, and as healthy as possible,” is wrong. In fact, I firmly come down on the side of every child a wanted child, and as authors Julie F. Kay and Michele Stranger Hunter note, “about 85 percent of couples not using contraception will become pregnant in the next year, whether they intend to or not.” Primary care physicians should ask their female patients about childbearing and reproduction; the physician should know the patient’s preference and note it in her chart. In following visits, it’s more than acceptable to ask a woman who indicated she is not interested in bearing children if her contraceptive choice is working as desired, if there are any side effects, even if the woman wants to make any changes to that contraception. What isn’t okay is to make “would you like to become pregnant in the next year” a mandated question operating from a presumption that pregnancy is always a possibility on the horizon.


American Thoughts on Australia Day & Acknowledgement of Country

Yesterday/today (the 26th of January; time is a weird thing when you’re straddling the dateline) is Australia Day, also known as Invasion Day. For European Australians, “settlers,” it’s a day of celebration akin to the drunken antics of Americans on the Fourth of July; for Aboriginal Australian and Torres Strait Islander communities, it’s a day of mourning, marking invasion and the subsequent struggle to survive. So basically, the partying of the Fourth of July mixed in with Thanksgiving–after all, the European Australians did much the same to the Aboriginal Australians and Torres Strait Islanders as Americans did to Native Americans.

White folks, we aren’t so great at respecting other cultures.

Survival Day is becoming a common reference instead of Australia Day, but it seems like a general preference is still to separate out BBQs and beer from remembering genocide. (Click image to enlarge.)


And the thing is, it’s not like the Indigenous Australians aren’t down with celebrating Australia–they are, after all, Australians, too. It’s just they’d really like it if perhaps the party could be held on not the same day that commemorates mass slaughter and attempts at cultural eradication that still go on today.

Anyhow, you should read this article over at Buzzfeed and watch the embedded video, below. But what I wanted to talk about was something that I saw people doing online: identifying the land they woke up in. This seems to be a variation on the Acknowledgement of Country that happens at a lot of official events, and it’s one where individuals, yesterday, were acknowledging the historic people of the land they live on:

This, I thought, was neat, and a way of showing respect to people who you yourself may not have harmed, but your ancestors did harm, by virtue of their participation in the forming of the place called Australia–or America.

I thought I’d compile my own list of the Native American lands I’ve lived on in my time floating across the United States; what I didn’t imagine was that it would take me several hours to track this information down. After all, I grew up attending Ohlone events in the San Francisco Bay Area, and doesn’t everyone know that the Duwamish were the historical peoples in the Greater Seattle area?

Except that the Ohlone, formerly the Costanoans, didn’t view themselves as a single “Indian tribe” but a loose group of about 50 distinct landholding tribes or bands who shared a similar language, religion, and culture but saw themselves as distinct. They, like many other Native peoples, were squished together into readily identified tribal groups by the United States government during its long period of sucking, and trying to find out the specifics of the folks who lived in a specific area rather than the region (so I could answer the equivalent of “Philadelphia” instead of “the mid-Atlantic”) proved…frustrating. A lot of this is because by the time anyone in America had the idea that maybe they should record this information, the people were dying or dead; many of the last speakers of languages, the last of their group, tribe, people, were dying in the late 1800s to 1920s, and American society was set on eradication of tribal groups. The disappearance of this knowledge was just fine with most.

So it is with some struggle and uncertainty that I can say I have lived on the lands of the following people:
The Muwekma Ohlone (Alson, Seunen, Luecha, and Puichon)
The Numa, Washeshu and Newe People
The Kalapuyan Peoples (Chelamela and Tualatin)
The Multnomah People
The Duwamish Tribe (Skagit-Nisqually/Lushootseed)
The Iroquois League/Haudenosaunee (Mohawk Nation)

And this morning, I woke up on Lenni Lenape (Unami dialect) land.

I don’t really have anything quippy to say to finish here. I think that the way we–Americans and Australians–handle our commemoration of events is painfully white and alienating, that we casually erase history with no thought to the pain it causes people who call that history their own. I think that it’s a shame we have to repeatedly have conversations about whether or not it’s a problem to have sportsball teams named after racist slurs, that we set up parties on days of massacres, that we celebrate the slaughter of millions with mattress sales and BBQs, and that we can’t get it through our heads to treat the other folks we live with, folks of colour, with the respect we want for ourselves every day.

Knowing the names of the tribal lands I have lived on won’t change any of that, but at least it allows me to move a bit closer to an ideal of mindfulness and respect that I think we should all strive towards.

Fully Autonomous, Self-Driving Cars and Disability

I'd have some variation on this view every damned day, I am so not even kidding.


Ah, self-driving cars. If I had a dollar for every time someone told me that I’m going to get my freedom back,1 I’d retire to Barbados and sip delightful rum drinks all the rest of my days. The most common version of this tends to include The Oatmeal’s exciting comic of the awesomeness of autonomous cars, including the heartfelt wish for his mother to be able to get around independently again. “Look Kelly, aren’t autonomous driverless cars fantastic? You’ll be normal again!”2

Oh, so many things to unpack in what is generally a well-intentioned, but ultimately irritating, statement.

First and foremost, let’s be clear about this: autonomous cars are not being developed for the disabled. Oh sure, the disabled may eventually benefit, but they’re not the target. For one thing, the pay gap between the working disabled vs able-bodied workers is huge–in some states, up to 37% less, and that gap persists regardless of education attained. The fully disabled are often among the poorest people in American society.

People who earn a lot less than average, people who are often in the lowest socioeconomic bracket, are not the people who are targeted with shiny new technological advances…like self-driving, fully autonomous cars.

But let’s wave our hands and put that aside, and say we live in a magical world where this isn’t an issue–Bill Gates and Mark Zuckerberg decide to team up to make sure every single disabled person in America has access to one of these awesome new cars.

There’s still the steering wheel.

In education, where Texas goes, textbooks go–it’s why the legal debate over what’s inside those bindings gets so much coverage. Texas is a massive market for textbooks, and it’s easier to build to that market and push the results on others than to try to do something different for other states.

California is kind of the Texas of technology, and California has said newp, fully autonomous cars must still have brakes that a driver can operate, and a steering wheel a driver can override and control. Not very easy to do if you’re blind, if your foot doesn’t work, if you can’t rotate to look over your shoulder, or all the other reasons people are no longer able to drive.

Comic by xkcd/Randall Munroe.


And it’s not just California that’s cautious. A brand-new study by Volvo shows that 92% of folks believe they should be able to take over control of an autonomous car at any point.3

All of which means that anyone who wants a fully autonomous, self-driving car is going to have to be able to afford the car and be able to drive it normally. That’s going to exclude all those disabled folks who aren’t driving because of their disability.

But those facts aren’t really the whole of it, or the worst of it. The whole, the worst, is this: people, whether they’re companies or tech evangelists, are selling a false promise. The whole “this is going to revolutionize the life of disabled people” is selling the public a bill of goods and being used to generate positive feelings about new technology; I can almost guarantee any advertising we see will be warm, fuzzy, and all about family and “regained ability.”

People who are disabled, disability itself, is being used to sell the concept of self-driving, autonomous cars to able-bodied folks, when the reality is, they’re among the last of the groups who will benefit from these technologies.

“We’ll save the disabled people” is not only irritating, it hurts.

If you want to help disabled people–people like me–have better access to the community around them, advocate for better transit, better walking and biking communities, and easier and cheaper access to paratransit.4 Don’t use my inability to drive (ironically, because of a car accident) to feed or feel better about your desire for novel technologies.

With thanks to Bethany, for understanding.


Google demonstrates people dismiss philosophy when they don’t understand it


This is pretty much how I generally feel about the trolley problem, and it makes me cranky to end up defending it so often. (Comic by xkcd.)


Earlier this week, Chris Urmson, chief of Google’s self-driving cars project, made a pretty big mistake for someone so high up at Google: he dismissed philosophers and the trolley problem as irrelevant to self-driving cars.

Now, people dismissing philosophers as irrelevant isn’t terribly unusual (see: Pinker, deGrasse Tyson, Dawkins, et al). But it is a bit unusual to see Google make such a novice mistake, especially a representative so highly (and publicly) placed in the company. Speaking at the Volpe National Transportation Systems Center in Cambridge, Massachusetts, Urmson said:

It’s a fun problem for philosophers to think about, but in real time, humans don’t do that. There’s some kind of reaction that happens. It may be the one that they look back on and say I was proud of, or it may just be what happened in the moment.

It’s pretty rare to see someone so clearly make a mistake like this, but I’m not one to make light of gift horses. So yes, technically, Urmson is right: no one (not even those useless philosophers) thinks that your mother is carefully evaluating all options when she slams on her brakes and throws her arm across your chest, pinning you to your seat, even though you’re 32 years old. No one actually thinks a driver is thinking “hmm, that moose is headed right for me, but there are a bunch of kids waiting at the bus stop where I’d normally veer, so should I go ahead and hit that moose and avoid the kids, or should I swerve towards the kids and hope I can miss them all?” and then weighing out costs, risks, benefits, and so on. That’s just silly.

Furthermore, as Ernesto Guevara noted on Twitter, while Urmson is quick to dismiss philosophy and ethical decision-making, he goes on to discuss the engineered decisions on who a Google car will try to avoid hitting: vulnerable road users (think pedestrians and cyclists), then other vehicles on the road, and lastly things that don’t move. Hmm. A hierarchy of decisions about who a car will or will not hit… sounds an awful lot like moral decision-making to me.
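That engineered hierarchy is, at bottom, just a priority ordering. A minimal sketch of the idea, purely for illustration: the category names, rankings, and function below are my own assumptions, not anything Google has published about its actual implementation.

```python
# A toy sketch of an avoidance hierarchy like the one Urmson describes:
# vulnerable road users first, then other vehicles, then static objects.
# All names and rankings here are illustrative assumptions.

AVOIDANCE_PRIORITY = {
    "vulnerable_road_user": 0,  # pedestrians, cyclists: avoid hardest
    "vehicle": 1,               # other cars on the road
    "static_object": 2,         # things that don't move
}

def choose_collision_target(obstacles):
    """Given a list of (name, category) obstacles where a collision is
    unavoidable, return the one with the lowest avoidance priority,
    i.e. the obstacle the car would 'choose' to hit."""
    return max(obstacles, key=lambda o: AVOIDANCE_PRIORITY[o[1]])

# Forced to choose between a cyclist and a parked car, the sketch
# picks the parked car.
obstacles = [("cyclist", "vulnerable_road_user"),
             ("parked car", "static_object")]
print(choose_collision_target(obstacles))  # ('parked car', 'static_object')
```

Even this trivial version makes the point: someone had to decide that ordering, and deciding who gets hit last is a moral judgment, not a purely technical one.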

Which brings us to what the trolley problem is: it’s an acknowledgment of the fact that people make these decisions, not an example of how people make these decisions. By trying to figure out how people react to choosing to hit one person vs five, or pushing a person to their death vs pulling a lever1, and other permutations of the thought experiment,2 philosophers are trying to detangle the instantaneous “some kind of reaction” that Urmson so vaguely dismisses.

Yes, drivers just react in the moment, but those reactions a driver makes–and how we think about them when faced with thought experiments–are reflections of values, culture, society. Trying to discern the nuances of those values and how we get to them is at the heart of the trolley problem, and in the case of those who are building self-driving cars, it seems like it would be a pretty good idea to understand why a mother might throw her arm over her child and swerve to the right (in order to protect her child and take the impact of an oncoming car herself) vs a teenager swerving to avoid hitting a dog that darts into the street but ends up hitting or endangering people playing on the sidewalk, instead.

That Urmson thinks philosophers believe people make these kinds of decisions on the fly, and that’s what the trolley problem is about, highlights his lack of familiarity with the idea of not just the trolley problem, but applied philosophy–which should worry anyone who is watching not only the development of self-driving cars, but Google’s continued branching out into all aspects of our increasingly wired lives.