Ethics and Materiality

Indeed, there is no body as such; there are only bodies – male or female, black, brown, white, large or small – and the gradations in between. Bodies can be represented or understood not as entities in themselves or simply on a linear continuum with its polar extremes occupied by male and female bodies… but as a field, a two-dimensional continuum in which race (and possibly even class, caste, or religion) form body specifications.
-Elizabeth Grosz

In contrast to the body, embodiment is contextual, enmeshed within the specifics of place, time, physiology, and culture, which together compose enactment. Embodiment never coincides exactly with “the body,” however that normalized concept is understood. Whereas the body is an idealized form that gestures toward a Platonic reality, embodiment is the specific instantiation generated from the noise of difference.
-N. Katherine Hayles

It has occurred to me, over the course of reading Hayles’ book How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, that the field of ethics, and specifically of bioethics, is all about realizing the data made flesh. Or, to be less obscure, it’s about realizing that while we’re all individuals, we are also all connected with one another. The arguments about multiple and conflicting autonomies make no sense if you take the modernist view of each of us as a separate and unattached being. Likewise, the postmodernist, disembodied concept of self also has very little play, because beneficence (and again, autonomy) is often tied to a physicality that postmodernity prefers to ignore. It’s when we get to this material poiesis, this materiality of data made flesh, that we have a system that acknowledges both the physicality of the body and the connectivity of the, for lack of a better word, soul, or self.

Vision Does Not Require Technology

A large part of the charm of Vannevar Bush’s paper As We May Think lies in reading a 60-odd-year-old article and identifying the technology he predicted. Polaroid and digital cameras, virtual reality glasses, the TCP/IP protocol, cochlear implants, hard drives, and eBook readers are a sample of the ideas that can be traced from his predictions to what we have today. (For example, take this passage:

Is it not possible that we may learn to introduce them [sounds into the nerve channels of the deaf] without the present cumbersomeness of first transforming electrical vibrations to mechanical ones, which the human mechanism promptly turns back to the electrical form? With a couple of electrodes on the skull…

It is, in essence, an abstract of the cochlear implant.)

What really struck me about Bush’s article was not so much his ability to predict technology (science fiction has done that for years), but that it clarified something that has been floating in the back of my head for a while now: technology is always behind ideas. To illustrate what I mean, I’m going to switch over to a brief history of the microscope and germ theory.

Glass grinding for lenses reached a crucial point of advancement in the late 17th century, and people were able to take magnifying glasses to the next level: the microscope. And as soon as people began looking under the microscope, it became clear that smaller things existed. What were these smaller things? Animalcules? Were they alive? What did they do? Were there things smaller than the flea, that pet subject of early microscopic viewing? Some people began to speculate on this, and advanced a theory that these animalcules, invisible to the naked eye, were really the cause of disease, rather than internal putrefaction or devils-as-punishment. But although it was possible to see some things, it wasn’t possible to see down to the level of viruses and bacteria. So although the ideas of germ theory and contagion were first proposed in the 1600s, it took another 200 years for the idea to really catch hold and be advanced.

Why 200 years? Because that’s how long it took to advance optics to the point of being able to see viruses and bacteria.

What we see in Vannevar Bush’s article is that ideas can be dreamt up long before the technology is actually in place to make them real. Much like Star Trek’s communicators laid down the path for cell phones some 40 years later, As We May Think laid the tracks for many different technologies to come. Bush was still limited in his vision by the constraints of his time (imagining that large rooms of women and punchcards would manipulate these mega-machines, for example), but much like those early microscopists who saw the first glimmer of possibility in the microscopic eye, he was able to take the limits of the time and extrapolate out to the possibility of the future.

A Brief History of Medical Knowledge

The Doctor’s Decalogue

For in ten words the whole Art is comprised —
For some of the ten are always advised:
Piss, Spew and Spit,
Perspiration and Sweat;
Purge, Bleed, and Blister,
Issues and Clyster.
– Edward Baynard, M.D. 1719

The body of medical knowledge has existed in three distinct phases. The first phase stretches from the beginnings of history to about 450 BCE, the time of Pythagoras and Hippocrates. What we now consider Hippocratic medicine took for granted that disease is caused by natural agents and governed by natural law (that the world is ordered and governed in a certain way). No one really knows why the Greeks suddenly shifted to this naturalistic view, but it’s been the basis of our medical thinking ever since.

Pre-Hippocratic medical knowledge was interpreted in strictly supernatural terms, while Hippocratic medicine saw illness as a practical matter. The big differentiation here is what caused disease: Asclepian medicine assumed that all disease was a spiritual matter; you had made Asclepius unhappy, so you prayed to him to heal you, et cetera. Hippocratic medicine, on the other hand, made the effort to treat medicine scientifically; it assumed that you could understand and explain disease by natural law. The Hippocratic medical literature also developed procedures of examination that would not be significantly expanded upon until the early 1800s.

In fact, the next major era of medical knowledge came about only a few hundred years after the advent of Hippocratic medicine, with the prolific Galen. Until the mid-1500s, all knowledge of how the body worked came from Galen’s dissections of pigs, Barbary apes, and cows. Looking at his anatomical drawings, it is very clear that the only time he saw the inside of a human body was in the aftermath of battles. Regardless, his prolific publication of material and his sheer intelligence made him the authority in medicine for the next 1000+ years.

Towards the middle of the 16th century, this steadfast belief in Galenism began to change, largely with the advent of the scientific revolution. People began to see that an understanding of nature is obtained not from authoritative texts but from observation, experimentation, and quantitative reasoning. Medicine slowly became a scientific activity, one where you experiment and learn for yourself, as opposed to relying on book-learning. (As an aside, there is a fabulous painting called Habit de Medecin that, for the life of me, I couldn’t find an image of – this is a pity, as it represents the mid-1500s view of what a physician was composed of: primarily books.) But even with this shift in thinking and move towards experimentation and direct experience, medicine was still virtually the same in 1700 CE as it was in 200 BCE. The major advancements, and the third period of medical knowledge, didn’t begin until the mid to late 19th century.