Life as an Extreme Sport

Ethics and Materiality

Indeed, there is no body as such; there are only bodies – male or female, black, brown, white, large or small – and the gradations in between. Bodies can be represented or understood not as entities in themselves or simply on a linear continuum with its polar extremes occupied by male and female bodies… but as a field, a two-dimensional continuum in which race (and possibly even class, caste, or religion) form body specifications.
-Elizabeth Grosz

In contrast to the body, embodiment is contextual, enmeshed within the specifics of place, time, physiology, and culture, which together compose enactment. Embodiment never coincides exactly with “the body,” however that normalized concept is understood. Whereas the body is an idealized form that gestures toward a Platonic reality, embodiment is the specific instantiation generated from the noise of difference.
-N. Katherine Hayles

It has occurred to me, over the course of reading Hayles' book How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, that the field of ethics, and specifically of bioethics, is all about realizing the data made flesh. Or, to be less obscure, it's about realizing that while we're all individuals, we are also all connected with one another. The arguments about multiple and conflicting autonomies make no sense if you take the modernist concept of each of us being a separate and unattached being. Likewise, the postmodernist, disembodied concept of self also has very little play, because beneficence (and again, autonomy) is often tied to a physicality that postmodernity prefers to ignore. It's when we get to this material poiesis, this materiality of data made flesh, that we have a system that acknowledges both the physicality of the body and the connectivity of the, for lack of a better word, soul, or self.

contemplations of a final project

Right now I'm thinking along the lines of science fiction and how we've gone from the utopias of 1960s scifi to the dystopias of today. I was originally just gonna talk about scifi dystopia, but I think I might be able to weave a narrative about the advances of computing technology and how those advances have changed the popular conception of computers. I think that Gibson, via Neuromancer, really created the genre of computer-related dystopias… although for obvious reasons Clarke would have to be the grandfather of it* (although I'd have to reread Dick's Minority Report). While the movie certainly had a computer-generated dystopia, I don't recall the short story being anything like that. Scifi really morphed from computers as augmentation of humans and allowing the creation of dystopias a la 1984 and Harrison Bergeron to computers as oppositional forces a la Neuromancer and The Matrix.

I got thinking about this because Ceruzzi, in his book on the history of computing, asks if we can begin to conceive of a world where computers have a negative impact, and says that no one saw the car and thought of smog. Now, granted, he wrote the book four years ago, and has only briefly updated it since then, so I have to forgive him The Matrix, et al. – but Gibson wrote Neuromancer back in 1983(ish).

*A bit of poking shows that Blade Runner came out in 1982, and Gibson published Neuromancer in 1984. So obviously Dick has some serious paternity of the concept of computer-created, futuristic dystopias as well. Perhaps a better idea would be taking either Blade Runner or Neuromancer and using it as a starting point to jump into the history, tracing key concepts back and then forward again – for example, if I were to use Blade Runner, to trace the idea of the cyborg automaton backwards to a convenient beginning, and then forward again to hit up into Dick's Androids…

Stuff to think about, and I would still need to find a group to integrate with, project and presentation-wise. But it has definite possibility.

Concluding Nostalgia

A running train of thought on the concluding passages of Ceruzzi's A History of Modern Computing.

Phillip intimates that we’re beyond postmodernism – how does he see it, then? Apple still follows a very modernist conception of business, controlling every aspect of its product and (save for a brief period in the early 1990s) not allowing anyone else to produce finalized hardware models. Or is that the key – that although they control the end product and look very vertical on the surface, they have actually differentiated out and adopted a postmodern strategy of allowing many other companies to make parts that are only assembled into a final Mac-whole at the end of the production cycle?

One could apparently make an argument for Apple following either a vertical or a horizontal market model – is this a new post-postmodern world where both models live and function side by side, depending on the benefit offered, or have I missed some inherent structural mark that would clearly separate a niche company back into a modernist vertical structure or plant it firmly in the distributed network of the postmodern?

Reading this book is a trip down my childhood memory lane. It's so strange to think there are people out there for whom Gopher is just a mascot, Mosaic is something you do with tiles, and BBS makes no sense whatsoever. People for whom acoustic coupler modems, baud, hell – external modems – are just odd and fragmentary parts of the digital past. For me, each ghost brings a fond memory, a smile, and a realization of how each shaped who I am today.

It's interesting that Ceruzzi asks if we can anticipate "the dark side of networked digital computing" – after all, isn't that what William Gibson did with Neuromancer, back at the very advent of the personal computer? Isn't this prediction of the dark side what an entire genre of storytelling has been based on – Neuromancer, Do Androids Dream of Electric Sheep?, Spider and Jeanne Robinson's neural net addiction books, Neal Stephenson, even the entire cyberpunk movement?

One could even argue that writers like H.G. Wells predicted the chaos and congestion caused by the car and the ensuing social changes, but these would admittedly be tenuous connections at best. The authors and stories above, though, can hardly be dismissed as tenuous – their inspiration stems directly from the advances made in technology, and their writing from predictions of the horrors that could come of it.

Intelligence Amplification

Douglas Engelbart (noted in the link as having been strongly influenced by Vannevar Bush, which is quite obvious when you read As We May Think and Augmenting Human Intellect back to back) covers a wide range of ideas in his paper Augmenting Human Intellect. You see Greenblatt's wonder ("If he is a layman, his concept of what provides this sophisticated capability may endow the machine with a mysterious power to sweep information through perceptive and intelligent synthetic devices."), a heavy nod to Bush via predictions of future technology (tablets, cell phones), a host of turtles running through the paper ("If we ask ourselves where that intelligence is embodied, we are forced to concede that it is elusively distributed throughout a hierarchy of functional processes – a hierarchy whose foundation extends down into processes below the depth of our comprehension"), and a strong thread of the synergism between science fiction and science fact. But what I really wanted to talk about was this:

However, Korzybski and Whorf (among others) have argued that the language we use affects our thinking to a considerable extent. They say that a lack of words for some types of concepts makes it hard to express those concepts, and thus decreases the likelihood that we will learn much about them. If this is so, then once a language has begun to grow and be used, it would seem reasonable to suspect that the language also affects the evolution of the new concepts to be expressed in that language.

Apparently there are counter-arguments to this: e.g., if a concept needs to be used often but its expression is difficult, then the language will evolve to ease the situation. However, the studies of the past decade into what are called "self-organizing" systems seem to be revealing that subtle relationships among its interacting elements can significantly influence the course of evolution of such a system. If this is true, and if language is (as it seems to be) part of a self-organizing system, then it seems probable that the state of a language at a given time strongly affects its own evolution to a succeeding state.

I wish that Engelbart, as well as Korzybski and Whorf, were able to comment on a recent press release by Teachers College, Columbia University, which found that the Piraha, an obscure and small Amazonian tribe, have no conception of numbers. Until now, no one has definitively answered Whorf's basic question of whether people in one culture cannot understand a concept from another because they have no words for it. While it's debatable whether the new research puts any nails in theoretical coffins, it seems to strongly indicate, against the opposition to Whorf's hypothesis, that language exerts a force on its own evolution; after all, as Engelbart notes, most linguistic changes since Shakespeare's time have been minor changes where concepts are forced onto existing words, rather than the coining of new words (creating a verb from "google" being a notable exception). Anyhow, I digress.

What is interesting about the Piraha is that they seem to confirm the neo-Whorfian hypothesis proposed by Engelbart: that the language used by a culture, and its capability for effective intellectual activity, are directly affected during their evolution by the means by which individuals control the external manipulation of symbols. It seems that they do prove that our means of externally manipulating symbols influences how we think and the language we use; after all, it's the Piraha adults who are unable to significantly change their language to grasp a conception of numbers and math – the Piraha children had no problem with either. (Of course, this does lead to a host of potential confounding variables, such as diet's effect on the developing brain and the ability to learn a language later in life, for of course math is as much a language as any other. It also seems to cast some doubt on the idea that the brain doesn't code-freeze itself at a certain point in life, but I'm digressing again…)

However, as much as the Piraha could cast support towards a neo-Whorfian hypothesis, one must wonder if they instead undermine the entire foundation of Engelbart's theory. After all, while the Piraha have made steps in Engelbart's listed historical progression of the development of our intellectual capabilities, they do so out of order, covering only two of the four categories listed. The Piraha have obviously mastered concept manipulation – abstracting ideas and situations, allowing the development of general concepts – and manual, external symbol manipulation, which largely gives graphical representation to symbol manipulation. But they have not grasped the second stage (the others mentioned being stages one and three, respectively); the Piraha seem to lack the basic symbol manipulation that lets them differentiate a single sheep from seventy, and they certainly do not have the fourth stage of automated external symbol manipulation, which allows the use of technological devices to rapidly move symbolic data before a user's eyes.

Ultimately the Piraha are going to be curiosities for those who study linguistic formation, and won't have a massive impact on the overriding theoretical conceptions behind Engelbart's H-LAM/T system; however, when any underlying theoretical foundation is shaken by a new discovery, it's worth analyzing the overarching concept to see if there are flaws. In this particular case, the Piraha seem to be a side-anomaly that doesn't take away from Engelbart's augmenting-intelligence idea; after all, with his neo-Whorfian hypothesis he's created a model of symbol manipulation that is applicable to his particular, Western culture. It's only when you expand outside the cultural loop Engelbart operates in that his conceptions begin to become problematic.

From Whence Comes Creativity?

Where does creativity come from?

This has been floating in my head the past few weeks, as I read of the desire to augment humans and to remove rote tasks from our daily lives, leaving us free to be creative and creatively minded. Many of the early thinkers in computer communication technology seem to think that if we could just remove the 80% of the time we spend doing paperwork, our creativity would rapidly expand and fill the void created by delegating the filing of papers and basic research/fact-checking to some sort of automated, computerized task.

I find this idea troubling, not because I enjoy mindless and repetitive tasks, but because while doing those mindless and repetitive tasks I tend to have my best ideas. There is something about having to do a project on a slight autopilot that seems conducive to creative thought; how many times have you heard someone say that they had their most brilliant idea while driving, slicing onions, or taking a bath? Our brains don't work in a linear format that allows us to simply say "I'm going to sit down and be a genius, now." Our brains scatter and link, jumping from subject to idea to dream in a series of hyperlink-style behaviours that makes the internet look like a linear Microsoft Word document in comparison.

I'm not convinced that relegating basic tasks to an automated system would increase creativity as these early architects desired. In fact, I think that the strength of the apocryphal story of Newton and his apple comes not from it being a "true" story in that it tells what actually happened, but a "true" story in that it tells how we actually think: while he was sitting around daydreaming, something happened that triggered a train of thought in Newton's head that led to his particular eureka. To remove the ability to daydream while doing other tasks would seem to also remove the ability of random stimuli to produce the necessary associations that drive our creativity.