Life as an Extreme Sport

Concluding Nostalgia

A running train of thought on the concluding passages of Ceruzzi’s A History of Modern Computing.

Phillip intimates that we’re beyond postmodernism – how does he see it, then? Apple still follows a very modernist conception of business, controlling every aspect of its product and (save for a brief period in the early 1990s) not allowing anyone else to produce finalized hardware models. Or is that the key – that although they control the end product and look very vertical on the surface, they have actually differentiated out and adopted a postmodern strategy of allowing many other companies to make parts that are only assembled into a final Mac-whole at the end of the production cycle?

One could apparently make an argument for Apple following either a vertical or a horizontal market – is this a new post-postmodern world where both models live and function side by side, depending on the benefit offered, or have I missed some inherent structural mark that would clearly send a niche company back to a modernist vertical structure, or plant it firmly in the distributed network of the postmodern?

Reading this book is a trip down my childhood memory lane. It’s so strange to think there are people out there for whom Gopher is just a mascot, Mosaic is something you do with tiles, and BBS makes no sense whatsoever. People for whom acoustic coupler modems, baud, hell – external modems – are just odd and fragmentary parts of the digital past. For me, each ghost brings a fond memory, a smile, and a realization of how it shaped who I am today.

It’s interesting that Ceruzzi asks if we can anticipate “the dark side of networked digital computing” – after all, isn’t that what William Gibson did with Neuromancer, back at the very advent of the personal computer? Isn’t this prediction of the dark side what an entire genre of storytelling has been based on – Neuromancer, Do Androids Dream of Electric Sheep?, Spider and Jeanne Robinson’s neural net addiction books, Neal Stephenson, even the entire cyberpunk movement?

One could even argue that writers like H.G. Wells predicted the chaos and congestion caused by the car and the ensuing social changes, though that would admittedly be a tenuous connection at best. The authors and stories above, however, can hardly be dismissed as tenuous – their inspiration stems directly from advances in technology, and their writing from predictions of the horrors that could come of it.

Intelligence Amplification

Douglas Engelbart (noted in the link as having been strongly influenced by Vannevar Bush, which is quite obvious when you read As We May Think and Augmenting Human Intellect back to back) covers a wide range of ideas in his paper Augmenting Human Intellect. You see Greenblatt’s wonder (If he is a layman, his concept of what provides this sophisticated capability may endow the machine with a mysterious power to sweep information through perceptive and intelligent synthetic devices.), a heavy nod to Bush via predictions of future technology (tablets, cell phones), a host of turtles running through the paper (If we ask ourselves where that intelligence is embodied, we are forced to concede that it is elusively distributed throughout a hierarchy of functional processes – a hierarchy whose foundation extends down into processes below the depth of our comprehension), and a strong thread of the synergism between science fiction and science fact. But what I really wanted to talk about was this:

However, Korzybski and Whorf (among others) have argued that the language we use affects our thinking to a considerable extent. They say that a lack of words for some types of concepts makes it hard to express those concepts, and thus decreases the likelihood that we will learn much about them. If this is so, then once a language has begun to grow and be used, it would seem reasonable to suspect that the language also affects the evolution of the new concepts to be expressed in that language.

Apparently there are counter-arguments to this: e.g., if a concept needs to be used often but its expression is difficult, then the language will evolve to ease the situation. However, the studies of the past decade into what are called “self-organizing” systems seem to be revealing that subtle relationships among its interacting elements can significantly influence the course of evolution of such a system. If this is true, and if language is (as it seems to be) a part of a self-organizing system, then it seems probable that the state of a language at a given time strongly affects its own evolution to a succeeding state.

I wish that Engelbart, as well as Korzybski and Whorf, were able to comment on a recent press release by Columbia University’s Teachers College, which found that the Piraha, an obscure and small Amazonian tribe, have no conception of numbers. Until now, no one has definitively answered Whorf’s basic question of whether people in one culture cannot understand a concept from another because they have no words for it. While it’s debatable whether the new research puts any nails in theoretical coffins, it seems to strongly undercut the opposition to Whorf’s hypothesis and to indicate that language does exert a force on its own evolution; after all, as Engelbart notes, most linguistic changes since Shakespeare’s time have been minor ones in which concepts are forced onto existing words, rather than the coining of new words (turning “google” into a verb being a notable exception). Anyhow, I digress.

What is interesting about the Piraha is that they seem to confirm the neo-Whorfian hypothesis proposed by Engelbart: that the language used by a culture, and its capability for effective intellectual activity, are directly affected during their evolution by the means by which individuals control the external manipulation of symbols. The Piraha do seem to prove that our means of externally manipulating symbols influences how we think and the language we use; after all, it’s the Piraha adults who are unable to change their language enough to grasp a conception of numbers and math – the Piraha children had no problem with either. (Of course, this leads to a host of potential confounding variables, such as diet’s effect on the developing brain and the ability to learn a language later in life – for math is as much a language as any other. It also seems to cast some doubt on the idea that the brain doesn’t code-freeze itself at a certain point in life, but I’m digressing again…)

However, as much as the Piraha could cast support towards a neo-Whorfian hypothesis, one must wonder if they instead undermine the entire foundation of Engelbart’s theory. After all, while the Piraha have made steps in Engelbart’s listed historical progression of the development of our intellectual capabilities, they have done so out of order, covering only two of the four categories listed. The Piraha have obviously mastered concept manipulation – abstracting ideas and situations into general concepts – and manual, external symbol manipulation, which largely gives graphical representation to those concepts. But they have not grasped the second stage (the two above being stages one and three, respectively); the Piraha seem to lack the basic symbol manipulation that would let them differentiate a single sheep from seventy, and they certainly do not have the fourth stage of automated external symbol manipulation, which allows the use of technological devices to rapidly move symbolic data before a user’s eyes.

Ultimately the Piraha are going to be curiosities for those who study linguistic formation, and won’t have a massive impact on the overriding theoretical conceptions behind Engelbart’s H-LAM/T system; however, when any underlying theoretical foundation is shaken by a new discovery, it’s worth analyzing the overarching concept for flaws. In this particular case, the Piraha seem to be a side anomaly that doesn’t take away from Engelbart’s idea of augmenting intelligence; after all, with his neo-Whorfian hypothesis he created a model of symbol manipulation applicable to his particular, Western culture. It’s only when you step outside the cultural loop Engelbart operates in that his conceptions begin to become problematic.

historicise *this*

I’m reading Ivo Kamps’ article New Historicising the New Historicism in preparation for class this afternoon; Kamps is essentially deconstructing new historicism through the filter of the ever-present year of 1968 and the Vietnam War. It’s an interesting take on, and criticism of, both Greenblatt and the field of new historicism, and it offers some good points for me to lecture on. At one point while reading, I came across a quote from Greenblatt that sums up why so many people avoid new historicism, literary theory, and CHID:

Anecdotes are the equivalents in the register of the real of what drew me to the study of literature: the encounter with something I could not stand not understanding, that I could not quite finish with or finish off, that I had to get out of my inner life where it had taken hold.

I had typed this bit out to both Jen and Michael before realizing that yes, actually, that is it. It’s the encounter with something I can’t stand not understanding, that gets inside and takes hold and nags and nudges that I have to struggle with and contemplate and revisit and mull – that’s why I do this, instead of any of the numerous, easier routes I could take. It’s the challenge of understanding that is the lure, the dare, the taunt that keeps me engaged.

From Whence Comes Creativity?

Where does creativity come from?

This has been floating in my head the past few weeks, as I read of the desire to augment humans and to remove rote tasks from our daily lives, leaving us free to be creative and creatively minded. Many of the early thinkers in computer communication technology seem to believe that if we could just remove the 80% of our time spent doing paperwork, our creativity would rapidly expand to fill the void created by delegating the filing of paper and basic research and fact-checking to some sort of automated, computerized process.

I find this idea troubling, not because I enjoy mindless and repetitive tasks, but because it is while doing those mindless and repetitive tasks that I tend to have my best ideas. There is something about doing a project on slight autopilot that seems conducive to creative thought; how many times have you heard someone say they had their most brilliant idea while driving, slicing onions, or taking a bath? Our brains don’t work in a linear format that allows us to simply say “I’m going to sit down and be a genius now.” Our brains scatter and link, jumping from subject to idea to dream in a series of hyperlink-style behaviours that makes the internet look like a linear Microsoft Word document in comparison.

I’m not convinced that relegating basic tasks to an automated system would increase creativity as these early architects desired. In fact, I think the strength of the apocryphal story of Newton and his apple comes not from its being “true” in the sense of telling what actually happened, but “true” in the sense of telling how we actually think: while he was sitting around daydreaming, something triggered a train of thought in Newton’s head that led to his particular eureka moment. To remove the ability to daydream while doing other tasks would seem to also remove the ability of random stimuli to produce the associations that drive our creativity.

Vision Does Not Require Technology

A large part of the charm in Vannevar Bush’s paper As We May Think is reading a 60-odd-year-old article and identifying the technology he predicted. Polaroid and digital cameras, virtual reality glasses, the TCP/IP protocol, cochlear implants, hard drives, and eBook readers are a sample of the ideas that can be read and extrapolated out to what we have today. (For example, take this passage:

Is it not possible that we may learn to introduce them [sounds into the nerve channels of the deaf] without the present cumbersomeness of first transforming electrical vibrations to mechanical ones, which the human mechanism promptly turns back to the electrical form? With a couple of electrodes on the skull…

It is an abstract of cochlear implants.)

What really struck me about Bush’s article was not so much the ability to predict technology, (science fiction has done that for years), but that it clarified something that has been floating in the back of my head for a while now: technology is always behind ideas. To really illustrate what I mean, I’m going to switch over to a brief history of the microscope and germ theory.

Glass grinding for lenses reached a crucial point of advancement in the late 17th century, and people were able to take the magnifying glass to the next level: the microscope. As soon as people began looking through the microscope, it became clear that smaller things existed. What were these smaller things? Animalcules? Were they alive? What did they do? Were there things smaller than the flea, pet of early microscopic viewing? Some people began to speculate, advancing a theory that these smaller-than-the-naked-eye animalcules were really the cause of disease, instead of internal putrefaction or devils-as-punishment. But although it was possible to see some things, it wasn’t possible to see down to the level of bacteria and viruses. So although the ideas of germ theory and contagion were first proposed in the 1600s, it took another 200 years for the idea to really catch hold and be advanced.

Why 200 years? Because that’s roughly how long it took to advance optics to the point of reliably seeing and studying bacteria (viruses would have to wait even longer, for the electron microscope).

What we see in Vannevar Bush’s article is that ideas can be dreamt up long before the technology is actually in place to make them real. Much as Star Trek’s communicators laid down the path for cell phones some 40 years later, As We May Think laid the tracks for many different technologies to come. Bush was still limited in his vision by the constraints of his time (imagining that large rooms of women and punch cards would operate these mega-machines, for example), but much like those early microscopists who saw the first glimmer of possibility in the microscope’s eye, he was able to take the limits of his time and extrapolate out to the possibilities of the future.