Life as an Extreme Sport

From Whence Comes Creativity?

Where does creativity come from?

This has been floating in my head the past few weeks, as I read of the desire to augment humans and to remove rote tasks from our daily lives, leaving us free to be creative and creatively minded. Many of the early thinkers in computer communication technology seem to believe that if we could just remove that 80% of our time spent doing paperwork, our creativity would rapidly expand and fill the void created by delegating the filing of paper and basic research/fact-checking to some sort of automated, computerized task.

I find this idea troubling, not because I enjoy mindless and repetitive tasks, but because while doing those mindless and repetitive tasks I tend to have my best ideas. There is something about doing a project on slight autopilot that seems conducive to creative thought; how many times have you heard someone say they had their most brilliant idea while driving, slicing onions, or taking a bath? Our brains don’t work in a linear format that allows us to simply say “I’m going to sit down and be a genius now.” Our brains are scattered, linking machines that jump from subject to idea to dream in a series of hyperlink-style behaviours that makes the internet look like a linear Microsoft Word document in comparison.

I’m not convinced that relegating basic tasks to an automated system would increase creativity as these early architects desired. In fact, I think the strength of the apocryphal story of Newton and his apple comes not from its being a “true” story in the sense of telling what actually happened, but a “true” story in the sense of telling how we actually think: while he was sitting around daydreaming, something happened that triggered a train of thought in Newton’s head and led to his particular eureka moment. To remove the ability to daydream while doing other tasks would seem to also remove the ability of random stimuli to produce the associations that drive our creativity.

Vision Does Not Require Technology

A large part of the charm of Vannevar Bush’s paper As We May Think is reading a 60-odd-year-old article and identifying the technology he predicted. Polaroid and digital cameras, virtual reality glasses, the TCP/IP protocol, cochlear implants, hard drives, and eBook readers are a sample of the ideas that can be read and extrapolated out to what we have today. (For example, take this passage:

Is it not possible that we may learn to introduce them [sounds into the nerve channels of the deaf] without the present cumbersomeness of first transforming electrical vibrations to mechanical ones, which the human mechanism promptly turns back to the electrical form? With a couple of electrodes on the skull…

It is essentially an abstract of the cochlear implant.)

What really struck me about Bush’s article was not so much the ability to predict technology (science fiction has done that for years), but that it clarified something that has been floating in the back of my head for a while now: technology is always behind ideas. To really illustrate what I mean, I’m going to switch over to a brief history of the microscope and germ theory.

Glass grinding for lenses reached a crucial point of advancement in the late 17th century, and people were able to take the magnifying glass to the next level: the microscope. And as soon as people began looking through the microscope, it became clear that smaller things existed. What were these smaller things? Animalcules? Were they alive? What did they do? Were there things smaller than the flea, pet of early microscopic viewing? Some people began to speculate on this, and advanced a theory that these smaller-than-the-naked-eye animalcules were really the cause of disease, instead of internal putrefaction or devils-as-punishment. But although it was possible to see some things, it wasn’t possible to see down to the level of bacteria. So although the ideas of germ theory and contagion were first proposed in the 1600s, it took another 200 years for them to really catch hold and be advanced.

Why 200 years? Because that’s how long it took to advance optics to the point of being able to see bacteria.

What we see in Vannevar Bush’s article is that ideas can be dreamt up long before the technology is actually in place to make them real. Much like Star Trek’s communicators laid down the path for cell phones some 40 years later, As We May Think laid the tracks for many different technologies to come. Bush was still limited in his vision by the constraints of his time (imagining that large rooms of women and punchcards would manipulate these mega-machines, for example), but much like those early microscopists who saw the first glimmer of possibility in the microscopic eye, he was able to take the limits of his time and extrapolate out to the possibility of the future.

A New Type of Work, An Old Type of Man

The socialization of the worker to the condition of capitalist production entails the social control of physical and mental power on a very broad basis. Education, training, persuasion, the mobilization of certain social sentiments… and psychological propensities… all play a role and one plainly mixed with the formation of dominant ideologies cultivated by the mass media, religious and educational institutions, and the various arms of the state…

It has long been a criticism of our public education system that its function is primarily to create highly socialized factory drones. Following a Fordist model, children are raised in warehouse-like settings that accustom them to factory life. Taking direction from the teacher translates easily into taking direction from a foreman, and the indoctrination of school pride (often played out via support for sports teams) can be seen as conditioning to support the company and instill habits of loyalty.

The problem is, the structure of our public education system was set up during a very modernist period largely dominated by Fordism and scientific management. It was expected, as recently as 30 years ago, that you would work for a single corporation your entire life. You socialized with those you worked with; you worked together, commuted together, attended church and backyard BBQs together. There was a very specific and distinctly modernist taste to life; your work and life were compartmentalized for maximum functionality. But today, the average person will change careers – not just jobs, but careers – something like seven to twelve times. Being a jack of all trades and master of none is becoming not merely a marketable job skill but the marketable job skill. Company loyalty is out the window – you look after your interests and the company looks after its own. Even IBM lays people off now*.

Our work environment, and by extension our social environment, has changed, but our educational system has not changed to keep up. There are certainly small attempts here and there – magnet schools, cooperative learning environments – but for the most part children are being educated in an environment that no longer prepares them for, or even matches, the postmodern working world that awaits them.

It was certainly possible to distribute and de-centralize the working world. Will it be possible to do the same to classroom environments, so that children are actually being prepared for the world they’ll be living in? And, perhaps more importantly, what will a postmodernist educational system entail?

Or, largely to humour Phillip – are we beyond consideration of a postmodernist classroom and on towards a hyperreal or post-postmodern one?

*I actually remember this massive lay-off; several of my parents’ friends were laid off, and I remember their loyalty extending well beyond working there. They had serious conviction that they would be hired back quickly, as soon as things turned around for the company. Some of them refused to look for work, they were so convinced the lay-off was temporary.

If Postmodernists Are Reacting Against Modernity, Are Fundamentalists Postmodern?

No one exactly agrees as to what is meant by the term, except, perhaps, that ‘postmodernism’ represents some kind of reaction to, or departure from, ‘modernism’.
– David Harvey, The Condition of Postmodernity

If modernity/modernism can be defined as a result of the Enlightenment, characterized by a seriousness of scientism and rationality (empiricism), then literary critic Terry Eagleton’s definition of a playful and self-ironizing postmodernism plays very nicely into the idea of postmodernism as a secular reaction to modernity – the flip side to the 1910s–1920s development of fundamentalism as the religious response to modernization. The editors of the PRECIS 6 architectural journal certainly play into this idea of postmodernism “as a legitimate reaction to the ‘monotony’ of universal modernism’s vision of the world” (Harvey 9). If modernism is an embodiment of the scientific revolution, “with the belief in linear progress, absolute truths, the rational planning of ideal social orders, and the standardization of knowledge and production,” and postmodernism rejects this meta-narrative, could we place the fundamentalism movement (restricted primarily to Christianity for this thought-process, as it’s the one that primarily developed within the sphere of modernity) as a sister-movement to postmodernism? This shared dislike of modernity is also seen in the significant mistrust both ways of thinking have for scientific rationalism. (One actually has to wonder if the rise of postmodern thought in the 1970s and 1980s, combined with a huge upswing in fundamentalist Christianity, is what has caused the growing lack of faith in science in our society, and the concurrent rise in strong belief in the Weekly World News and other such tabloid fare and bad science.)

Modernity seems to have done away with any real conception of religion/spirituality; Harvey notes on page 34 that during “the inter-war years there was something desperate about the search for a mythology that could somehow straighten society out in such troubled times.” Harvey goes on to talk about Sorel’s conception of inventing myth (first pointed out in 1908, which is interesting, as 1912ish is when you first hear about the ‘return to fundamentals’ movement); this also seems to be a corollary to what the fundamentalists did. They were searching for and inventing myths out of their literal reading of mythos; Karen Armstrong goes into beautiful detail of the concept of mythos and logos and the fracturing of the two that modernity caused in her book The Battle for God: A History of Fundamentalism.

I’m really interested in this idea of postmodernity being the secular sister of fundamentalism; it would have interesting implications, at least academically if not socially. It would also almost remove postmodernity from being postmodernity (or modernity from itself): while you can argue that Fundamentalism with a capital “F” originated at the turn of the 20th century as a reaction to modernity, there are older fundamentalist movements that are also reactionary, arising out of some need to protest, separate, or distinguish themselves from the dominant culture of the day. If we can link fundamentalism and postmodernity together, could we go back to earlier eras of noted fundamentalism and find corresponding secular reactionary movements against the dominant paradigm? And if we can do that, does it play into Latour’s assertion that we have never been modern?

This is all very interesting, and bears further musings while I read the remainder of Harvey.

The Teaching Conspiracy

So, as I believe I’ve mentioned, I’m functioning as a teacher’s assistant (technically considered a peer facilitator, as I’m an undergrad) for CHID 390. I’m keeping a running log of my thoughts on this, largely because it’s my first time teaching at the university level, and I want to remember the lessons learned for future years of teaching (of which I foresee quite a bit).

I consider myself so very lucky to have Phillip as the professor I’m working with. He’s incredibly supportive and confident in my ability to do this, which is a great relief, especially as my own faith in being able to do “this” is still, for lack of a better word, being earned. I might believe I can do this by the time the quarter is over.

I think the most important lesson I’ve learned from Phillip so far, aside from the incredibly valuable “reading as an extreme sport” lecture, is: don’t worry about the details. Don’t stress to death about preparing for class, because in a discussion situation the class is going to veer into areas you didn’t consider when you were planning. If you sit and overplan and worry about the details, you’ll simply make your life miserable by being focused on a single track, and likely share that misery with the students. Instead, stay loose and flexible. Let the students guide the conversation, because they’re going to talk about what excites them, what confuses them, and what they want to internalize.

This doesn’t mean that the discussion shouldn’t be moderated, or that certain ideas shouldn’t be planned on, only that you need to look for the opportunity in what is being said to bring up these ideas, instead of rigidly forcing the topic. For example, if you think it’s important to talk about gender politics and generalizations in Geertz’ article on the Balinese cockfights, wait until someone mentions something along the lines of “Geertz says that the Balinese do this…” and then ask “do all the Balinese do this?” There are so many hooks in discussions that give you the opportunity to turn the conversation towards the topic you want the class to cover that there is really no reason to do it any other way (at least in a discussion situation – I’m definitely not teaching a lecture class!).

I think another valuable thing Phillip shared with Kanna (my co-conspirator) and me on Monday is to ground yourself before class. Discover your moral center; ask yourself who you are, where you come from, and where you teach from. Who are you and how do you want to portray yourself? My goal in this is to give the students their best class, one where they feel warm and safe and able to explore ideas that they might be afraid of elsewhere – I want to enable everyone to explore the tendrils and wisps of web that these readings create, but also to improve their ability to write, express themselves, and deeply engage with critical texts.

It’s a noble goal, and a good theory, and very challenging to actually practice. I had seven papers to read, and it took me nearly two hours to simply read through them once, then again to correct for grammar and other English style issues. I found myself paralyzed when it came to actually giving feedback, and had to talk to Phillip today before feeling comfortable doing so. After all, who am I to make these comments and suggestions, and to determine a grade? Talking to Phillip helped significantly, largely because he validated my concerns. From that, I actually took away that it’s important to affirm the work the student has done before saying anything constructive – whether it was the most excellent paper in the world or the opposite, this is something that the student has written, and at least at the level I’m teaching at, has put something of themselves into. To not acknowledge that would risk shutting that person down, turning them off creative and critical analysis, and doing to someone what I’ve bitched about other instructors doing to me.

That is not my moral center.

…and in more practical thoughts, oh my god, how do professors do this? It took me 20 minutes per paper to offer constructive feedback and commentary, on top of the two hours spent simply reading. All for seven papers. My respect for teachers and their ability to do this has gone up significantly in just this one week of instructing.

The other upshot to doing the commentary at the last minute is that I was quite literally working up until the start of class, and had no time to get worked up or stressed about how teaching was going to go. This was probably for the best.