Why do software projects fail?


In the Theaetetus, Plato writes that ‘philosophy begins in wonder’.  Here is Jowett’s translation of the passage:

Soc: …I believe that you follow me, Theaetetus; for I suspect that you have thought of these questions before now.

Theaet. Yes, Socrates, and I am amazed when I think of them; by the Gods I am! and I want to know what on earth they mean; and there are times when my head quite swims with the contemplation of them.

Soc. I see, my dear Theaetetus, that Theodorus had a true insight into your nature when he said that you were a philosopher, for wonder is the feeling of a philosopher, and philosophy begins in wonder. He was not a bad genealogist who said that Iris (the messenger of heaven) is the child of Thaumas (wonder).

In the Metaphysics, Aristotle continues the same theme (tr. W. D. Ross):

For it is owing to their wonder {ex archês men ta procheira tôn atopôn thaumasantes} that men both now begin and at first began to philosophize; they wondered originally at the obvious difficulties, then advanced little by little and stated difficulties about the greater matters, e.g. about the phenomena of the moon and those of the sun and of the stars, and about the genesis of the universe. And a man who is puzzled and wonders thinks himself ignorant (whence even the lover of myth is in a sense a lover of Wisdom, for the myth is composed of wonders); therefore since they philosophized in order to escape from ignorance, evidently they were pursuing science in order to know, and not for any utilitarian end.

This wonder tradition in philosophy has two principles. One is that wonder must lead to the articulation of questions, for without questions and dialectic, wonder never goes any further.  The second is that questioning must be non-utilitarian, and that its end must be contemplation, rather than the solution to a practical problem; in other words, questions must be open-ended in order to count as philosophical (i.e., pure scientific) problems.

Thus Aristotle continues, in Book II of the Metaphysics:

The investigation of the truth is in one way hard, in another easy. An indication of this is found in the fact that no one is able to attain the truth adequately, while, on the other hand, we do not collectively fail, but every one says something true about the nature of things, and while individually we contribute little or nothing to the truth, by the union of all a considerable amount is amassed. Therefore, since the truth seems to be like the proverbial door, which no one can fail to hit, in this respect it must be easy, but the fact that we can have a whole truth and not the particular part we aim at shows the difficulty of it.

Against this tradition, Descartes, an extremely practical man, argues in Articles 77 and 78 of Passions of the Soul (tr. Stephen H. Voss):

Furthermore, though it is only the dull and stupid who do not have any constitutional inclination toward Wonder {l’admiration}, this is not to say that those who have the most intelligence are always the most inclined to it.  For it is mainly those who, even though they have a good deal of common sense, still do not have a high opinion of their competence [who are most inclined toward wonder].

And though this passion seems to diminish with use, because the more one encounters rare things one wonders at, the more one routinely ceases wondering at them and comes to think that any that may be presented thereafter will be ordinary, nevertheless, when it is excessive and makes one fix one’s attention solely upon the first image of presented objects without acquiring any other knowledge of them, it leaves behind a habit which disposes the soul to dwell in the same way upon all other presented objects, provided they appear the least bit new to it.  This is what prolongs the sickness of the blindly curious — that is, those who investigate rarities only to wonder at them and not to understand them.  For they gradually become so given to wonder that things of no importance are no less capable of engaging them than those whose investigation is more useful.

Descartes found many ways to break with the Aristotelian tradition that had dominated Western thought for over a millennium, but none, I think, more profound than this dismissal of the wonder tradition.  In this brief passage, he places intelligence {l’esprit} above contemplation as the key trait of philosophizing.

A consequence of this is that the nature of philosophical questioning must also change.  In a Cartesian world, questions must have a practical goal.  Difficult problems can be broken down into their components, and if necessary those must be broken down into their components, until we arrive at a series of simple problems that can be solved easily.

Descartes’ position is only strengthened by what is often called the scandal of philosophy: why, given all this time, has philosophy failed to answer the questions it originally set for itself?

  • Does God exist? 
  • Is there life after death? 
  • Do we possess free will?
  • What is happiness, and is it attainable?
  • What is Justice?
  • What is Knowledge?
  • What is Virtue?

Another way to look at the scandal, however, is not as a problem of lacking answers to these questions, but rather as a problem of having an overabundance of answers.  Philosophy, over the centuries, has answered the question of God’s existence with both a yes and a no.  There are five canonical proofs of God’s existence, as well as a multitude of critical analyses of each of these proofs.  We are told that Occam’s Razor, originally a tool of theological discourse, demands that we reject God’s existence.  At the same time, we are told that Occam’s Razor, the principle that simple answers are preferable to complex answers, itself depends on a rational universe for its justification; for the only thing that can guarantee that the universe is simple and comprehensible, rather than Byzantine in its complexity, is God Himself.

The scandal of philosophy is itself based on a presupposition: that this overabundance of answers, and the lack of definitive answers, is contrary to the purpose of philosophical questioning.  Yet we know of other traditions in which the lack of answers is indeed a central goal of philosophical questioning.

Zen koans are riddles Zen masters give to their students to help them achieve enlightenment.  Students are expected to meditate on their koans for years, until the koan successfully works its effect on them, bringing them in an intuitive flash into the state of satori:

  • Does a dog have Buddha-nature?
  • What is the sound of one hand clapping?
  • What was your original face before you were born?
  • If you meet the Buddha, kill him.

I dabble in the collecting of questions, and with regard to this habit, the observation Descartes makes above about the curious “who investigate rarities only to wonder at them and not to understand them” fits me well.  One of my favorite sets of questions comes from Robert Graves’s The White Goddess, a book which, starting from an analysis of The Romance of Taliesin that makes Frazer’s Golden Bough seem pedestrian by comparison, attempts to unravel the true purpose of poetry and, in the process, answers the following questions:

  • Who cleft the Devil’s foot?
  • What song did the Sirens sing?
  • What name did Achilles use when he hid from the Achaeans in the women’s tent?
  • When did the Fifty Danaids come with their sieves to Britain?
  • What secret was woven into the Gordian Knot?
  • Why did Jehovah create trees and grass before he created the Sun, Moon and stars?
  • Where shall Wisdom be found?

Another set comes from Slavoj Zizek’s Enjoy Your Symptom!, in which Zizek elucidates certain gnomic pronouncements of Jacques Lacan through an analysis of mostly 1950s Hollywood movies:

  • Why does a letter always arrive at its destination?
  • Why is a woman a symptom of man?
  • Why is every act a repetition?
  • Why does the phallus appear?
  • Why are there always two fathers?

Modern life provides its own imponderables:

  • Paper or plastic?
  • Hybrid or Civic?
  • Diet or exercise?
  • Should I invest in my 401K or pay down my college loans?
  • Should I wait to get an HD TV?
  • When shall I pull out of the stock market?

There are no definitive right or wrong answers to these questions.  Rather, how we approach these questions, as well as how we respond to them, contributes to shaping who we are.  In his short work, Existentialism and Human Emotions, Sartre tells an anecdote about a student who once asked him for advice.  The student wanted to leave home and join the French Resistance.  The reasons were clear to him.  The Germans were illegitimate occupiers, and it was the duty of every able-bodied Frenchman to fight against them.  At the same time, the student had a sickly mother who required his assistance, and leaving her would not only break her heart, but he might possibly never see her again.  To leave her would entail sacrificing his filial duties, while not to leave her would entail abandoning his moral duty.  To this student, caught in the grip of a mortal quandary, Sartre offered the unexpected advice: choose!

…[I]n  creating the man that we want to be, there is not a single one of our acts which does not at the same time create an image of man as we think he ought to be.  To choose to be this or that is to affirm at the same time the value of what we choose, because we can never choose evil.  We always choose the good, and nothing can be good for us without being good for all.

But this isn’t the whole truth.  There are also choices we make that appear arbitrary at the time, committed without any thought of ‘man as he ought to be,’ but which turn out to have irreversible consequences upon who we become.  In the film The Goalie’s Anxiety at the Penalty Kick, Wim Wenders follows a goalie who is kicked off his soccer team after failing to defend against a penalty kick that costs his team the game.  The goalie proceeds aimlessly through a series of pointless and reprehensible acts.  I once asked a soccer-playing friend if the circumstances of the penalty kick are as described in the movie, and he said yes.  Before the penalty kick, against a skilled opponent, the goalie has no idea which way the ball will go.  He stands in the middle between the two posts and must choose in which direction he will leap, without enough information to determine whether his choice is the right one or the wrong one.  The only thing he knows is that if he does not leap, he will be certain to fail.

All these theoretical questions, and the survey of the theory of questioning in general, are intended to provide the background necessary for answering the very practical question:  Why do software projects fail?

Fred Brooks’s succinct answer to this is: software projects fail because software projects are very hard to do.  In other words, we are asking the wrong question.  A better way to phrase the problem is “Why are we still surprised when software projects fail?”

This question might be extended to other fields of endeavor:

  • Why are term papers turned in late?
  • Why do we fail to pay our bills on time?
  • Why do we lie when we are asked over the phone, “Were you sleeping?”

In the discipline of software development, it is often found as one item in a larger list of software koans:

  • Why do software projects fail?
  • Why does adding additional developers to a project slow the project down?
  • Why does planning extra time to complete a task result in no additional work being done?
  • Why do developers always underestimate?
  • Why are expectations always too high?
  • Why does no one ever read documentation?
  • Why do we write documentation that no one ever reads?
  • Why is the first release of any code full of bugs?

Here we have an overabundance of questions that can all be answered in the same way.  Kant phrased the answer in this way:

Out of the crooked timber of humanity, no straight thing was ever made.

Aristotle phrases it thus in Metaphysics II:

Perhaps, too, as difficulties are of two kinds, the cause of the present difficulty is not in the facts but in us.

Technical Interview Questions


Interviewing has been on my mind of late, as my company is in the middle of doing quite a bit of hiring.  Technical interviews for software developers are typically an odd affair, performed by technicians who aren’t quite sure of what they are doing upon unsuspecting job candidates who aren’t quite sure of what they are in for.

Part of the difficulty is the gap between hiring managers, who are cognizant of the fact that they are not in a position to evaluate the skills of a given candidate, and the in-house developers, who are unsure of what they are supposed to be looking for.  Is the goal of a technical interview to verify that the interviewee has the skills she claims to possess on her resume?  Is it to rate the candidate against some ideal notion of what a software developer ought to be?  Is it to connect with a developer on a personal level, thus assuring through a brief encounter that the candidate is someone one will want to work with for the next several years?  Or is it merely to pass the time, in the middle of more pressing work, in order to have a little sport and give job candidates a hard time?

It would, of course, help if the hiring manager were able to give detailed information about the kind of job that is being filled, the job level, perhaps the pay range — but more often than not, all he has to work with is an authorization to hire “a developer”, and he has been tasked with finding the best that can be got within limiting financial constraints.  So again, the onus is upon the developer-cum-interviewer to determine his own goals for this hiring adventure.

Imagine yourself as the technician who has suddenly been handed a copy of a resume and told that there is a candidate waiting in the meeting room.  As you approach the door of the meeting room, hanging slightly ajar, you consider what you will ask of him.  You gain a few more minutes to think this over as you shake hands with the candidate, exchange pleasantries, apologize for not having had time to review his resume and look blankly down at the sheet of buzzwords and dates on the table before you.

Had you more time to prepare in advance, you might have gone to sites such as Ayende’s blog, or techinterviews.com, and picked up some good questions to ask.  On the other hand, the value of these questions is debatable: it is not clear that they are a good indicator that the interviewee has actually been doing anything at his last job.  He may have been spending his time browsing these very same sites and preparing his answers by rote.  Nor is it clear that understanding these high-level concepts will make the interviewee good in the role he will eventually be placed in, if hired.

Is understanding how to compile a .NET application with a command line tool necessarily useful in every (or any) real world business development task?  Does knowing how to talk about the observer pattern make him a good candidate for work that does not really involve developing monumental code libraries?  On the other hand, such questions are perhaps a good gauge of the candidate’s level of preparation for the interview, and can be as useful as checking the candidate’s shoes for a good shine to determine how serious he is about the job and what level of commitment he has put into getting ready for it.  And someone who prepares well for an interview will, arguably, also prepare well for his daily job.

You might also have gone to Joel Spolsky’s blog and read The Guerrilla Guide To Interviewing in order to discover that what you are looking for is someone who is smart and gets things done.  Which, come to think of it, is especially helpful if you are looking for superstar developers and have the money to pay them whatever they want.  With such a standard, you can easily distinguish between the people who make the cut and all the other maybe candidates.  On the other hand, in the real world, this may not be an option, and your objective may simply be to distinguish between the better maybe candidates and the less-good maybe candidates.  This task is made all the harder since you are interviewing someone who is already a bit nervous and, maybe, has not even been told, yet, what he will be doing in the job (look through computerjobs.com sometime to see how remarkably vague most job descriptions are) for which he is interviewing.

There are many guidelines available online giving advice on how to identify brilliant developers (but is this really such a difficult task?).  What there is a dearth of is information on how to identify merely good developers — the kind that the rest of us work with on a daily basis and may even be ourselves.  Since this is the real purpose of 99.9% of all technical interviews, to find a merely good candidate, following online advice about how to find great candidates may not be particularly useful, and in fact may even be counter-productive, inspiring a sense of inferiority and persecution in a job candidate that is really undeserved and probably unfair.

Perhaps a better guideline for finding candidates can be found not in how we ought to conduct interviews in an ideal world (with unlimited budgets and unlimited expectations), but in how technical interviews are actually conducted in the real world.  Having done my share of interviewing, watching others interview, and occasionally being interviewed myself, it seems to me that in the wild, technical interviews can be broken down into three distinct categories.

Let me, then, impart my experience, so that you may find the interview technique most appropriate to your needs, if you are on that particular side of the table, or, conversely, so that you may better envision what you are in for, should you happen to be on the other side of the table.  There are three typical styles of technical interviewing which I like to call: 1) Jump Through My Hoops, 2) Guess What I’m Thinking, and 3) Knock This Chip Off My Shoulder.

 

Jump Through My Hoops


Jump Through My Hoops is, of course, a technique popularized by Microsoft and later adopted by companies such as Google.  In its classical form, it requires an interviewer to throw his Birkenstock-shod feet over the interview table and fire away with questions that have nothing remotely to do with programming.  The questions often involve such mundane objects as manhole covers, toothbrushes and car transmissions, but you should feel free to add to this bestiary more philosophical archetypes such as married bachelors, morning stars and evening stars, Cicero and Tully, the author of Waverly, and other priceless gems of the analytic school.  The objective, of course, is not to hire a good car mechanic or sanitation worker, but rather to hire someone with the innate skills to be a good car mechanic or sanitation worker should his IT role ever require it.

Over the years, technical interviewers have expanded on the JTMH with tasks such as writing out classes with pencil and paper, answering technical trivia, designing relational databases on a whiteboard, and plotting out a UML diagram with crayons.  In general, the more accessories required to complete this type of interview, the better.

Some variations of JTMH rise to the level of Jump Through My Fiery Hoops.  One version I was involved with required calling the candidate the day before the job interview and telling him to write a complete software application to specification, which would then be picked apart by a team of architects at the interview itself.  It was a bit of overkill for an entry-level position, but we learned what we needed to out of it.  The most famous JTMFH is what Joel Spolsky calls The Impossible Question, which entails asking a question with no correct answer, and requires the interviewer to frown and shake his head whenever the candidate makes any attempt to answer the question.  This particular test is also sometimes called the Kobayashi Maru, and is purportedly a good indicator of how a candidate will perform under pressure.

 

Guess What I’m Thinking


Guess What I’m Thinking, or GWIT, is a more open ended interview technique.  It is often adopted by interviewers who find JTMH a bit too constricting.  The goal in GWIT is to get through an interview with the minimum amount of preparation possible.  It often takes the form, “I’m working on such-and-such a project and have run into such-and-such a problem.  How would you solve it?”  The technique is most effective when the job candidate is given very little information about either the purpose of the project or the nature of the problem.  This establishes for the interviewer a clear standard for a successful interview: if the candidate can solve in a few minutes a problem that the interviewer has been working on for weeks, then she obviously deserves the job.

A variation of GWIT which I have participated in requires showing a candidate a long printout and asking her, “What’s wrong with this code?”  The trick is to give the candidate the impression that there are many right answers to this question, when in fact there is only one, the one the interviewer is thinking of.  As the candidate attempts to triangulate on the problem with hopeful answers such as “This code won’t compile,” “There is a bracket missing here,” “There are no code comments,” and “Is there a page missing?” the interviewer can sagely reply “No, that’s not what I’m looking for,” “That’s not what I’m thinking of,” “That’s not what I’m thinking of, either,” “Now you’re really cold,” and so on.

This particular test is purportedly a good indicator of how a candidate will perform under pressure.

 

Knock This Chip Off My Shoulder


KTCOMS is an interviewing style often adopted by interviewers who not only lack the time and desire to prepare for the interview, but do not in fact have any time for the interview itself.  As the job candidate, you start off in a position of wasting the interviewer’s time, and must improve his opinion of you from there.

The interviewer is usually under a lot of pressure when he enters the interview room.  He has been working 80 hours a week to meet an impossible deadline his manager has set for him.  He is emotionally in a state of both intense technical competence over a narrow area, due to his life-less existence for the past few months, as well as great insecurity, as he has not been able to satisfy his management’s demands. 

While this interview technique superficially resembles JTMFH, it is actually quite distinct in that, while JTMFH seeks to match the candidate to abstract notions about what a developer ought to know, KTCOMS is grounded in what the interviewer already knows.  His interview style is, consequently, nothing less than a Nietzschean struggle for self-affirmation.  The interviewee is put in the position of having to prove herself superior to the interviewer or else suffer the consequences.

Should you, as the interviewer, want to prepare for KTCOMS, the best thing to do is to start looking up answers to obscure problems that you have encountered in your recent project, and which no normal developer would ever encounter.  These types of questions, along with an attitude that the job candidate should obviously already know the answers, are sure to fluster the interviewee.

As the interviewee, your only goal is to submit to the superiority of the interviewer.  “Lie down” as soon as possible.  Should you feel any umbrage, or desire to actually compete with the interviewer on his own turf, you must crush this instinct.  Once you have submitted to the interviewer (in the wild, dogs generally accomplish this by lying down on the floor with their necks exposed, and the alpha male accepts the submissive gesture by laying his paw upon the submissive animal), he will do one of two things: either he will accept your acquiescence, or he will continue to savage you mercilessly until someone comes in to pull him away.

This particular test is purportedly a good indicator of how a candidate will perform under pressure.

 

Conclusion


I hope you have found this survey of common interviewing techniques helpful.  While I have presented them as distinct styles of interviewing, this should certainly not discourage you from mixing-and-matching them as needed for your particular interview scenario.  The schematism I presented is not intended as prescriptive advice, but merely as a taxonomy of what is already to be found in most IT environments, from which you may draw as you require.  You may, in fact, already be practicing some of these techniques without even realizing it.

Jabberwocky

 

Download SAPISophiaDemo.zip – 2,867.5 KB

 

Following on the tail of the project I have been working on for the past month, a chatterbox (also called a chatbot) with speech recognition and text-to-speech functionality, I came across the following excerpted article in The Economist, available here if you happen to be a subscriber, and here if you are not:

 

Chatbots have already been used by some companies to provide customer support online via typed conversations. Their understanding of natural language is somewhat limited, but they can answer basic queries. Mr Carpenter wants to combine the flexibility of chatbots with the voice-driven “interactive voice-response” systems used in many call centres to create a chatbot that can hold spoken conversations with callers, at least within a limited field of expertise such as car insurance.

This is an ambitious goal, but Mr Carpenter has the right credentials: he is the winner of the two most recent Loebner prizes, awarded in an annual competition in which human judges try to distinguish between other humans and chatbots in a series of typed conversations. His chatbot, called Jabberwacky, has been trained by analysing over 10m typed conversations held online with visitors to its website (see jabberwacky.com). But for a chatbot to pass itself off as a human agent, more than ten times this number of conversations will be needed, says Mr Carpenter. And where better to get a large volume of conversations to analyse than from a call centre?

Mr Carpenter is now working with a large Japanese call-centre company to develop a chatbot operator. Initially he is using transcripts of conversations to train his software, but once it is able to handle queries reliably, he plans to add speech-recognition and speech-synthesis systems to handle the input and output. Since call-centre conversations tend to be about very specific subjects, this is a far less daunting task than creating a system able to hold arbitrary conversations.

 

Jabberwacky is a slightly different beast than the AIML infrastructure I used in my project.  Jabberwacky is a heuristics-based technology, whereas AIML is a design-based one that requires somebody to actually anticipate user interactions and try to script them.
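The difference is easiest to see in a sample AIML category.  Everything an AIML bot can say must be scripted in advance as pattern/template pairs; the snippet below is a minimal illustrative example of my own, not one taken from Dr. Wallace’s actual rule sets:

```xml
<aiml version="1.0">
  <category>
    <pattern>WHY DO SOFTWARE PROJECTS FAIL</pattern>
    <template>Because software projects are very hard to do.</template>
  </category>
  <category>
    <!-- srai redirects a matched input to another category's pattern -->
    <pattern>WHY DO * PROJECTS FAIL</pattern>
    <template><srai>WHY DO SOFTWARE PROJECTS FAIL</srai></template>
  </category>
</aiml>
```

A heuristics-based system like Jabberwacky, by contrast, has no such hand-written rules; it mines its replies from the millions of conversations it has already held.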

All the same, it is a pleasant experience to find that one is serendipitously au courant, when one’s intent was to be merely affably retro.

SophiaBot: What I’ve been working on for the past month…

I have been busy in my basement constructing a robot with which I can have conversations and play games.  Except that the robot is more of a program, and I didn’t build the whole thing up from scratch, but instead cobbled together pieces that other people have created.  I took an Eliza-style interpreter written by Nicholas H. Tollervey (this is the conversation part) along with some scripted dialogs by Dr. Richard S. Wallace and threw it together with a Z-machine program written by Jason Follas, which allows my bot to play old Infocom games like Zork and The Hitchhiker’s Guide to the Galaxy.  I then wrapped these up in a simple workflow and added some new Vista\.NET 3.0 speech recognition and speech synthesis code so the robot can understand me.

I wrote an article about it for CodeProject, a very nice resource that allows developers from around the world to share their code and network.  The site requires registration to download code however, so if you want to play with the demo or look at the source code, you can also download them from this site.

Mr. Tollervey has a succinct article about the relationship between chatterboxes and John Searle’s Chinese Room problem, which relieves me of the responsibility of discussing the same.

Instead, I’ll just add some quick instructions:

 

The application is made up of a text output screen, a text entry field, and a default enter button. The initial look and feel is that of an IBM XT theme (the first computer I ever played on). This can be changed using voice commands, which I will cover later. There are three menus initially available. The File menu allows the user to save a log of the conversation as a text file. The Select Voice menu allows the user to select from any of the synthetic voices installed on her machine. Vista initially comes with “Anna”. Windows XP comes with “Sam”. Other XP voices are available depending on which versions of Office have been installed over the lifetime of that particular instance of the OS. If the user is running Vista, then the Speech menu will allow him to toggle speech synthesis, dictation, and the context-free grammars. By doing so, the user will have the ability to speak to the application, as well as have the application speak back to him. If the user is running XP, then only speech synthesis is available, since some of the features provided by .NET 3.0 and consumed by this application do not work on XP.

The Appearance menu will let you change the look and feel of the text screen.  I’ve also added some pre-made themes at the bottom of the Appearance menu.  If, after chatting with SophiaBot for a while, you want to play a game, just type or say “Play game.”  SophiaBot will present you with a list of the games available.  (You can add more, actually, simply by dropping additional game files you find on the internet into the Program Files\Imaginative Universal\SophiaBot\Game Data\DATA folder.  Jason’s Z-Machine implementation plays games that use version 3 and below of the game engine; I’m looking (rather lazily) into how to support later versions.  You can go here to download more Zork-type games.)  During a game, type or say “Quit” to end your session.  “Save” and “Restore” keep track of your current position in the game, so you can come back later and pick up where you left off.

Speech recognition in Vista has two modes: dictation and context-free recognition. Dictation uses context, that is, an analysis of preceding words and words following a given target of speech recognition, in order to determine what word was intended by the speaker. Context-free speech recognition, by way of contrast, uses exact matches and some simple patterns in order to determine if certain words or phrases have been uttered. This makes context-free recognition particularly suited to command and control scenarios, while dictation is particularly suited to situations where we are simply attempting to translate the user’s utterances into text.
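The two modes correspond to two grammar types in the System.Speech managed APIs that shipped with .NET 3.0 (a reference to System.Speech.dll is required).  The class and method names below are the real ones, but the snippet is a minimal sketch rather than an excerpt from SophiaBot itself:

```csharp
using System;
using System.Speech.Recognition;

class RecognitionModes
{
    static void Main()
    {
        using (var recognizer = new SpeechRecognitionEngine())
        {
            recognizer.SetInputToDefaultAudioDevice();

            // Dictation mode: a free-form language model that uses the
            // surrounding words to decide what the speaker intended.
            recognizer.LoadGrammar(new DictationGrammar());

            // Context-free mode: an explicit list of command phrases; the
            // engine only has to decide among these exact utterances.
            var commands = new Choices("LIST COLORS", "LIST GAMES", "LIST FONTS");
            recognizer.LoadGrammar(new Grammar(new GrammarBuilder(commands)));

            recognizer.SpeechRecognized += (s, e) =>
                Console.WriteLine("Heard: " + e.Result.Text);

            recognizer.RecognizeAsync(RecognizeMode.Multiple);
            Console.ReadLine(); // keep listening until Enter is pressed
        }
    }
}
```

Because the context-free grammar constrains the search space so drastically, it tends to be far more reliable than dictation for command-and-control scenarios, which is exactly the trade-off described above.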

You should begin by trying to start up a conversation with Sophia using the textbox, just to see how it works, as well as her limitations as a conversationalist. Sophia uses certain tricks to appear more lifelike. She throws out random typos, for one thing. She also is a bit slower than a computer should really be. This is because one of the things that distinguish computers from people is the way they process information — computers do it quickly, and people do it at a more leisurely pace. By typing slowly, Sophia helps the user maintain his suspension of disbelief. Finally, if a text-to-speech engine is installed on your computer, Sophia reads along as she types out her responses. I’m not certain why this is effective, but it is how computer terminals are shown to communicate in the movies, and it seems to work well here, also. I will go over how this illusion is created below.
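One way to produce such a typing illusion can be sketched as follows; this is my own minimal version of the trick, offered only to make the idea concrete, and the article’s actual implementation may differ:

```csharp
using System;
using System.Threading;

static class TypingIllusion
{
    static readonly Random rand = new Random();

    // Emit a response one character at a time, pausing between keystrokes
    // and occasionally "mistyping" a letter before backspacing over it.
    public static void TypeOut(string response)
    {
        foreach (char c in response)
        {
            if (char.IsLetter(c) && rand.Next(20) == 0) // occasional typo
            {
                Console.Write((char)('a' + rand.Next(26)));
                Thread.Sleep(250);
                Console.Write("\b \b");                 // correct the typo
            }
            Console.Write(c);
            Thread.Sleep(rand.Next(40, 120));           // human typing pace
        }
        Console.WriteLine();
    }
}
```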

In Command\AIML\Game Lexicon mode, the application generates several grammar rules that help direct speech recognition toward certain expected results. Be forewarned: initially loading the AIML grammars takes about two minutes and occurs in the background. You can continue to type conversations with Sophia until the speech recognition engine has finished loading the grammars and speech recognition is available. Using the command grammar, the user can make the computer do the following things: LIST COLORS, LIST GAMES, LIST FONTS, CHANGE FONT TO…, CHANGE FONT COLOR TO…, CHANGE BACKGROUND COLOR TO…. Besides the IBM XT color scheme, a black papyrus font on a linen background also looks very nice. To see a complete list of keywords used by the text-adventure game you have chosen, say “LIST GAME KEYWORDS.” When a game is first selected, a new set of rules is created from the two-word combinations of the keywords recognized by the game, which helps speech recognition by narrowing down the total number of phrases it must listen for.
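The rule generation might be sketched like this: every ordered pair of distinct game keywords becomes a candidate phrase in a Choices object, which is then wrapped in a Grammar.  BuildGameGrammar is a hypothetical helper for illustration, not the actual SophiaBot implementation.

```csharp
// Sketch: build a narrow command grammar from two-word combinations
// of a game's keywords. Hypothetical helper, not SophiaBot's code.
using System.Collections.Generic;
using System.Speech.Recognition;

static Grammar BuildGameGrammar(IList<string> keywords)
{
    Choices phrases = new Choices();
    foreach (string first in keywords)
    {
        foreach (string second in keywords)
        {
            if (first != second)
            {
                // e.g. "open door", "take lamp"
                phrases.Add(first + " " + second);
            }
        }
    }
    return new Grammar(new GrammarBuilder(phrases));
}
```

A list of n keywords yields n × (n − 1) candidate phrases, a far smaller search space than free dictation.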

In dictation mode, the underlying speech engine simply converts your speech into words and has the core SophiaBot code process it in the same manner that it processes text that is typed in. Dictation mode is sometimes better than context-free mode for non-game speech recognition, depending on how well the speech recognition engine installed on your OS has been trained to understand your speech patterns. Context-free mode is typically better for game mode. Command and control only works in context-free mode.

Speech Recognition And Synthesis Managed APIs in Windows Vista: Part III

Voice command technology, as exemplified in Part II, is probably the most useful and easiest to implement aspect of the speech recognition functionality provided by Vista.  In a few days of work, any current application can be enabled to use it, and the potential for streamlining workflow and making it more efficient is truly breathtaking.  The cool factor, of course, is also very high.

Having grown up watching Star Trek reruns, however, I can’t help but feel that the dictation functionality is much more interesting than the voice command functionality.  Computers are meant to be talked to and told what to do, as in that venerable TV series, not cajoled into doing tricks for us based on finger motions over a typewriter.  My long-term goal is to be able to code by talking into my IDE in order to build UML diagrams and then, at a word, turn that into an application.  What a brave new world that will be.  Toward that end, the SR managed API provides the DictationGrammar class.

Whereas the Grammar class works as a gatekeeper, restricting the phrases that get through to the speech recognized handler down to a select set of rules, the DictationGrammar class, by default, kicks out the jams and lets all phrases through to the recognized handler.

In order to make Speechpad a dictation application, we will add the default DictationGrammar object to the list of grammars used by our speech recognition engine.  We will also add a toggle menu item to turn dictation on and off.  Finally, we will alter the SpeechToAction() method to insert any phrases that are not voice commands into the current Speechpad document as text.  Create a local instance of DictationGrammar for our Main form, and then instantiate it in the Main constructor.  Your code should look like this:

        #region Local Members

        private SpeechSynthesizer synthesizer = null;
        private string selectedVoice = string.Empty;
        private SpeechRecognitionEngine recognizer = null;
        private DictationGrammar dictationGrammar = null;

        #endregion
        
        public Main()
        {
            InitializeComponent();
            synthesizer = new SpeechSynthesizer();
            LoadSelectVoiceMenu();
            recognizer = new SpeechRecognitionEngine();
            InitializeSpeechRecognitionEngine();
            dictationGrammar = new DictationGrammar();
        }
        

Create a new menu item under the Speech menu and label it “Take Dictation”.  Name it takeDictationMenuItem for convenience. Add a handler for the click event of the new menu item, and stub out TurnDictationOn() and TurnDictationOff() methods.  TurnDictationOn() works by loading the local dictationGrammar object into the speech recognition engine. It also needs to turn speech recognition on if it is currently off, since dictation will not work if the speech recognition engine is disabled. TurnDictationOff() simply removes the local dictationGrammar object from the speech recognition engine’s list of grammars.

		
        private void takeDictationMenuItem_Click(object sender, EventArgs e)
        {
            if (this.takeDictationMenuItem.Checked)
            {
                TurnDictationOff();
            }
            else
            {
                TurnDictationOn();
            }
        }

        private void TurnDictationOn()
        {
            if (!speechRecognitionMenuItem.Checked)
            {
                TurnSpeechRecognitionOn();
            }
            recognizer.LoadGrammar(dictationGrammar);
            takeDictationMenuItem.Checked = true;
        }

        private void TurnDictationOff()
        {
            if (dictationGrammar != null)
            {
                recognizer.UnloadGrammar(dictationGrammar);
            }
            takeDictationMenuItem.Checked = false;
        }
        

For an extra touch of elegance, alter the TurnSpeechRecognitionOff() method by adding a line of code to turn dictation off when speech recognition is disabled:

        TurnDictationOff();
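In context, the revised method might look something like this.  The body of TurnSpeechRecognitionOff() comes from Part II and isn’t reproduced in this post, so the last two lines are assumptions about what that method does:

```csharp
private void TurnSpeechRecognitionOff()
{
    TurnDictationOff();   // the new line: dictation depends on recognition
    recognizer.RecognizeAsyncStop();           // assumed from Part II
    speechRecognitionMenuItem.Checked = false; // assumed from Part II
}
```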

Finally, we need to update the SpeechToAction() method so it will insert any text that is not a voice command into the current Speechpad document.  Use the default case of the switch statement to call the InsertText() method of the current document.

        
        private void SpeechToAction(string text)
        {
            TextDocument document = ActiveMdiChild as TextDocument;
            if (document != null)
            {
                DetermineText(text);
                switch (text)
                {
                    case "cut":
                        document.Cut();
                        break;
                    case "copy":
                        document.Copy();
                        break;
                    case "paste":
                        document.Paste();
                        break;
                    case "delete":
                        document.Delete();
                        break;
                    default:
                        document.InsertText(text);
                        break;
                }
            }
        }

        

With that, we complete the speech recognition functionality for Speechpad. Now try it out. Open a new Speechpad document and type “Hello World.”  Turn on speech recognition.  Select “Hello” and say “delete.”  Turn on dictation.  Say “brave new.”

This tutorial has demonstrated the essential code required to use speech synthesis, voice commands, and dictation in your .Net 2.0 Vista applications.  It can serve as the basis for building speech recognition tools that take advantage of default as well as custom grammar rules to build advanced application interfaces.  Besides the strange compatibility issues between Vista and Visual Studio, at the moment the greatest hurdle to using the Vista managed speech recognition API is the remarkable dearth of documentation and samples.  This tutorial is intended to help alleviate that problem by providing a hands-on introduction to this fascinating technology.

Methodology and Methodism


This is simply a side-by-side pastiche of the history of Extreme Programming and the history of Methodism.  It is less a commentary or argument than an experiment to see if I can format this correctly in HTML.  The history of XP is drawn from Wikipedia.  The history of Methodism is drawn from John Wesley’s A Short History of Methodism.

Software development in the 1990s was shaped by two major influences: internally, object-oriented programming replaced procedural programming as the programming paradigm favored by some in the industry; externally, the rise of the Internet and the dot-com boom emphasized speed-to-market and company-growth as competitive business factors. Rapidly-changing requirements demanded shorter product life-cycles, and were often incompatible with traditional methods of software development.


The Chrysler Comprehensive Compensation project was started in order to determine the best way to use object technologies, using the payroll systems at Chrysler as the object of research, with Smalltalk as the language and GemStone as the persistence layer. They brought in Kent Beck, a prominent Smalltalk practitioner, to do performance tuning on the system, but his role expanded as he noted several issues they were having with their development process. He took this opportunity to propose and implement some changes in their practices based on his work with his frequent collaborator, Ward Cunningham.


The first time I was asked to lead a team, I asked them to do a little bit of the things I thought were sensible, like testing and reviews. The second time there was a lot more on the line. I thought, “Damn the torpedoes, at least this will make a good article,” [and] asked the team to crank up all the knobs to 10 on the things I thought were essential and leave out everything else. —Kent Beck


Beck invited Ron Jeffries to the project to help develop and refine these methods. Jeffries thereafter acted as a kind of coach to instill the practices as habits in the C3 team. Information about the principles and practices behind XP was disseminated to the wider world through discussions on the original Wiki, Cunningham’s WikiWikiWeb. Various contributors discussed and expanded upon the ideas, and some spin-off methodologies resulted (see agile software development). XP concepts have also been explained, since around 1999, using a hypertext system map on the XP website at “www.extremeprogramming.org”.


Beck edited a series of books on XP, beginning with his own Extreme Programming Explained (1999, ISBN 0-201-61641-6), spreading his ideas to a much larger, yet very receptive, audience. Authors in the series went through various aspects attending XP and its practices; there was even a book critical of the practices. XP created quite a buzz in the late 1990s and early 2000s, seeing adoption in a number of environments radically different from its origins.


Extreme Programming Explained describes Extreme Programming as being:



  • An attempt to reconcile humanity and productivity
  • A mechanism for social change
  • A path to improvement
  • A style of development
  • A software development discipline

The advocates of XP argue that the only truly important product of the system development process is code (a concept to which they give a somewhat broader definition than might be given by others). Without code you have nothing.

Coding can also help to communicate thoughts about programming problems. A programmer dealing with a complex programming problem and finding it hard to explain the solution to fellow programmers might code it and use the code to demonstrate what he or she means. Code, say the exponents of this position, is always clear and concise and cannot be interpreted in more than one way. Other programmers can give feedback on this code by also coding their thoughts.

The high discipline required by the original practices often went by the wayside, causing certain practices to be deprecated or left undone at individual sites.


Agile development practices have not stood still, and XP is still evolving, assimilating more lessons from experiences in the field. In the second edition of Extreme Programming Explained, Beck added more values and practices and differentiated between primary and corollary practices.


In November, 1729, four young gentlemen of Oxford — Mr. John Wesley, Fellow of Lincoln College; Mr. Charles Wesley, Student of Christ Church; Mr. Morgan, Commoner of Christ Church; and Mr. Kirkham, of Merton College — began to spend some evenings in a week together, in reading, chiefly, the Greek Testament. The next year two or three of Mr. John Wesley’s pupils desired the liberty of meeting with them; and afterwards one of Mr. Charles Wesley’s pupils. It was in 1732, that Mr. Ingham, of Queen’s College, and Mr. Broughton, of Exeter, were added to their number. To these, in April, was joined Mr. Clayton, of Brazen-nose, with two or three of his pupils. About the same time Mr. James Hervey was permitted to meet with them; and in 1735, Mr. Whitefield.


The exact regularity of their lives, as well as studies, occasioned a young gentleman of Christ Church to say, “Here is a new set of Methodists sprung up;” alluding to some ancient Physicians who were so called. The name was new and quaint; so it took immediately, and the Methodists were known all over the University.


They were all zealous members of the Church of England; not only tenacious of all her doctrines, so far as they knew them, but of all her discipline, to the minutest circumstance. They were likewise zealous observers of all the University Statutes, and that for conscience’ sake. But they observed neither these nor anything else any further than they conceived it was bound upon them by their one book, the Bible; it being their one desire and design to be downright Bible-Christians; taking the Bible, as interpreted by the primitive Church and our own, for their whole and sole rule.


The one charge then advanced against them was, that they were “righteous overmuch;” that they were abundantly too scrupulous, and too strict, carrying things to great extremes: In particular, that they laid too much stress upon the Rubrics and Canons of the Church; that they insisted too much on observing the Statutes of the University; and that they took the Scriptures in too strict and literal a sense; so that if they were right, few indeed would be saved.


In October, 1735, Mr. John and Charles Wesley, and Mr. Ingham, left England, with a design to go and preach to the Indians in Georgia: But the rest of the gentlemen continued to meet, till one and another was ordained and left the University. By which means, in about two years’ time, scarce any of them were left.


In February, 1738, Mr. Whitefield went over to Georgia with a design to assist Mr. John Wesley; but Mr. Wesley just then returned to England. Soon after he had a meeting with Messrs, Ingham, Stonehouse, Hall, Hutchings, Kinchin, and a few other Clergymen, who all appeared to be of one heart, as well as of one judgment, resolved to be Bible-Christians at all events; and, wherever they were, to preach with all their might plain, old, Bible Christianity.


They were hitherto perfectly regular in all things, and zealously attached to the Church of England. Meantime, they began to be convinced, that “by grace we are saved through faith;” that justification by faith was the doctrine of the Church, as well as of the Bible. As soon as they believed, they spake; salvation by faith being now their standing topic. Indeed this implied three things: (1) That men are all, by nature, “dead in sin,” and, consequently, “children of wrath.” (2) That they are “justified by faith alone.” (3) That faith produces inward and outward holiness: And these points they insisted on day and night. In a short time they became popular Preachers. The congregations were large wherever they preached. The former name was then revived; and all these gentlemen, with their followers, were entitled Methodists.


In March, 1741, Mr. Whitefield, being returned to England, entirely separated from Mr. Wesley and his friends, because he did not hold the decrees. Here was the first breach, which warm men persuaded Mr. Whitefield to make merely for a difference of opinion. Those, indeed, who believed universal redemption had no desire at all to separate; but those who held particular redemption would not hear of any accommodation, being determined to have no fellowship with men that “were in so dangerous errors.” So there were now two sorts of Methodists, so called; those for particular, and those for general, redemption.

Methodology and its Discontents


A difficult aspect of software programming is that it is always changing.  This is no doubt true in other fields, also, but there is no tenure system in software development.  Whereas in academe, it is quite plausible to reach a certain point in your discipline and then kick the ladder out behind you, as Richard Rorty was at one point accused of doing, this isn’t so in technology.  In technology, it is the young who set the pace, and the old who must keep up.


The rapid changes in technology, with which the weary developer must constantly attempt to stay current, can be broken down into two types: advances in tools and advances in methodologies.  While there is always a certain amount of fashion determining which tools get used — for instance, which is the better language, VB.Net or C#? — for the most part development tools are judged based upon their effectiveness. This is not necessarily true of the methodologies of software development, which is odd, since one would suppose that the whole point of a method is that it is a way of guaranteeing certain results: use method A, and B (fame, fortune, happiness) will follow.


There is even something of a backlash against methodism (or, more specifically, certain kinds of methodism) being led, interestingly enough, by tool builders.  For instance, Rocky Lhotka, the creator of the CSLA framework, says:



I am a strong believer in responsibility-driven, behavioral, CRC OO design – and that is very compatible with the concepts of Agile. So how can I believe so firmly in organic OO design, and yet find Agile/XP/SCRUM to be so…wrong…?


I think it is because the idea of taking a set of practices developed by very high-functioning people, and cramming them into a Methodology (notice the capital M!), virtually guarantees mediocrity. That, and some of the Agile practices really are just plain silly in many contexts.


Joel Spolsky, the early Microsoft Excel product manager and software entrepreneur, has also entered the fray a few times.  Most recently, a blog post by Steve Yegge (a developer at Google) set the technology community on fire:



Up until maybe a year ago, I had a pretty one-dimensional view of so-called “Agile” programming, namely that it’s an idiotic fad-diet of a marketing scam making the rounds as yet another technological virus implanting itself in naive programmers who’ve never read “No Silver Bullet”, the kinds of programmers who buy extended warranties and self-help books and believe their bosses genuinely care about them as people, the kinds of programmers who attend conferences to make friends and who don’t know how to avoid eye contact with leaflet-waving fanatics in airports and who believe writing shit on index cards will suddenly make software development easier.
You know. Chumps.


The general tenor of these skeptical treatments is that Extreme Programming (and its umbrella methodology, Agile) is at best a put-on, and at worst a bit of a cult.  What is lacking in these analyses, however, is an explication of why programming methodologies like Extreme Programming are so appealing.  Software development is a complex endeavor, and its practitioners are always left with a sense that they are falling behind.  Each tool that comes along to make development easier ends up making it more difficult and error-prone (though also more powerful).  The general trend of software development has consistently been not less complexity, but greater complexity.  This leads to an endemic sense of insecurity among developers, as well as a sense that standing still is tantamount to falling behind.  Developers are, of course, generally well compensated for this burden (unlike, for instance, untenured academics who suffer from the same constraints), so there is no sense in feeling sorry for them, but it should be clear, given this, why there is so much appeal in the notion that there is a secret method which, if they follow it correctly, will grant them admission into the kingdom and relieve them of their anxieties and doubts.


One of the strangest aspects of these methodologies is the notion that methods need evangelists, yet this is what proponents of these concepts self-consciously call themselves.  There are Extreme Programming evangelists, SCRUM evangelists, Rational evangelists, and so on, who write books and tour the country giving multimedia presentations, for a price, about why you should be using their methodology and how it will transform you and your software projects.


So are software methodologies the secular equivalent of evangelical revival meetings?  It would depend, I suppose, on whether the purpose of these methodologies is to make money or to save souls.  Let us hope it is the former, and that lots of people are simply making lots of money by advocating the use of these methodologies.

Metaphors and Software Development

“But the greatest thing by far is to have a mastery of metaphor. This alone cannot be imparted by another; it is the mark of genius, for to make good metaphors implies an eye for resemblances.”  — Aristotle, The Poetics

 



It is a trivial observation that software programmers use a lot of metaphors.  All professions are obliged to invoke metaphors in one way or another, but in software programming I think the practice is so extensive that we are not even aware that creating metaphors is our primary job.  We are only ever immediately aware of the metaphorical character of our profession when we use similes to explain technical matters to our managers.  Such-and-such technology will improve our revenue stream because it is like a new and more reliable conduit for information. It can be combined with sundry-and-other technology, which acts as a locked box for this information.


When speaking with technical colleagues, in turn, we use a different set of metaphors when explaining unfamiliar technology.  We say that “Ajax is like Flash”, “Flash is like instant messaging”, “instant messaging is like talking to services in a SOA”, “SOA is like mainframe programming”, b is like c, c is like d, d is like e, and so on.  Every technology is explained in reference to another technology, which in turn is just a metaphor for something else.  But does explicating one metaphor by simply referencing another metaphor really explain anything?  Is there a final link in the chain that ultimately cascades meaning back up the referential chain?


I have occasionally seen these referential chains referred to as a prison house of language, though perhaps a house of mirrors would be more appropriate in this case.  We work with very complex concepts, and the only way to communicate them is through metaphors, even when, as in a fun house, the only points of reference we have for explaining these concepts are other concepts we find reflected in a reflection of a reflection.  This metaphorical character of our occupation, however, is typically hidden from us because we use a different set of terms to describe what we do.  We don’t typically speak about metaphors in our programming talk; instead we speak of abstractions.


Not only are abstractions explained in terms of other abstractions, but they are also typically abstractions for other abstractions.  In Joel Spolsky’s explication of “Leaky Abstractions” (which is, yes, also a metaphor) we discover that TCP is really an abstraction thrown over IP.  But IP is itself an abstraction of other technologies, which in turn may in themselves also involve further abstractions.  At what point do the abstractions actually end?  Is it when we get to the assembler language level?  Is it when we get to machine language?  Do we finally hit bottom when we reach the point of talking about electricity and circuit boards?


Again, I will posit (in a handwaving and unsubstantiated manner) that the main occupation of a software programmer is working with metaphors.  Taking this as a starting point, it is strange that in the many articles and discussion threads addressing the question, what makes a good programmer?, poetry is never brought up. Overlooking Aristotle’s professed opinion that a gift for metaphor is something that cannot be taught, we might assume that if it is indeed something that can be cultivated, a likely starting point is the reading and writing of poetry.  It would be a pleasant change if, in looking over technical resumes, we also started looking for signs that prospective employees to BigTech.com were published in poetry journals or participated in local poetry readings.


But perhaps that is asking for too much.  The only other profession in which metaphors are applied so extensively is politics.  Just as metaphors are called abstractions in programming, in politics they are called “framing”.  Behind the notion of framing in politics is the assumption that certain metaphors are simply more naturally appealing than others.  The proper metaphor will move one’s audience’s emotions either toward one’s platform or against one’s opponent’s platform.  The mastery of metaphor in politics, consequently, entails being able to associate one’s own position with the most emotively powerful metaphor into which one can fit one’s position.


One interesting aspect of the endeavor of framing is that a political metaphor is required to be fitting; that is, the metaphor one uses to explain a given position or argument must be appropriate to that argument, and a lack of fit will generally detract from the force of the metaphor.  This question of fittingness provides two ways to characterize political metaphors.  There are metaphors that seem to naturally apply to a given circumstance, and hence garner for the person who comes up with them a reputation for vision and articulateness.  Then there are metaphors so powerful that it does not matter so much that the circumstance to which they are applied is not so fitting, in which case the person who comes up with them gains a reputation as a scheming framer.


Determining which is which, of course, generally depends on where one is standing, and in either case we can say that both are masters of metaphor in Aristotle’s sense.  However, because there is so much question about the integrity of metaphors in politics, it is tempting to eschew the whole thing.  As wonderful as metaphors are, politics tends to make everything dirty in the end.


Which leaves software programming as the only place where metaphors can be studied and applied in a disinterested manner.  In programming, the main purpose of our abstractions is not to move people’s emotions, but rather to clarify concepts, spread comprehension and make things work better.  It is the natural home not only for mathematicians, but for poets.

Giulio Camillo, father of the Personal Computer


I am not the first to suggest it, but I will add my voice to those that want to claim that Giulio Camillo built the precursor of the modern personal computer in the 16th century.  Claiming that anyone invented anything is always a precarious venture, and it can be instructive to question the motives of such attempts.  For instance, trying to determine whether Newton or Leibniz invented calculus is a simple question of who most deserves credit for this remarkable achievement. 


Sometimes the question of firsts is intended to reveal something that we did not know before, such as Harold Bloom’s suggestion that Shakespeare invented the idea of personality as we know it.  In making the claim, Bloom at the same time makes us aware of the possibility that personality is not in fact something innate, but something created.  Edmund Husserl turns this notion on its head a bit with his reference in his writings to the Thales of Geometry.  Geometry, unlike the notion of personality, cannot be so easily reduced to an invention, since it is eidetic in nature.  It is always true, whether anyone understands geometry or not.  And so there is a certain irony in holding Thales to be the originator of Geometry since Geometry is a science that was not and could not have been invented as such.  Similarly, each of us, when we discover the truths of geometry for ourselves, becomes in a way a new Thales of Geometry, having made the same observations and realizations for which Thales receives credit. 


Sometimes the recognition of firstness is a way of initiating people into a secret society.  Such, it struck me, was the case when I read as a teenager from Stephen J. Gould that Darwin was not the first person to discover the evolutionary process, but that it was in fact another naturalist named Alfred Russel Wallace, and suddenly a centuries long conspiracy to steal credit from the truly deserving Wallace was revealed to me — or so it had seemed at the time. 


Origins play a strange role in etymological considerations, and when we read Aristotle’s etymological ruminations, there is certainly a sense that the first meaning of a word will somehow provide the key to understanding the concepts signified by the word.  There is a similar intuition at work in the discussions of ‘natural man’ to be found in the political writings of Hobbes, Locke and Rousseau.  For each, the character of the natural man determines the nature of the state, and consequently how we are to understand it best.  For Hobbes, famously, the life of this kind of man is rather unpleasant.  For Locke, the natural man is typified by his rationality.  For Rousseau, by his freedom.  In each case, the character of the ‘natural man’ serves as a sort of gravitational center for understanding man and his works at any time. I have often wondered whether the discussions of the state of the natural man were intended as a scientific deduction or merely as a metaphor for each of these great political philosophers.  I lean toward the latter opinion, in which case another way to understand firsts is not as an attempt to achieve historical accuracy, but rather an attempt to find the proper metaphor for something modern.


So who invented the computer?  Was it Charles Babbage with his Difference Engine in the 19th century, or Alan Turing in the 20th with his template for the Universal Machine?  Or was it Ada Lovelace, as some have suggested, the daughter of Lord Byron and collaborator with Charles Babbage who possibly did all the work while Babbage receives all the credit?


My question is a simpler one: who invented the personal computer, Steve Jobs or Giulio Camillo?  I award the laurel to Camillo, who was known in his own time as the Divine Camillo because of the remarkable nature of his invention.  And in doing so, of course, I am merely attempting to define what the personal computer really is — the gravitational center that is the role of the personal computer in our lives.


Giulio Camillo spent long years working on his Memory Theater, a miniaturized Vitruvian theater still big enough to walk into, basically a box, that would provide the person who stood before it with the gift most prized by Renaissance thinkers: the eloquence of Cicero.  The theater itself was arranged with images and figures from Greek and Roman mythology.  Throughout it, Christian references were intermixed with Hermetic and Kabbalistic symbols.  In small boxes beneath various statues inside the theater, fragments and adaptations of Cicero’s writings could be pulled out and examined.  Through the proper physical arrangement of the fantastic, the mythological, the philosophical and the occult, Camillo sought to provide a way for anyone who stepped before his theater to be able to discourse on any subject no less fluently and eloquently than Cicero himself.


Eloquence in the 16th century was understood as not only the ability of the lawyer or statesman to speak persuasively, but also the ability to evoke beautiful and accurate metaphors, the knack for delighting an audience, the ability to instruct, and mastery of diverse subjects that could be brought forth from the memory in order to enlighten one’s listeners.  Already in Camillo’s time, mass production of books was coming into its own and creating a transformation of culture.  Along with it, the ancient arts of memory and of eloquence (by way of analogy we might call them literacy, today), whose paragon was recognized to be Cicero, were in decline.  Thus Camillo came along at the end of this long tradition of eloquence to invent a box that would capture all that was best of the old world that was quickly disappearing.  He created, in effect, an artificial memory that anyone could use, simply by stepping before it, to invigorate himself with the accumulated eloquence of all previous generations.


And this is how I think of the personal computer.  It is a box, occult in nature, that provides us with an artificial memory to make us better than we are, better than nature made us.  Nature distributes her gifts randomly, while the personal computer corrects that inherent injustice.  The only limitation to the personal computer, as I see it, is that it can only be a repository for all the knowledge humanity has already acquired.  It cannot generate anything new, as such.  It is a library and not a university.


Which is where the internet comes in.  The personal computer, once it is attached to the world wide web, becomes invigorated by the chaos and serendipity that is the internet.  Not only do we have the dangerous chaos of viruses and Trojan horses, but also the positive chaos of online discussions, the putting on of masks and mixing with the online personas of others, the random following of links across the internet that ultimately leads us to make new connections between disparate concepts in ways that seem natural and inevitable.


This leads me to the final connection I want to make in my overburdened analogy.  Just as the personal computer is not merely a box, but also a doorway to the internet, so Giulio Camillo’s Theater of Memory was tied to a Neoplatonic worldview in which the idols of the theater, if arranged properly and fittingly, could draw down the influences of the intelligences — divine beings known variously as the planets (Mars, Venus, etc.), the Sephiroth, or the Archangels.  By standing before Camillo’s box, the spectator was immediately plugged into these forces, the consequences of which are difficult to assess.  There is danger, but also much wonder, to be found on the internet.