Do Computers Read Electric Books?

In the comments section of a blog I like to frequent, I have been pointed to an article in the International Herald Tribune about Pierre Bayard’s new book, How to Talk About Books You Haven’t Read.

Bayard recommends strategies such as abstractly praising the book, offering silent empathy regarding someone else’s love for the book, discussing other books related to the book in question, and finally simply talking about oneself.  Additionally, one can usually glean enough information from reviews, book jackets and gossip to sustain the discussion for quite a while.

Students, he noted from experience, are skilled at opining about books they have not read, building on elements he may have provided them in a lecture. This approach can also work in the more exposed arena of social gatherings: the book’s cover, reviews and other public reaction to it, gossip about the author and even the ongoing conversation can all provide food for sounding informed.

I’ve recently been looking through some AI experiments built on language scripts, based on the 1966 software program Eliza, which used a small script of canned questions to maintain a conversation with computer users.  You can play a web version of Eliza here, if you wish.  It should be pointed out that the principles behind Eliza are the same as those that underpin the famous Turing Test.  Turing proposed answering the question “Can machines think?” by staging an ongoing experiment to see whether machines can imitate thinking.  The proposal was made in his 1950 paper “Computing Machinery and Intelligence”:

The new form of the problem can be described in terms of a game which we call the “imitation game.” It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either “X is A and Y is B” or “X is B and Y is A.” The interrogator is allowed to put questions to A and B thus:

C: Will X please tell me the length of his or her hair?

Now suppose X is actually A, then A must answer. It is A’s object in the game to try and cause C to make the wrong identification. His answer might therefore be:

“My hair is shingled, and the longest strands are about nine inches long.”

In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as “I am the woman, don’t listen to him!” to her answers, but it will avail nothing as the man can make similar remarks.

We now ask the question, “What will happen when a machine takes the part of A in this game?” Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?”

The standard form of the current Turing experiments is something called a chatterbot application.  Chatterbots abstract the mechanism for generating dialog from the dialog scripts themselves by using a set of rules written in a common format.  The most popular format happens to be an XML standard called AIML (Artificial Intelligence Markup Language).
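To give a flavor of what such a rule looks like, here is a hypothetical AIML category of my own invention (the pattern and the canned response are not taken from any real bot) that a book-chat script might use:

    <category>
      <pattern>HAVE YOU READ *</pattern>
      <template>I have only skimmed it, but the reviews make it sound fascinating. What did you make of it?</template>
    </category>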

What I’m interested in, at the moment, is not so much whether I can write a script that will fool people into thinking they are talking with a real person, but rather whether I can write a script that makes small talk by discussing the latest book.  If I can do this, it should validate Pierre Bayard’s proposal, if not Alan Turing’s.

Speech Recognition And Synthesis Managed APIs in Windows Vista: Part III

Voice command technology, as exemplified in Part II, is probably the most useful and easiest to implement aspect of the Speech Recognition functionality provided by Vista.  Within a few days of work, any existing application can be enabled to use it, and the potential for streamlining workflow and making it more efficient is truly breathtaking.  The cool factor, of course, is also very high.

Having grown up watching Star Trek reruns, however, I can’t help but feel that the dictation functionality is much more interesting than the voice command functionality.  Computers are meant to be talked to and told what to do, as in that venerable TV series, not cajoled into doing tricks for us based on finger motions over a typewriter.  My long-term goal is to be able to code by talking into my IDE in order to build UML diagrams and then, at a word, turn that into an application.  What a brave new world that will be.  Toward that end, the SR managed API provides the DictationGrammar class.

Whereas the Grammar class works as a gatekeeper, restricting the phrases that get through to the speech recognized handler down to a select set of rules, the DictationGrammar class, by default, kicks out the jams and lets all phrases through to the recognized handler.

In order to make Speechpad a dictation application, we will add the default DictationGrammar object to the list of grammars used by our speech recognition engine.  We will also add a toggle menu item to turn dictation on and off.  Finally, we will alter the SpeechToAction() method in order to insert any phrases that are not voice commands into the current Speechpad document as text.  Create a local instance of DictationGrammar for our Main form, and then instantiate it in the Main constructor.  Your code should look like this:

	#region Local Members
		
        private SpeechSynthesizer synthesizer = null;
        private string selectedVoice = string.Empty;
        private SpeechRecognitionEngine recognizer = null;
        private DictationGrammar dictationGrammar = null;
        
        #endregion
        
        public Main()
        {
            InitializeComponent();
            synthesizer = new SpeechSynthesizer();
            LoadSelectVoiceMenu();
            recognizer = new SpeechRecognitionEngine();
            InitializeSpeechRecognitionEngine();
            dictationGrammar = new DictationGrammar();
        }
        

Create a new menu item under the Speech menu and label it “Take Dictation“.  Name it takeDictationMenuItem for convenience. Add a handler for the click event of the new menu item, and stub out TurnDictationOn() and TurnDictationOff() methods.  TurnDictationOn() works by loading the local dictationGrammar object into the speech recognition engine. It also needs to turn speech recognition on if it is currently off, since dictation will not work if the speech recognition engine is disabled. TurnDictationOff() simply removes the local dictationGrammar object from the speech recognition engine’s list of grammars.

		
        private void takeDictationMenuItem_Click(object sender, EventArgs e)
        {
            if (this.takeDictationMenuItem.Checked)
            {
                TurnDictationOff();
            }
            else
            {
                TurnDictationOn();
            }
        }

        private void TurnDictationOn()
        {
            if (!speechRecognitionMenuItem.Checked)
            {
                TurnSpeechRecognitionOn();
            }
            recognizer.LoadGrammar(dictationGrammar);
            takeDictationMenuItem.Checked = true;
        }

        private void TurnDictationOff()
        {
            if (dictationGrammar != null)
            {
                recognizer.UnloadGrammar(dictationGrammar);
            }
            takeDictationMenuItem.Checked = false;
        }
        

For an extra touch of elegance, alter the TurnSpeechRecognitionOff() method by adding a line of code to turn dictation off when speech recognition is disabled:

        TurnDictationOff();
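
Assuming the TurnSpeechRecognitionOff() method as written in Part II, the altered version would look something like this:

        private void TurnSpeechRecognitionOff()
        {
            if (recognizer != null)
            {
                // make sure dictation is also switched off whenever recognition stops
                TurnDictationOff();
                recognizer.RecognizeAsyncStop();
                this.speechRecognitionMenuItem.Checked = false;
            }
        }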

Finally, we need to update the SpeechToAction() method so it will insert any text that is not a voice command into the current Speechpad document.  Use the default statement of the switch control block to call the InsertText() method of the current document.

        
        private void SpeechToAction(string text)
        {
            TextDocument document = ActiveMdiChild as TextDocument;
            if (document != null)
            {
                DetermineText(text);
                switch (text)
                {
                    case "cut":
                        document.Cut();
                        break;
                    case "copy":
                        document.Copy();
                        break;
                    case "paste":
                        document.Paste();
                        break;
                    case "delete":
                        document.Delete();
                        break;
                    default:
                        document.InsertText(text);
                        break;
                }
            }
        }

        

With that, we complete the speech recognition functionality for Speechpad. Now try it out. Open a new Speechpad document and type “Hello World.”  Turn on speech recognition.  Select “Hello” and say “delete.”  Turn on dictation.  Say “brave new.”

This tutorial has demonstrated the essential code required to use speech synthesis, voice commands, and dictation in your .NET 2.0 Vista applications.  It can serve as the basis for building speech recognition tools that take advantage of default as well as custom grammar rules to build advanced application interfaces.  Besides the strange compatibility issues between Vista and Visual Studio, at the moment the greatest hurdle to using the Vista managed speech recognition API is the remarkable dearth of documentation and samples.  This tutorial is intended to help alleviate that problem by providing a hands-on introduction to this fascinating technology.

Speech Recognition And Synthesis Managed APIs In Windows Vista: Part II


Playing with the speech synthesizer is a lot of fun for about five minutes (ten if you have both Microsoft Anna and Microsoft Lila to work with)  — but after typing “Hello World” into your Speechpad document for the umpteenth time, you may want to do something a bit more challenging.  If you do, then it is time to plug in your expensive microphone, since speech recognition really works best with a good expensive microphone.  If you don’t have one, however, then go ahead and plug in a cheap microphone.  My cheap microphone seems to work fine.  If you don’t have a cheap microphone, either, I have heard that you can take a speaker and plug it into the mic jack of your computer, and if that doesn’t cause an explosion, you can try talking into it.


While speech synthesis may be useful for certain specialized applications, voice commands, by contrast, are a feature that can be used to enrich any current WinForms application. With the SR Managed API, it is also easy to implement once you understand certain concepts such as the Grammar class and the SpeechRecognitionEngine.


We will begin by declaring a local instance of the speech engine and initializing it. 

	#region Local Members

        private SpeechSynthesizer synthesizer = null;
        private string selectedVoice = string.Empty;
        private SpeechRecognitionEngine recognizer = null;

        #endregion

        public Main()
        {
            InitializeComponent();
            synthesizer = new SpeechSynthesizer();
            LoadSelectVoiceMenu();
            recognizer = new SpeechRecognitionEngine();
            InitializeSpeechRecognitionEngine();
        }

        private void InitializeSpeechRecognitionEngine()
        {
            recognizer.SetInputToDefaultAudioDevice();
            Grammar customGrammar = CreateCustomGrammar();
            recognizer.UnloadAllGrammars();
            recognizer.LoadGrammar(customGrammar);
            recognizer.SpeechRecognized +=
                new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);
            recognizer.SpeechHypothesized +=
                new EventHandler<SpeechHypothesizedEventArgs>(recognizer_SpeechHypothesized);
        }

        private Grammar CreateCustomGrammar()
        {
            GrammarBuilder grammarBuilder = new GrammarBuilder();
            grammarBuilder.Append(new Choices("cut", "copy", "paste", "delete"));
            return new Grammar(grammarBuilder);
        }


The speech recognition engine is the main workhorse of the speech recognition functionality.  At one end, we configure the input device that the engine will listen on.  In this case, we use the default device (whatever you have plugged in), though we can also select other inputs, such as specific wave files.  At the other end, we capture two events thrown by our speech recognition engine.  As the engine attempts to interpret the incoming sound stream, it will throw various “hypotheses” about what it thinks is the correct rendering of the speech input.  When it finally determines the correct value, and matches it to a value in the associated grammar objects, it throws a speech recognized event, rather than a speech hypothesized event.  If the determined word or phrase does not have a match in any associated grammar, a speech recognition rejected event (which we do not use in the present project) will be thrown instead.
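
Although Speechpad does not handle the speech recognition rejected event, hooking it up follows the same pattern as the other two handlers.  The following sketch is not part of the project; it assumes you subscribe with recognizer.SpeechRecognitionRejected inside InitializeSpeechRecognitionEngine():

        // Sketch only: subscribe in InitializeSpeechRecognitionEngine() with
        //     recognizer.SpeechRecognitionRejected +=
        //         new EventHandler<SpeechRecognitionRejectedEventArgs>(recognizer_SpeechRecognitionRejected);
        private void recognizer_SpeechRecognitionRejected(object sender, SpeechRecognitionRejectedEventArgs e)
        {
            // the rejected result, when present, still carries the engine's best guess at what was said
            if (e.Result != null)
            {
                System.Diagnostics.Debug.WriteLine("Rejected: " + e.Result.Text);
            }
        }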


In between, we set up rules to determine which words and phrases will throw a speech recognized event by configuring a Grammar object and associating it with our instance of the speech recognition engine.  In the sample code above, we configure a very simple rule which states that a speech recognized event will be thrown if any of the following words is uttered: “cut”, “copy”, “paste”, or “delete”.  Note that we use a GrammarBuilder class to construct our custom grammar, and that the syntax of the GrammarBuilder class closely resembles the syntax of the StringBuilder class.
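
To see how that syntax composes, here is a slightly richer grammar.  It is purely hypothetical (Speechpad does not use it) and would recognize two-word commands such as “select all” or “select none”:

        private Grammar CreateSelectionGrammar()
        {
            GrammarBuilder grammarBuilder = new GrammarBuilder();
            grammarBuilder.Append("select");                    // a fixed word, appended much as with a StringBuilder
            grammarBuilder.Append(new Choices("all", "none"));  // followed by a set of alternatives
            return new Grammar(grammarBuilder);
        }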


This is the basic code for enabling voice commands for a WinForms application.  We will now enhance the Speechpad application by adding a menu item to turn speech recognition on and off,  a status bar so we can watch as the speech recognition engine interprets our words, and a function that will determine what action to take if one of our key words is captured by the engine.


Add a new menu item labeled “Speech Recognition” under the “Speech” menu item, below “Read Selected Text” and “Read Document”.  For convenience, name it speechRecognitionMenuItem.  Add a handler to the new menu item, and use the following code to turn speech recognition on and off, as well as toggle the speech recognition menu item.  Besides the RecognizeAsync() method that we use here, it is also possible to start the engine synchronously or, by passing it a RecognizeMode.Single parameter, cause the engine to stop after the first phrase it recognizes. The method we use to stop the engine, RecognizeAsyncStop(), is basically a polite way to stop the engine, since it will wait for the engine to finish any phrases it is currently processing before quitting. An impolite method, RecognizeAsyncCancel(), is also available — to be used in emergency situations, perhaps.

        private void speechRecognitionMenuItem_Click(object sender, EventArgs e)
        {
            if (this.speechRecognitionMenuItem.Checked)
            {
                TurnSpeechRecognitionOff();
            }
            else
            {
                TurnSpeechRecognitionOn();
            }
        }

        private void TurnSpeechRecognitionOn()
        {
            recognizer.RecognizeAsync(RecognizeMode.Multiple);
            this.speechRecognitionMenuItem.Checked = true;
        }

        private void TurnSpeechRecognitionOff()
        {
            if (recognizer != null)
            {
                recognizer.RecognizeAsyncStop();
                this.speechRecognitionMenuItem.Checked = false;
            }
        }


We are actually going to use the RecognizeAsyncCancel() method now, since there is an emergency situation. The speech synthesizer, it turns out, cannot operate if the speech recognizer is still running. To get around this, we will need to disable the speech recognizer at the last possible moment, and then reactivate it once the synthesizer has completed its tasks. We will modify the ReadAloud() method to handle this.


        private void ReadAloud(string speakText)
        {
            try
            {
                SetVoice();
                recognizer.RecognizeAsyncCancel();
                synthesizer.Speak(speakText);
                recognizer.RecognizeAsync(RecognizeMode.Multiple);
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
        }

The user now has the ability to turn speech recognition on and off. We can make the application more interesting by capturing the speech hypothesized event and displaying the results to a status bar on the Main form.  Add a StatusStrip control to the Main form, and a ToolStripStatusLabel to the StatusStrip with its Spring property set to true.  For convenience, call this label toolStripStatusLabel1.  Use the following code to handle the speech hypothesized event and display the results:

        private void recognizer_SpeechHypothesized(object sender, SpeechHypothesizedEventArgs e)
        {
            GuessText(e.Result.Text);
        }

        private void GuessText(string guess)
        {
            toolStripStatusLabel1.Text = guess;
            this.toolStripStatusLabel1.ForeColor = Color.DarkSalmon;
        }


Now that we can turn speech recognition on and off, as well as capture misinterpretations of the input stream, it is time to capture the speech recognized event and do something with it.  The SpeechToAction() method will evaluate the recognized text and then call the appropriate method in the child form (these methods are accessible because we scoped them as internal in the Textpad code from Part I).  In addition, we display the recognized text in the status bar, just as we did with hypothesized text, but in a different color in order to distinguish the two events.


        private void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            string text = e.Result.Text;
            SpeechToAction(text);
        }

        private void SpeechToAction(string text)
        {
            TextDocument document = ActiveMdiChild as TextDocument;
            if (document != null)
            {
                DetermineText(text);

                switch (text)
                {
                    case "cut":
                        document.Cut();
                        break;
                    case "copy":
                        document.Copy();
                        break;
                    case "paste":
                        document.Paste();
                        break;
                    case "delete":
                        document.Delete();
                        break;
                }
            }
        }

        private void DetermineText(string text)
        {
            this.toolStripStatusLabel1.Text = text;
            this.toolStripStatusLabel1.ForeColor = Color.SteelBlue;
        }


Now let’s take Speechpad for a spin.  Fire up the application and, if it compiles, create a new document.  Type “Hello world.”  So far, so good.  Turn on speech recognition by selecting the Speech Recognition item under the Speech menu.  Highlight “Hello” and say the following phrase into your expensive microphone, inexpensive microphone, or speaker: “delete.”  Now type “Save the cheerleader, save the”.  Not bad at all.

Speech Recognition And Synthesis Managed APIs In Windows Vista: Part I




VistaSpeechAPIDemo.zip – 45.7 Kb


VistaSpeechAPISource.zip – 405 Kb


Introduction


One of the coolest features to be introduced with Windows Vista is the new built-in speech recognition facility.  To be fair, it has been there in previous versions of Windows, but not in the useful form in which it is now available.  Best of all, Microsoft provides a managed API with which developers can start digging into this rich technology.  For a fuller explanation of the underlying technology, I highly recommend the Microsoft whitepaper. This tutorial will walk the user through building a common text pad application, which we will then trick out with a speech synthesizer and a speech recognizer using the .NET managed API wrapper for SAPI 5.3. By the end of this tutorial, you will have a working application that reads your text back to you, obeys your voice commands, and takes dictation. But first, a word of caution: this code will only work for Visual Studio 2005 installed on Windows Vista. It does not work on XP, even with .NET 3.0 installed.

Background


Because Windows Vista has only recently been released, there are, as of this writing, several extant problems relating to developing on the platform.  The biggest hurdle is that there are known compatibility problems between Visual Studio and Vista.  Visual Studio .NET 2003 is not supported on Vista, and there are currently no plans to resolve any compatibility issues there.  Visual Studio 2005 is supported, but in order to get it working well, you will need to make sure you also install Service Pack 1 for Visual Studio 2005.  After this, you will also need to install a beta update for Vista called, somewhat confusingly, “Visual Studio 2005 Service Pack 1 Update for Windows Vista Beta”.  Even after doing all this, you will find that all the new cool assemblies that come with Vista, such as the System.Speech assembly, still do not show up in your Add References dialog in Visual Studio.  If you want to have them show up, you will finally need to add a registry entry indicating where the Vista DLLs are to be found.  Open the registry editor by running regedit.exe from the Vista search bar.  Add the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\AssemblyFolders\v3.0 Assemblies and set its default value to C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0. (You can also install it under HKEY_CURRENT_USER, if you prefer.)  Now, we are ready to start programming in Windows Vista.
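
If you would rather not click through regedit by hand, the same key can be created by importing a .reg file along the following lines (a sketch only; note that the .reg format requires the backslashes in the value to be escaped):

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\AssemblyFolders\v3.0 Assemblies]
    @="C:\\Program Files\\Reference Assemblies\\Microsoft\\Framework\\v3.0"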

Before working with the speech recognition and synthesis functionality, we need to prepare the ground with a decent text pad application to which we will add our cool new toys. Since this does not involve Vista, you do not really have to work through this step in order to learn the speech recognition API.  If you already have a good base application, you can skip ahead to the next section, Speechpad, and use the code there to trick out your app.  If you do not have a suitable application at hand, but also have no interest in walking through the construction of a text pad application, you can just unzip the source code linked above and pull out the included Textpad project.  The source code contains two Visual Studio 2005 projects: the Textpad project, which is the base application for the SR functionality, and Speechpad, which includes the final code.


All the same, for those with the time to do so, I feel there is much to gain from building an application from the ground up. The best way to learn a new technology is to use it oneself and to get one’s hands dirty, as it were, since knowledge is always more than simply knowing that something is possible; it also involves knowing how to put that knowledge to work. We know by doing, or as Giambattista Vico put it, verum et factum convertuntur (the true and the made are convertible).


Textpad


Textpad is an MDI application containing two forms: a container, called Main.cs, and a child form, called TextDocument.cs.  TextDocument.cs, in turn, contains a RichTextBox control.


Create a new project called Textpad.  Add the “Main” and “TextDocument” forms to your project.  Set the IsMdiContainer property of Main to true.  Add a MainMenu control and an OpenFileDialog control (name it “openFileDialog1”) to Main.  Set the Filter property of the OpenFileDialog to “Text Files | *.txt”, since we will only be working with text files in this project.  Add a RichTextBox control to “TextDocument”, name it “richTextBox1”; set its Dock property to “Fill” and its Modifiers property to “Internal”.


Add a MenuItem control to MainMenu called “File” by clicking on the MainMenu control in Designer mode and typing “File” where the control prompts you to “type here”.  Set the File item’s MergeType property to “MergeItems”. Add a second MenuItem called “Window“.  Under the “File” menu item, add three more Items: “New“, “Open“, and “Exit“.  Set the MergeOrder property of the “Exit” control to 2.  When we start building the “TextDocument” form, these merge properties will allow us to insert menu items from child forms between “Open” and “Exit”.


Set the MDIList property of the Window menu item to true.  This automatically allows it to keep track of your various child documents during runtime.


Next, we need some operations that will be triggered off by our menu commands.  The NewMDIChild() function will create a new instance of the Document object that is also a child of the Main container.  OpenFile() uses the OpenFileDialog control to retrieve the path to a text file selected by the user.  OpenFile() uses a StreamReader to extract the text of the file (make sure you add a using declaration for System.IO at the top of your form). It then calls an overloaded version of NewMDIChild() that takes the file name and displays it as the current document name, and then injects the text from the source file into the RichTextBox control in the current Document object.  The Exit() method closes our Main form.  Add handlers for the File menu items (by double clicking on them) and then have each handler call the appropriate operation: NewMDIChild(), OpenFile(), or Exit().  That takes care of your Main form.

        #region Main File Operations

        private void NewMDIChild()
        {
            NewMDIChild("Untitled");
        }

        private void NewMDIChild(string filename)
        {
            TextDocument newMDIChild = new TextDocument();
            newMDIChild.MdiParent = this;
            newMDIChild.Text = filename;
            newMDIChild.WindowState = FormWindowState.Maximized;
            newMDIChild.Show();
        }

        private void OpenFile()
        {
            try
            {
                openFileDialog1.FileName = "";
                DialogResult dr = openFileDialog1.ShowDialog();
                if (dr == DialogResult.Cancel)
                {
                    return;
                }
                string fileName = openFileDialog1.FileName;
                using (StreamReader sr = new StreamReader(fileName))
                {
                    string text = sr.ReadToEnd();
                    NewMDIChild(fileName, text);
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
        }

        private void NewMDIChild(string filename, string text)
        {
            NewMDIChild(filename);
            LoadTextToActiveDocument(text);
        }

        private void LoadTextToActiveDocument(string text)
        {
            TextDocument doc = (TextDocument)ActiveMdiChild;
            doc.richTextBox1.Text = text;
        }

        private void Exit()
        {
            Dispose();
        }

        #endregion
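
The handlers themselves are one-liners.  Assuming designer-generated names along the lines of newMenuItem, openMenuItem, and exitMenuItem (your names may differ depending on what you typed into the menu designer), they would look something like this:

        private void newMenuItem_Click(object sender, EventArgs e)
        {
            NewMDIChild();
        }

        private void openMenuItem_Click(object sender, EventArgs e)
        {
            OpenFile();
        }

        private void exitMenuItem_Click(object sender, EventArgs e)
        {
            Exit();
        }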


To the TextDocument form, add a SaveFileDialog control, a MainMenu control, and a ContextMenuStrip control (set the ContextMenuStrip property of richTextBox1 to this new ContextMenuStrip).  Set the SaveFileDialog’s DefaultExt property to “txt” and its Filter property to “Text File | *.txt”.  Add “Cut”, “Copy”, “Paste”, and “Delete” items to your ContextMenuStrip.  Add a “File” menu item to your MainMenu, and then “Save”, “Save As”, and “Close” menu items to the “File” menu item.  Set the MergeType for “File” to “MergeItems”. Set the MergeType properties of “Save”, “Save As” and “Close” to “Add”, and their MergeOrder properties to 1.  This creates a nice effect in which the File menu of the child MDI form merges with the parent File menu.


The following methods will be called by the handlers for each of these menu items: Save(), SaveAs(), CloseDocument(), Cut(), Copy(), Paste(), Delete(), and InsertText(). Please note that the last five methods are scoped as internal, so they can be called by the parent form. This will be particularly important as we move on to the Speechpad project.


        #region Document File Operations

        private void SaveAs(string fileName)
        {
            try
            {
                saveFileDialog1.FileName = fileName;
                DialogResult dr = saveFileDialog1.ShowDialog();
                if (dr == DialogResult.Cancel)
                {
                    return;
                }
                string saveFileName = saveFileDialog1.FileName;
                Save(saveFileName);
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
        }

        private void SaveAs()
        {
            string fileName = this.Text;
            SaveAs(fileName);
        }

        internal void Save()
        {
            string fileName = this.Text;
            Save(fileName);
        }

        private void Save(string fileName)
        {
            string text = this.richTextBox1.Text;
            Save(fileName, text);
        }

        private void Save(string fileName, string text)
        {
            try
            {
                using (StreamWriter sw = new StreamWriter(fileName, false))
                {
                    sw.Write(text);
                    sw.Flush();
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
        }

        private void CloseDocument()
        {
            Dispose();
        }

        internal void Paste()
        {
            try
            {
                IDataObject data = Clipboard.GetDataObject();
                if (data.GetDataPresent(DataFormats.Text))
                {
                    InsertText(data.GetData(DataFormats.Text).ToString());
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
        }

        internal void InsertText(string text)
        {
            RichTextBox theBox = richTextBox1;
            theBox.SelectedText = text;
        }

        internal void Copy()
        {
            try
            {
                RichTextBox theBox = richTextBox1;
                Clipboard.Clear();
                Clipboard.SetDataObject(theBox.SelectedText);
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
        }

        internal void Cut()
        {
            Copy();
            Delete();
        }

        internal void Delete()
        {
            richTextBox1.SelectedText = string.Empty;
        }

        #endregion
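
The handlers for these menu items simply delegate to the methods above, just as the Main form’s handlers did.  A sketch, again assuming designer-generated names (saveMenuItem, cutToolStripMenuItem, and so on) that may differ from yours:

        private void saveMenuItem_Click(object sender, EventArgs e)
        {
            Save();
        }

        private void cutToolStripMenuItem_Click(object sender, EventArgs e)
        {
            Cut();
        }

        // the Save As, Close, Copy, Paste, and Delete handlers follow the same pattern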


Once you hook up your menu item event handlers to the methods listed above, you should have a rather nice text pad application. With our base prepared, we are now in a position to start building some SR features.


Speechpad


Add a reference to the System.Speech assembly to your project.  You should be able to find it in C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\.  Add using declarations for System.Speech, System.Speech.Recognition, and System.Speech.Synthesis to your Main form. The top of your Main.cs file should now look something like this:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using System.IO;
using System.Speech;
using System.Speech.Synthesis;
using System.Speech.Recognition;

In design view, add two new menu items to the main menu in your Main form labeled “Select Voice” and “Speech”.  For easy reference, name the first item selectVoiceMenuItem.  We will use the “Select Voice” menu to programmatically list the synthetic voices that are available for reading Speechpad documents.  To programmatically list out all the synthetic voices, use the three methods found in the code sample below.  LoadSelectVoiceMenu() loops through all the voices installed on the operating system and creates a new menu item for each.  voiceMenuItem_Click() is simply a handler that passes the click event on to the SelectVoice() method. SelectVoice() handles the toggling of the voices we have added to the “Select Voice” menu.  Whenever a voice is selected, all others are deselected.  If all voices are deselected, then we default to the first one.


Now that we have gotten this far, I should mention that all this trouble is a little silly if there is only one synthetic voice available, as there is when you first install Vista. Her name is Microsoft Anna, by the way. If you have Vista Ultimate or Vista Enterprise, you can use the Vista Updater to download an additional voice, named Microsoft Lila, which is contained in the Simple Chinese MUI.  She has a bit of an accent, but I am coming to find it rather charming.  If you don’t have one of the high-end flavors of Vista, however, you might consider leaving the voice selection code out of your project.


        private void LoadSelectVoiceMenu()
        {
            foreach (InstalledVoice voice in synthesizer.GetInstalledVoices())
            {
                MenuItem voiceMenuItem = new MenuItem(voice.VoiceInfo.Name);
                voiceMenuItem.RadioCheck = true;
                voiceMenuItem.Click += new EventHandler(voiceMenuItem_Click);
                this.selectVoiceMenuItem.MenuItems.Add(voiceMenuItem);
            }
            if (this.selectVoiceMenuItem.MenuItems.Count > 0)
            {
                this.selectVoiceMenuItem.MenuItems[0].Checked = true;
                selectedVoice = this.selectVoiceMenuItem.MenuItems[0].Text;
            }
        }

        private void voiceMenuItem_Click(object sender, EventArgs e)
        {
            SelectVoice(sender);
        }

        private void SelectVoice(object sender)
        {
            MenuItem mi = sender as MenuItem;
            if (mi != null)
            {
                //toggle checked value
                mi.Checked = !mi.Checked;

                if (mi.Checked)
                {
                    //set selectedVoice variable
                    selectedVoice = mi.Text;
                    //clear all other checked items
                    foreach (MenuItem voiceMi in this.selectVoiceMenuItem.MenuItems)
                    {
                        if (!voiceMi.Equals(mi))
                        {
                            voiceMi.Checked = false;
                        }
                    }
                }
                else
                {
                    //if deselecting, make first value checked,
                    //so there is always a default value
                    this.selectVoiceMenuItem.MenuItems[0].Checked = true;
                }
            }
        }


We have not declared the selectedVoice class level variable yet (your Intellisense may have complained about it), so the next step is to do just that.  While we are at it, we will also declare a private instance of the System.Speech.Synthesis.SpeechSynthesizer class and initialize it, along with a call to the LoadSelectVoiceMenu() method from above, in your constructor:


        #region Local Members

        private SpeechSynthesizer synthesizer = null;
        private string selectedVoice = string.Empty;

        #endregion

        public Main()
        {
            InitializeComponent();
            synthesizer = new SpeechSynthesizer();
            LoadSelectVoiceMenu();
        }


To allow the user to utilize the speech synthesizer, we will add two new menu items under the “Speech” menu labeled “Read Selected Text” and “Read Document”.  In truth, there isn’t really much to using the Vista speech synthesizer.  All we do is pass a text string to our local SpeechSynthesizer object and let the operating system do the rest.  Hook up event handlers for the click events of these two menu items to the following methods and you will be up and running with a speech-enabled application:


        #region Speech Synthesizer Commands

        private void ReadSelectedText()
        {
            TextDocument doc = ActiveMdiChild as TextDocument;
            if (doc != null)
            {
                RichTextBox textBox = doc.richTextBox1;
                if (textBox != null)
                {
                    string speakText = textBox.SelectedText;
                    ReadAloud(speakText);
                }
            }
        }

        private void ReadDocument()
        {
            TextDocument doc = ActiveMdiChild as TextDocument;
            if (doc != null)
            {
                RichTextBox textBox = doc.richTextBox1;
                if (textBox != null)
                {
                    string speakText = textBox.Text;
                    ReadAloud(speakText);
                }
            }
        }

        private void ReadAloud(string speakText)
        {
            try
            {
                SetVoice();
                synthesizer.Speak(speakText);
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
        }

        private void SetVoice()
        {
            try
            {
                synthesizer.SelectVoice(selectedVoice);
            }
            catch (Exception)
            {
                MessageBox.Show("\"" + selectedVoice + "\" is not available.");
            }
        }

        #endregion

Performative Sleep

I was busy writing away in a notebook last night when I suddenly realized that I was sleeping.  In the dream, I had been working on a commentary on Montaigne’s essay Of Cannibals.  The problem with discussing dreams, of course, is that it often leads one into an embarrassing consideration of one’s own inner life which tends to be self-stroking and not particularly revealing — or rather, it reveals some self-absorbed aspects of one’s own personality even as one is sure that it is revealing some great inner truth, like John Marcher’s beast in the jungle.  Or even worse, it is like taking one of those IQ tests that occasionally pop up in one’s browser and determining from it that one is quite intelligent.

When I woke up in the morning, there was no commentary written out at my bedside, or even prepared in my head.  I don’t even remember what I was trying to say about Of Cannibals.  This contrasts starkly with the experience of sleep coding, which occasionally overcomes software developers who have been working too hard on a particular problem, and in some circles is even considered a mark of particularly virtuous coding.

I am convinced, as many programmers are, that sleep coding really works — that is, that in sleep, programmers actually solve problems from their waking hours.  I have often spent hours on a particularly insidious problem only to find, after a good night’s sleep, that I am able to quickly fix the problem the next morning.  And it seems to be something different from simply taking a break.  Walking away from a problem for a few hours, while it can be helpful in reducing stress, has never had the revelatory effect that sleeping has had.  I’ve even come to the point that when I consider a problem to be particularly difficult, instead of trying to solve the problem right away, I plan on learning as much as I can about it in order to hand it over to my dream-coder to solve in the night.  I think of this as an occult offshoring.

It is quite possible, of course, that the experience of dreaming code and the phenomenon of having code solved in one’s sleep are two entirely different things.  The second can be true even if the first is essentially meaningless, a phantom caused by neurons misfiring in our sleep.  And this, on some accounts, is the solution to the mind-body problem.  The theory goes that when awake, we are merely observers of a mechanistic process, with epiphenomenal experiences that do not actually affect what is going on in the world.  We are merely passive observers, even though we think we are actually participating and making decisions — though this implies a rather inelegant duplication of entities in the world.  Why should it be that I can code in my sleep, and also observe myself coding in my sleep through my dreams, and yet these two things are not the same thing?

And also, is this something that only happens for computer programmers?  Is it the case that our bodies can perform high cognitive functions without us, but only for certain types of tasks?  I don’t recall ever waking up in college with an essay fully formed in my mind.  Then again, we are told that Coleridge woke up from a dream with Kubla Khan fully formed in his mind, and only after being interrupted by a visitor and taking a break from it did he lose it again.

Concerning Ladders

It is a commonplace that humor resists translation.  This was Pevear and Volokhonsky’s conceit when they came out with a new translation of The Brothers Karamazov in 1990, which they claimed finally brought across (successfully, I think) the deep humor of Dostoevsky’s masterpiece.  While accuracy is the goal in most translation efforts, to hold to accuracy when translating humor unavoidably leaves much untranslated.  Thus in translating Lewis Carroll into Russian, Vladimir Nabokov chose to replace English wordplay with completely different puns sensible only to the Russian speaker, all the better to capture the flavor of Carroll’s humor.


A former colleague at the infamous Turner Studios has posted the following joke to his blog, which I must admit I cannot decipher:

[Image: a World of Warcraft priest talent tree]

Yet I know it is a joke, because he adds the following gloss to the image:


Let the hilarity ensue. Someone put up a site where you can build your own World of Warcraft talent trees. The priest one made me laugh, since it’s twoo, it’s twoo.


What is one required to know in order to decipher this particular joke?  Initially, of course, one must know that this is an artifact of the online game World of Warcraft, a complex virtual world that people pay a monthly fee to access.  Next, the artifact is a “talent tree,” which describes different abilities players can gain by accruing time in the virtual world.  The various talents form a tree in the sense that one must gain lower-level talents before one may achieve the higher ones, and while the low-level talents form a broad base, there are fewer high-level talents to choose from at the top.  The talents one chooses to acquire, in turn, determine what sort of person one is in the virtual World of Warcraft.


This is the formal aspect of the talent tree.  In order to understand the hilarity of this particular talent tree, however, one must further understand the pictorial vocabulary used to represent talents in this tree, a task requiring a Rosetta stone of sorts.  The use of pictures to tell stories is old, and certainly predates any written languages, with exemplars such as the cave drawings at Lascaux.  Long after the advent of written languages, images continued to exert a central role in the telling of stories and the transmission of culture in societies where the majority of people were illiterate.  It was even the main way that Christianity promulgated its teachings to the masses, and the eventual eclipse of the central role of images in religious life by way of the Protestant Reformation can be seen as a direct result of the  emphasis placed on reading the Bible for oneself, and hence the importance of literacy.


Beyond pratfalls and scatology, I’m not sure that pictures without words are a particularly effective means of transmitting humor.  The talent tree for the priest represented above has less to do with cave paintings at Lascaux than with the Renaissance emblem book tradition, which does attempt to treat images as language, and reached its height of artistic expression with the Hypnerotomachia Poliphili.  The traditional emblem book was made up of a series of 100 or so images that were explicated with poems and allegories.  What sets them apart from instructive religious images is that they require a high level of literacy in order to read and enjoy, whereas religious images during the same period were particularly useful for the illiterate.  In some cases, due to the expense of printing woodcuts, emblem books would even forgo actual images and instead would include mere descriptions of the emblems being explicated.  Implicit in all of this, however, was the understanding that whatever could be said about the emblems was originally and overabundantly expressed in the images themselves, and that the accompanying text merely offered a glimpse into their hidden meanings.


Athanasius Kircher, the 17th century polymath, pursued a similar approach toward deciphering Egyptian hieroglyphs.  An interesting website dedicated to and to some extent influenced by his work can be found here.  Following the work of 19th century Egyptologists like Jean-François Champollion, we know today that Egyptian hieroglyphs alternately represent either phonetic elements or words, depending on how they are used.  In the case of cartouches, the series of symbols often found on monuments and usually placed in an oval  in order to set them apart, hieroglyphs were exclusively a phonetic alphabet used to spell out the personal names of Egyptian dignitaries.  For Kircher, however, they represented a language of images which, if not actually magical, were at least possessed of superabundant and secret meaning.  Kircher sought transcendence in his efforts to cull meaning from cartouches.  How far he fell short can be gathered from this gloss by Umberto Eco in The Search For The Perfect Language:


>Out of this passion for the occult came those attempts at decipherment which now amuse Egyptologists.  On page 557 of his Obeliscus Pamphylius, figures 20-4 reproduce the images of a cartouche to which Kircher gives the following reading: ‘the originator of all fecundity and vegetation is Osiris whose generative power bears from heaven to his kingdom the Sacred Mophtha.’  This same image was deciphered by Champollion (Lettre a Dacier, 29), who used Kircher’s own reproductions, as ‘AOTKRTA (Autocrat or Emperor) sun of the son and sovereign of the crown, Caesar Domitian Augustus’.  The difference is, to say the least, notable, especially as regards the mysterious Mophtha, figured as a lion, over which Kircher expended pages and pages of mystic exegesis listing its numerous properties, while for Champollion the lion simply stands for the Greek letter lambda.


[Image: icon of the Ladder of Divine Ascent]

Whereas Kircher’s search for transcendence requires great learning, the icon of the Ladder of Divine Ascent, shown above, is accessible to the illiterate.  Most icons in the Eastern Orthodox tradition are of saints, and are used in prayer to the saints.  Icons that depict stories, such as this icon, are somewhat rare, though there is evidence that this was in fact the prior tradition, and the earliest Christian images, found in the catacombs of Rome, typically depict stories from the Bible.  The icon of the Ladder of Divine Ascent is based on the ladder described by John Climacus in the 7th century book of the same name, and the Orthodox saint can in fact be found at the lower left corner of the image.  Climacus, in turn, borrowed his ladder from the image of a ladder that Jacob dreamed about, a ladder extending from earth to heaven.  In this icon, Christ stands at the top of the ladder, welcoming anyone who can make the full ascent.  At the bottom are monks lining up to attempt the climb, while in between we see ascetics being diverted, distracted, and pulled off of the ladder by demonic beings.  The message is fairly straightforward.  Transcendence and salvation are possible, but very difficult.  The ladder represents the journey, but also the mediation required to ascend from the chthonic to the celestial.


I find the metaphor of the ladder striking because 1) it is man-made, and 2) it is something that one steps off of when one reaches the top.  These two features explain why the talent tree depicted above could never be a talent ladder, even though both are things that one climbs.  The tree is something made of the same earthly material that it grows out of.  It reaches for the sky, but because it is not truly a mediator, it cannot allow one to step off of it, and in fact the higher one climbs, the less stable one’s purchase is.  Just as a pier is a disappointed bridge, as James Joyce indicated, a tree is a disappointed ladder.  It goes nowhere.


This, I take it, is the humor inherent in the talent tree above.  The talent tree provides a semblance of movement upwards, but ultimately disappoints.  It always provides more, but the more turns out to be more of the same.  For an interesting unpacking of this phenomenon, one could do worse than read this cautionary blog about the dangers of playing World of Warcraft:


>60 levels, 30+ epics, a few really good “real life” friends, a seat on the oldest and largest guild on our server’s council, 70+ days “/played,” and one “real” year later…

>It took a huge personal toll on me. To illustrate the impact it had, let’s look at me one year later. When I started playing, I was working towards getting into the best shape of my life (and making good progress, too). Now a year later, I’m about 30 pounds heavier than I was back then, and it is not muscle. I had a lot of hobbies including DJing (which I was pretty accomplished at) and music as well as writing and martial arts. I haven’t touched a record or my guitar for over a year and I think if I tried any Kung Fu my gut would throw my back out. Finally, and most significantly, I had a very satisfying social life before.

>These changes are miniscule, however, compared to what has happened in quite a few other people’s lives. Some background… Blizzard created a game that you simply can not win. Not only that, the only way to “get better” is to play more and more. In order to progress, you have to farm your little heart out in one way or another: either weeks at a time PvPing to make your rank or weeks at a time getting materials for and “conquering” raid instances, or dungeons where you get “epic loot” (pixilated things that increase your abilities, therefore making you “better”). And what do you do after these mighty dungeons fall before you and your friend’s wrath? Go back the next week (not sooner, Blizzard made sure you can only raid the best instances once a week) and do it again (imagine if Alexander the Great had to push across the Middle East every damn week).

 


The burden of Sisyphus is a perennial staple of humorists, and not a tragedy at all.  Consider the most famous Laurel and Hardy short, The Music Box, in which the conceit of the whole film is the two bunglers trying to move a piano to a house on top of a hill.  Perhaps the most iconic example of this sort of humor is Nigel’s amplifier from This Is Spinal Tap, which “goes to eleven”.  For Nigel, eleven is a transcendent level of amplification, while for the mock interviewer, it is just one more number.  Why not just re-calibrate the amplifier and make ten eleven?  Nigel believes that eleven transforms the amplifier into a ladder, whereas the audience recognizes that it is just a tree.


I am at a point in my life where I see trees and ladders everywhere.  For instance, the constant philosophical debates around the mind-body problem can be broken down into a simple question about whether consciousness is a tree or a ladder.  If consciousness is the complex accumulation of basically simple brain processes, then it is a tree.  If aggregating various physical processes never can achieve true consciousness, then consciousness is a ladder.  And then from these two basic theses, we can arrive at all the other combinations of mind-body solutions, for instance that it is a tree that thinks it is a ladder, or a ladder that thinks it is a tree, or that ladder and tree are simply two equivalent modes of describing the same phenomenon, depending possibly on whether one is in fact a tree or a ladder.


Science fiction plots, in turn, can be broken down into two types: those in which ladders pretend to be trees, and those in which trees pretend to be ladders.  Virtual worlds, finally, are the culmination of a historical weariness over these problems, and a consequent ambivalence about whether trees and ladders make any difference, anymore.  For those who have chosen to forgo the search for ladders, virtual worlds provide a world of trees, which simulate the experience of climbing ladders — virtual ladders, so to speak.


Having had several years of success, Blizzard, the makers of World of Warcraft, have recently released a new expansion to their online world called The Burning Crusade.  Whereas up to this point, players have been limited to a maximum level of 60, those who buy The Burning Crusade will have that ceiling lifted.  With The Burning Crusade, World of Warcraft goes to level 70.

ASP.NET AJAX 1.0 Released

It was over a year ago that I started working with a product called Microsoft Atlas.  I wanted to use it to build a rich web client for managing licensed commercial music for a cable television studio, based on the expectation that the client must work on multiple platforms (hence a web application) and at the same time provide rich functionality such as our sponsors were used to in their desktop applications (hence Ajax).  The whole time I was building it, I had the expectation that the final release was right around the corner.  In December of 2005, Atlas was rumored to have a release date sometime in the March or April range.  A year, a name change, and a few scope changes later, it has finally arrived.

The product is an example of Microsoft coming late to the party.  Building on a key bit of technology originally developed by Microsoft engineers over seven years ago, the XMLHttpRequest API, vendors like Yahoo and Google helped to develop a style of programming called Ajax that allowed web clients to talk to a web server without a page refresh.  Ajax in turn was adopted by advocates of the term Web 2.0 as one of the hallmarks of the phenomenon they wished to tout in an attempt to revitalize interest in the web as a business platform following the disaster that we now all know as the IT bubble of the ’90s.

Why Microsoft took so long to get around to it is an open question.  Very likely, they were busy getting on the web services bandwagon as a way to promote Smart Clients as their technology of choice for integrating the desktop with the web.  While very cool in its own right, it hasn’t really achieved the same mindshare that Ajax has among web developers, and so — better late than never — we now have ASP.NET AJAX to kick around, and it can be downloaded here.

In the meantime, special recognition should be given, I think, to Brent Ashley, who in 2000 came out with something he called JavaScript Remote Scripting, which used JavaScript to generate dynamic iframes in order to provide the same functionality that the XMLHttpRequest API does.  In an alternate universe, JSRS could have been the inspiration for Web 2.0.  Brent Ashley still supports his scripts here, a placeholder for his mark on the history of technology.

Secret Societies and the Internet


While driving with my family to visit an old friend the other day I caught a bit of Harry Shearer’s wrap-up of the year 2006 on Weekend Edition.  During the interview Shearer was asked what happened with the 2006 Democratic election victory, and Shearer said, yeah, what happened?  What must be going through the minds of all the people who believed that the elections were stolen in 2000 and again in 2004, the people who can point out the series of minuscule irregularities that cumulatively disenfranchised the American people of their right to vote those two previous times?  Did evil take a holiday in 2006?


Conspiracy theories are, of course, the opiate of the masses, but what happens when they are real?  And what must be happening when they disappear?  The most truly worthy conspiracies do not only control the mechanisms of power, but also the perception of power, and in doing so undermine the very Enlightenment notion that truth will set us free, since the conspirators control our perception of the truth. They are everything we like to accuse post-modernists and deconstructionists of being, with one difference — they are effective.  Any conspiracy worthy of being treated as a conspiracy, then, cannot simply disappear any more than it can make itself appear.  Everything we know about conspiracies is, a priori, false, managed, and inauthentic.  An elaborate cover story.


In the very awareness that there is falsity in the world, however, one also becomes aware that there is something being hidden from us, and behind it, eventually, truth. Or as Descartes said in the Meditations:


…hence it follows, not only that what is cannot be produced by what is not, but likewise that the more perfect, in other words, that which contains in itself more reality, cannot be the effect of the less perfect….


One assumes, perhaps erroneously, that those who feed us lies must therefore possess the Truth.  Here, then, is the dilemma for truth-seekers.  What if knowing the truth entails speaking falsehoods to the rest of the world?  We would like it not to be so, but what if the truth is so striking, so peculiar, so melancholic that the truth-seeker, despite herself, will ultimately be obliged to be mendacious once she is brought before the Truth itself, if only to protect others from what she has come to know?  And if this were not the case, then wouldn’t someone have explained the Truth to us long ago?


One solution is to step back into a sort of pragmatic stance, and judge the pursuit of conspiracy theories ultimately to be delusional in nature.  But — and here’s the rub — doesn’t this go against the evidence we have that conspiracies do in fact occur?  Worse, isn’t this the sort of delusion, isn’t this the sort of lie, that prevents people from trying to unmask these conspiracies in the first place?  Or as Baudelaire informed us,


Mes chers frères, n’oubliez jamais, quand vous entendrez vanter le progrès des lumières, que la plus belle des ruses du diable est de vous persuader qu’il n’existe pas! (My dear brothers, never forget, when you hear the progress of enlightenment vaunted, that the devil’s finest trick is to persuade you that he does not exist!)


If the conspiratorial nature of the world cannot be revealed as a truth, then it must first be revealed as a falsehood.  This is how Leo Strauss, one of the architects of modern neoconservatism, put it in his short but revealing essay, Persecution and the Art of Writing.  Because not everyone is morally, intellectually, or constitutionally prepared to receive the Truth, truth should only ever be alluded to.  Hints should be dropped, intentional errors and contradictions presented, which will lead the astute and prepared student to ask the correct questions that will eventually initiate him into the company of the elite.  The obvious question rarely raised, then, is whether Strauss ever got the students he felt he deserved.  Or were the allusions too obscure, and the paradoxes too knotted, for anyone to follow him along the royal path?


Worse yet, could Pynchon’s suggestion from The Crying of Lot 49 be correct, and the pursuers of conspiracy in the end are the ones who make conspiracies come to life, taking on the task of hiding the truth that no one initially gave to them, protecting a truth that in the end does not exist?


Yet this denies what we all know in our hearts to be true.  Conspiracies do exist, though not always in the form we imagine them to.  Take, for instance, the recent excerpts in the poetry journal Exquisite Corpse from Nick Bromell’s upcoming “The Plan,” or How Five Young Conservatives Rescued America:


 



Until now, “The Plan” has been merely a rumor. In the late 1980s,  young conservatives spent hours reverently speculating about it over drinks at “The Sign of the Indian King” on M Street, while across town frustrated young liberals in the think tanks around Dupont Circle darkly attributed every conservative victory to this mythic document.

By the mid-1990s, the myth started to fade as each succeeding triumph of the conservative movement made it increasingly improbable that any group, however brilliant, could have planned the whole campaign. Eventually people referred to “The Plan” as one might refer to the Ark or to the gunman on the grassy knoll: intriguing but fantastical.


Bromell, however, was allowed to view the notes of a historian originally commissioned to write a history of The Plan — a project eventually discarded by the people who hired him — and publishes them for the first time in this online journal, revealing both the inspiration for and the details of the secret manifesto that has guided the conservative movement for the past forty years.


There are even anachronisms and contradictions that, for me at least, do much to confirm the veracity of the source.  One that has been mentioned by other commentators on the article is the fact that the included link to the National Enterprise Initiative (the organization founded by the authors of “The Plan” and which initially commissioned the aborted history) either doesn’t work or points to a bogus search site.  For many, this indicates that the original reporting is bogus.  But the obvious question remains: why would such a supposedly elaborate fiction fail on so minor a detail as a web link?  Who doesn’t know how to post a weblink anymore?  On the other hand, is it really so remarkable that an organization that wishes to remain hidden should suddenly disappear, along with all traces of it, once an unmasking piece of journalism is published concerning it?  We are, after all, talking about the Internet, the veracity of which we all know to be dubious and mercurial, a vast palimpsest conspiracy.


Does the fact that something is absent from the Internet prove that it does not exist?


Or is it rather the case, as Neuhaus wrote in his 1623 Advertissement pieux et utile des freres de la Rosee-Croix, which demonstrated the true existence of the Rosicrucian Brotherhood, a secret society that claimed to guide the history of Europe, that had first been heard of only in 1614 in Germany, and whose existence or non-existence became the main topic of European discussion for the next quarter century:



By the very fact that they change and alter their name and that they mask their age, and that, by their own confession, they come and go without making themselves known, there is no Logician that could deny the necessity that they exist.

Surrendering to Technology


My wife pulled into the garage yesterday after a shopping trip and called me out to her car to catch the tail end of a vignette on NPR about the Theater of Memory tradition Frances Yates rediscovered in the 60’s — a subject my wife knows has been a particular interest of mine since graduate school.  The radio essayist was discussing his attempt to create his own memory theater by forming the image of a series of rooms in his mind and placing strange mnemonic creatures representing different things he wanted to remember in each of the corners.  Over time, however, he finally came to the conclusion that there was nothing in his memory theater that he couldn’t find on the Internet and, even worse, his memory theater had no search button.  Eventually he gave up on the Renaissance theater of memory tradition and replaced it with Google.


I haven’t read Yates’s The Art of Memory for a long time, but it seemed to me that the guy on the radio had gotten it wrong, somehow.  While the art of memory began as a set of techniques allowing an orator to memorize topics about which he planned to speak, often for hours, over time it became something else.  The novice rhetorician would begin by spending a few years memorizing every nook and cranny of some building until he was able to recall every aspect of the rooms simply by closing his eyes.  Next he would spend several more years learning the techniques to build mnemonic images, which he would then place at different stations of his memory theater in preparation for an oration.  The rule of thumb was that the most memorable images were also the most outrageous and monstrous.  A notable example originating in the Latin mnemonic textbook Ad Herennium uses a ram’s testicles as a placeholder for the witnesses in a lawsuit, since in Latin testis means both “witness” and “testicle.”


As a mere technique, the theater of memory waned in importance with the appearance of cheap paper as a new memory technology.  Instead of working for years to make the mind powerful enough to hold a multitude of topics, one could now simply write the topics down on paper and recall them at will.  The final demise of the theater of memory is no doubt realized in the news announcer who reads off a teleprompter, fed words to say as if they were being drawn from his own memory.  This is of course an illusion, and the announcer is merely a host for the words that flow through him.


A variation on the theater of memory not obviated by paper began to be formulated in the Renaissance in the works of men like Marsilio Ficino, Giulio Camillo, Giordano Bruno, Raymond Lull, and Peter Ramus.  Through them, the theater of memory was integrated with the Hermetic tradition, and the mental theater was transformed into something more than a mere technique for remembering words and ideas.  Instead, the Hermetic notion of the microcosm and macrocosm, and the sympathetic rules that could connect the two, became the basis for seeing the memory theater as a way to connect the individual with a world of cosmic and magical forces.  By placing objects in the memory theater that resonate with the celestial powers, the Renaissance magus was able to call upon these forces for insight and wisdom.


Since magic is not real, even these innovations are not so interesting on their own.  However, the eighteenth-century thinker Giambattista Vico, at once a rationalist and someone steeped in the traditions of Renaissance magic, recast the theater of memory one more time.  For Vico, the memory theater was not a repository for magical artifacts, but rather something formed in each of us through acculturation; it contains a knowledge of the cultural institutions, such as property rights, marriage, and burial (the images within our memory theaters), that are universal and that make culture possible.  Acculturation puts these images in our minds and makes it possible for people to live together.  As elements of our individual memory theaters, these civilizing institutions are taken to be objects in the world, when in actuality they are images buried so deeply in our memories that they exert a remarkable influence over our behavior.


Some vestige of this notion of cultural artifacts can be found in Richard Dawkins’s hypothesis about memes as units of culture.  Dawkins suggests that our thoughts are  made up, at least in part, of memes that influence our behavior in irrational but inexorable ways.  On analogy with his concept of genes as selfish replicators, he conceives of memes as things seeking to replicate themselves based on rules that are not necessarily either evident or rational.  His examples include, at the trivial end, songs that we can’t get out of our heads and, at the profound end, the concept of God.  For Dawkins, memes are not part of the hardwiring of the brain, but instead act like computer viruses attempting to run themselves on top of the brain’s hardware.


One interesting aspect of Dawkins’s interpretation of the spread of culture is that it also offers an explanation for the development of subcultures and fads.  Subcultures can be understood as communities that physically limit the available vectors for the spread of memes to certain communities, while fads can be explained away as short-lived viruses that are vital for a while but eventually waste their energies and disappear.  The increasing prevalence of visual media and the Internet, in turn, increase the number of vectors for the replication of memes, just as increased air-travel improves the ability of real diseases to spread across the world.
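

To make the epidemiological analogy concrete, here is a minimal sketch of my own (nothing Dawkins himself offers) that treats a fad as a short-lived infection, using a simple SIR-style difference equation in C.  The contact rate beta stands in for the number of available vectors, and the recovery rate gamma for hosts losing interest; all the numbers are assumptions chosen only to show the shape of the curve.

#include <stdio.h>

/* Toy model: a meme spreading like a disease through a fixed population.
 * S = fraction never exposed, I = fraction currently carrying (and spreading)
 * the meme, R = fraction who have lost interest.  Purely illustrative. */
int main(void)
{
    double S = 0.999, I = 0.001, R = 0.0;
    double beta = 0.6;   /* contact ("vector") rate: more media, more spread */
    double gamma = 0.2;  /* rate at which carriers lose interest             */

    for (int day = 0; day <= 60; day++) {
        if (day % 10 == 0)
            printf("day %2d: carriers = %.3f\n", day, I);
        double new_carriers = beta * S * I;
        double dropouts = gamma * I;
        S -= new_carriers;
        I += new_carriers - dropouts;
        R += dropouts;
    }
    /* The carrier fraction rises, peaks, and fades: the fad "wastes its
     * energies and disappears" without anyone deciding to abandon it. */
    return 0;
}

Raising beta (adding vectors) makes the peak arrive sooner and climb higher, but it does not prevent the eventual collapse, which is roughly the point about fads above.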


Dawkins describes the replication of memetic viruses in impersonal terms.  The purpose of these viruses is not to advance culture in any way, but rather simply to perpetuate themselves.  The cultural artifacts spread by these viruses are not guaranteed to improve us, no more than Darwinian evolution offers to make us better morally, culturally or intellectually.  Even to think in these terms is a misunderstanding of the underlying reality. Memes do not survive because we judge them to be valuable.  Rather, we deceive ourselves into valuing them because they survive. 


How different this is from the Renaissance conception of the memory theater, for which the theater existed to serve man, instead of man serving simply to host the theater.  Ioan Couliano, in the 80’s, attempted to disentangle Renaissance philosophy from its magical trappings to show that at its root the Renaissance manipulation of images was a proto-psychology.  The goal of the Hermeticist was to cultivate and order images in order to improve both mind and spirit.  Properly arranged, these images would help him to see the world more clearly, and allow him to live in it more deeply.


For after all, what are we but the sum of our memories?  A technique for forming and organizing these memories — for actually taking control of them instead of simply allowing them to influence us willy-nilly — such as the Renaissance Hermeticists tried to formulate, could still be of great use to us today.  Is it so preposterous that by reading literature instead of trash, by controlling the images and memories that we allow to pour into us, we can actually shape what sort of persons we are and will become?


These were the ideas that initially occurred to me when I heard the end of the radio vignette while standing in the garage.  I immediately went to the basement and pulled out Umberto Eco’s The Search For The Perfect Language, which has an excellent chapter in it called Kabbalism and Lullism in Modern Culture that seemed germane to the topic.  As I sat down to read it, however, I noticed that Doom, the movie based on a video game, was playing on HBO, so I ended up watching that on the brand new plasma TV we bought for Christmas.


The premise of the film is that a mutagenic virus (a virus that creates mutants?) is found on an alien planet that starts altering the genes of people it infects and turns them into either supermen or monsters depending on some predisposition of the infected person’s nature.  (There is even a line in the film explaining that the final ten percent of the human genome that has not been mapped is believed to be the blueprint for the human soul.)  Doom ends with “The Rock” becoming infected and having to be put down before he can finish his transformation into some sort of malign creature.  After that I pulled up the NPR website in order to do a search on the essayist who abandoned his memory theater for Google.  My search couldn’t find him.

Two Kinds of Jargon


I had taken it for granted that “Web 2.0” was simply a lot of hype until I came across this defense of the term by Kathy Sierra, by way of Steve Marx’s blog.  Kathy Sierra argues that “Web 2.0” is not simply a buzzword because it is, in fact, jargon.  She goes on to explore the notion of jargon and to explain why jargon is actually a good thing, and shamefully maligned.  This, I thought, certainly goes against the conventional wisdom.


In my various careers, I have become intimately familiar with two kinds of jargon: academic jargon and software jargon.  I will discuss academic jargon first, and see if it sheds any light on software jargon.  The English word jargon is derived from the Old French word meaning “a chattering,” for instance of birds.  It is generally used somewhat pejoratively, as in this sentence from an article by George Packer in the most recent New Yorker concerning the efforts of anthropologists to make the “war on terror” more subtle as well as more culturally savvy:



One night earlier this year, Kilcullen sat down with a bottle of single-malt Scotch and wrote out a series of tips for company commanders about to be deployed to Iraq and Afghanistan.  He is an energetic writer who avoids military and social-science jargon, and he addressed himself intimately to young captains who have had to become familiar with exotica such as “The Battle of Algiers,” the 1966 film documenting the insurgency against French colonists.


 


In this passage, jargon is understood as a possibly necessary mode of professional language that, while it facilitates communication within a professional community, makes the dissemination of ideas outside of that community of speakers difficult.


Even with this definition, however, one can see a sense in which the use of professional jargon is not a completely bad thing, but is in fact a trade-off.  It makes communication between professional communities difficult, and it makes initiation into such a community difficult (for instance, the initiation of young undergraduates into philosophical discourse).  But once one is initiated into the argot of a professional community, the special language actually facilitates communication, serving as a shorthand for much larger concepts and increasing the precision of the terms used within the community, since non-technical language tends to be ambiguous in a way that technical jargon, ideally, is not.  Take, for instance, the following passage:



The question about that structure aims at the analysis of what constitutes existence. The context of such structures we call “existentiality“. Its analytic has the character of an understanding which is not existentiell, but rather existential. The task of an existential analytic of Dasein has been delineated in advance, as regards both its possibility and its necessity, in Dasein’s ontical constitution.


 


This passage is from the beginning of Martin Heidegger’s Being and Time, as translated by John Macquarrie and Edward Robinson.  To those unfamiliar with the jargon that Heidegger develops for his existential phenomenology, it probably looks like balderdash.  One can see how, with time and through reading the rest of the work, one might eventually come to understand Heidegger’s philosophical terms.  Jargon, qua jargon, is not necessarily bad, and much of the bad rap that jargon gets is due to the resistance to comprehension and the sense of intellectual insecurity it engenders when one first encounters it.  Here is another example of jargon, pulled from a recent technical post on www.beyond3d.com called Origin of Quake3’s Fast InvSqrt():



The magic of the code, even if you can’t follow it, stands out as the i = 0x5f3759df – (i>>1); line. Simplified, Newton-Raphson is an approximation that starts off with a guess and refines it with iteration. Taking advantage of the nature of 32-bit x86 processors, i, an integer, is initially set to the value of the floating point number you want to take the inverse square of, using an integer cast. i is then set to 0x5f3759df, minus itself shifted one bit to the right. The right shift drops the least significant bit of i, essentially halving it.
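

For readers who want to see the routine the post is dissecting in full, here it is, reproduced approximately as it appears in the released Quake III source (I am quoting it from memory, so treat the exact formatting as approximate; the 0x5f3759df constant and the single Newton-Raphson refinement are the parts the passage above describes):

#include <stdio.h>

/* Fast inverse square root, approximately as in the released Quake III code.
 * The pointer casts reinterpret the float's bits as an integer and back,
 * which is how the original does it (modern compilers prefer memcpy). */
static float InvSqrt(float x)
{
    float xhalf = 0.5f * x;
    int i = *(int *)&x;              /* read the float's bit pattern as an int   */
    i = 0x5f3759df - (i >> 1);       /* the "magic" line quoted above            */
    x = *(float *)&i;                /* turn the adjusted bits back into a float */
    x = x * (1.5f - xhalf * x * x);  /* one Newton-Raphson refinement step       */
    return x;
}

int main(void)
{
    /* 1/sqrt(4) = 0.5; the approximation is good to a few decimal places. */
    printf("InvSqrt(4.0f) = %f\n", InvSqrt(4.0f));
    return 0;
}

The right shift roughly halves the float’s exponent, so subtracting the shifted bits from the magic constant yields a cheap first guess at x to the power of minus one half; the final line then sharpens that guess with a single iteration of Newton’s method.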


 


I don’t understand what the author of this passage is saying, but I do know that he is enthusiastic about it and assume that, as with the Heidegger passage, I can come to understand the gist of the argument given a week and a good reference work.  I also believe that the author is trying to say what he is saying in the most precise and concise way he is able, and this is why he resorts to one kind of  jargon to explain something that was originally written in an even more complicated technical language: a beautiful computer algorithm.


However, there is another, less benign, definition of jargon that sees its primary function not in clarifying concepts, but in obfuscating them.  According to Theodor Adorno, in his devastating and unrelenting attack on Heidegger in The Jargon of Authenticity, jargon is “a sublanguage as superior language.”  For Adorno, jargon, especially in Heidegger’s case, is an imposture and a con.  It is the chosen language of charlatans.  Rudolf Carnap makes a similar, if less brutal, point in section 5 of his “Overcoming Metaphysics”, entitled “Metaphysical Pseudo-Sentences”, where he takes Heidegger’s notorious sentence “Das Nichts selbst nichtet” (Nothingness itself nothings), from the lecture “What Is Metaphysics?”, to task for its meaninglessness.


We might be tempted to try to save jargon from itself, then, by distinguishing two kinds of jargon: good jargon and bad jargon.  The move of drawing such distinctions is at least as old as the use of jargon to clarify ideas, and goes back as far as, if not farther than, Pausanias’s distinction between the heavenly and the common Aphrodite in Plato’s Symposium.  With Pausanias we can say that the higher and the baser jargon can be distinguished, as he distinguishes two kinds of love, by the intent of the person using it.  When jargon is used to clarify ideas and make them precise, we are dealing with proper jargon.  When jargon is used, on the contrary, to obfuscate, or to make the speaker seem smarter than he really is, then it is deficient or bad jargon.


There are various Virgin and the Whore problems with this distinction, however, not the least of which is how to tell the two kinds of jargon apart.  It is in fact rather rare to find instances of bad jargon that everyone concedes is bad jargon, with the possible exception of hoaxes like the Sokal affair, in which the physicist Alan Sokal wrote a jargon-laden pseudo-paper about postmodernism and quantum mechanics and got it published in a cultural studies journal.  Normally, when certain instances of jargon are identified as “bad” jargon, we also tend to find defenders who insist that it is not, and who claim that, to the contrary, those calling it bad jargon simply do not understand it.  This is a difficulty not unlike the one a wit described when asked to define bad taste.  “Bad taste,” he said, “is the garden gnome standing in my neighbor’s front lawn.”  When asked to define good taste, the wit continued, “Good taste is that plastic pink flamingo standing in my lawn.”


There are more difficulties with trying to distinguish good jargon from bad jargon, such as cases where good jargon becomes bad over time, or even cases where bad jargon becomes good.  Cases of the latter include Schopenhauer reading a popular and apparently largely incorrect account of Indian philosophy and absorbing it into his own very insightful and influential philosophical project.  Georges Bataille’s misreading of Hegel and Jacques Lacan’s misreading of Freud also bore impressive fruit.  Finally, there is the (probably apocryphal) story of the student of Italian who approached T.S. Eliot and began asking him about his peculiar and sometimes incorrect use of Italian in his poetry, until Eliot finally broke off the conversation with the admission, “Okay, you caught me.”  Cases such as these undermine the common belief that it is intent, or origins, which makes a given jargon good or bad.


The opposite can, of course, also happen.  Useful jargon may, over time, become bad and obfuscating.  We might then say that while the terms used in Phenomenology proper are difficult but informative, they were corrupted when Heidegger took them up in his Existential-Phenomenology, or we might say that Heidegger’s jargon is useful but later philosophers influenced by his philosophy such as Derrida and the post-structuralists corrupted it, or finally we might even say that Derrida got it right but his epigones in America were the ones who ultimately turned his philosophical insights into mere jargon.  This phenomenon is what I take Martin Fowler to be referring to in his short bliki defense of the terms Web 2.0 and Agile entitled Semantic Diffusion.  According to Fowler:



Semantic diffusion occurs when you have a word that is coined by a person or group, often with a pretty good definition, but then gets spread through the wider community in a way that weakens that definition. This weakening risks losing the definition entirely – and with it any usefulness to the term.


Thus Fowler takes up Kathy Sierra’s defense of Web 2.0 as jargon, recognizes some weaknesses in this explanation, and then fortifies the defense of the term with the further explanation that while the term may be problematic now, it was right in its origins, and pure in its intent.


Fowler here makes a remarkably Heideggerian observation.  Heidegger was somewhat obsessed with language and believed that language tends, over time, to hide and obfuscate meaning, when it should rather shed light on things.  In this vein, Being and Time begins with the claim that we today no longer understand the meaning of Being, and that this forgetting is so thorough that we are no longer even aware of the absence of understanding, so that even the question “What is Being?”, which should be the most important question for us, is for the most part ignored and overlooked.  To even begin understanding Being, then, we must first try to understand the meaning of the question of Being.  We must first come to the realization that there is a problem there in the first place which needs to be resolved.  Heidegger’s chosen solution to this problem involves the claim that while language conceals meaning, it is also, in its origins, able to reveal it, if we can come to understand language correctly.  He gives an example with the term aletheia, which in Greek means truth.  Etymologically, aletheia means un-forgetting or un-concealment (thus the river Lethe is, in Greek mythology, the river of forgetfulness from which the dead drink before resting in Hades), and so truth is implicitly an unconcealment that recovers the meanings already latent in language.  By getting back to the origins of language and the experience of language, we can recover aletheia.  The authentic meaning of jargon, Fowler similarly claims, can be arrived at if we remove the accretions caused by “semantic diffusion” and get back to the original intent.


But is this true?  Do apologetics for terms such as “Web 2.0” and “Agile” insisting that they are “jargon” ultimately succeed?  Do such attempts reveal the original intent implicit in the coining of these terms or do they simply conceal the original meanings even further?


My personal opinion is that jargon, by its nature, never really reveals but always, in one way or another, by condensing thought and providing a shorthand for ideas, conceals.  It can of course be useful, but it is never instructive; rather, it gives us a sense that we understand things we do not actually understand, simply because we know how to use a given jargon.  At best, jargon can be used as an indicator that points to a complex of ideas shared by a given community.  At worst, it is used as shorthand for bad or incoherent ideas that never themselves get critical treatment, because the jargon takes the place of ideas and becomes mistaken for ideas.  This seems particularly to be the case with the defense of “Web 2.0” and “Agile” as “jargon”, as if people had a problem with the terms themselves rather than with what they stand for.  “Jargon”, as a technical term, is not particularly useful.  It is to some extent already corrupt from the get-go.


One way around this might be to simply stop using the term “jargon”, whether bad or good, when discussing things like Web 2.0 and Agile.  While it is common in English to use Latin derived terms for technical language and Anglo-Saxon words for common discourse, in this case we might be obliged to make the reverse movement as we look for an adequate replacement term for “jargon”.


In 2005, the Princeton philosopher Harry Frankfurt published a popular pamphlet called On Bullshit that attempts to give a philosophical explanation of the term.  At first blush, the title may seem somewhat prejudicial, but I think that, as with jargon, if we set aside preconceived notions as to whether bullshit is a good or a bad thing, it will give us a fresh look at the term we are currently trying to evaluate, “Web 2.0”.  It can also be used most effectively if we do the opposite of what we did with “jargon”: there, jargon was first taken to appropriately describe the term “Web 2.0”, and only then was an attempt made to understand what jargon actually was.  In this case, I want first to understand what bullshit is, and then see whether it applies to “Web 2.0”.


Frankfurt begins his analysis with a brief survey of the literature on bullshit, which includes Max Black’s study of “humbug” and Augustine of Hippo’s analysis of lying.  From these, he concludes that bullshit and lying are different things, and as a preliminary conclusion, that bullshit falls just short of lying.  Moreover, he points out that it is all pervasive in a way that lying could never be.



The realms of advertising and of public relations, and the nowadays closely related realm of politics, are replete with instances of bullshit so unmitigated that they can serve among the most indisputable and classic paradigms of the concept.


Not satisfied with this preliminary explanation, however, Frankfurt identifies further elements that characterize bullshit, since there are many things that can fall short of a lie and yet, perhaps, not rise to the level of bullshit.  He then identifies inauthenticity as the hallmark that distinguishes bullshit from lies, on the one hand, and simple errors of fact, on the other.



For the essence of bullshit is not that it is false but that it is phony. In order to appreciate this distinction, one must recognize that a fake or a phony need not be in any respect (apart from authenticity itself) inferior to the real thing. What is not genuine need not also be defective in some other way. It may be, after all, an exact copy. What is wrong with a counterfeit is not what it is like, but how it was made.


 


It is not what a bullshitter says, then, that marks him as a bullshitter, but rather his state of mind when he says it.  For Frankfurt, bullshit doesn’t really even belong on the same continuum with truth and falsehood, but is rather opposed to both.  Like the third host of angels in Dante’s Inferno, who sided neither with God nor with Lucifer, it is indifference to the struggle that ultimately identifies and marks out the class of bullshitters.


Again, there are echoes of Heidegger here.  According to Heidegger, we are all characterized by “thrownness”, which is the essence of our “Being-in-the-World”.  In our thrownness, we do not recognize ourselves as ourselves, but rather as das Man, or the they-self,



which we distinguish from the authentic Self – that is, from the Self which has been taken hold of in its own way [eigens ergriffenen]. As they-self, the particular Dasein has been dispersed into the ‘they’, and must first find itself.” And further “If Dasein discovers the world in its own way [eigens] and brings it close, if it discloses to itself its own authentic Being, then this discovery of the ‘world’ and this disclosure of Dasein are always accomplished as a clearing-away of concealments and obscurities, as a breaking up of the disguises with which Dasein bars its own way.


The main difference between Frankfurt’s and Heidegger’s analyses of authenticity, in this case, is that Frankfurt seems to take authenticity as normative, whereas for Heidegger inauthenticity, the dispersal into the they-self, is the zero-point state of man when we are first thrown into the world.


For now, however, the difference isn’t all that important.  What is important is Frankfurt’s conclusion about the sources of bullshit.  At the end of his essay, Frankfurt in effect writes that there are two kinds of bullshit, one of which is defensible and one of which is not.  The indefensible kind of bullshit is based on a subjectivist view of the world which denies truth and falsity altogether (and here I take Frankfurt to be making a not too veiled attack on the relativistic philosophical disciplines that are based on Heidegger’s work).  The defensible form of bullshit — I hesitate to call it good bullshit — is grounded in the character of our work lives, which force us to work with and represent information that is by its nature too complex for us to digest and promulgate accurately.  This, I take it, is the circumstance academic lecturers and others frequently find themselves in, as they stand behind the podium, obliged to talk authoritatively about subjects of which they do not feel able to give a thorough, much less an authentic, account.



Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about. Thus the production of bullshit is stimulated whenever a person’s obligations or opportunities to speak about some topic are more excessive than his knowledge of the facts that are relevant to that topic.


 


This class of speech is the result of our inability to apply Wittgenstein’s dictum, “Whereof one cannot speak, thereof one must be silent.”  There are times when we are not in a position to remain silent, and so are obligated to bullshit.  Bullshit, in these cases, is a way of making the best of our situation.


Per the original arrangement, it is now time to put “bullshit” to the test and see whether either cynical bullshit or benign bullshit can be ascribed to the term “Web 2.0”.  For better or worse, I am going to use Jeffrey Zeldman’s blog post on Web 2.0 (titled, confusingly enough, “Web 3.0”) as the main text for this analysis.  Zeldman is generally sympathetic to the ideas and phenomena “Web 2.0” is meant to encompass, but he also points out the aspects of the term that grate.  The most salient is the degree to which it smells like a sales pitch.


 



It soon appeared that “Web 2.0” was not only bigger than the Apocalypse but also more profitable. Profitable, that is, for investors like the speaker. Yet the new gold rush must not be confused with the dot-com bubble of the 1990s:

“Web 1.0 was not disruptive. You understand? Web 2.0 is totally disruptive. You know what XML is? You’ve heard about well-formedness? Okay. So anyway—”

And on it ran, like a dentist’s drill in the Gulag.


Zeldman associates Web 2.0 with marketing, which Frankfurt in turn associates with bullshit.  Frankfurt even goes so far as to identify sales and its related disciplines as “the most indisputable and classic paradigms of the concept.”  Moreover, the defense that Web 2.0 describes a real phenomenon, as Fowler insists and Zeldman grants, doesn’t make it not bullshit, since Frankfurt concedes that bullshit can just as well be true as false.  What is important is the authenticity or inauthenticity of the original claim, and the sense that something is a sales pitch is already an indication that something inauthentic is going on.  So “Web 2.0” certainly meets Frankfurt’s criteria for bullshit.


The more important question is what kind of bullshit is it?  Is it benign, or cynical?  According to Frankfurt’s distinction, again, the difference is whether the bullshit is grounded in the nature of one’s work or rather in some sort of defect of epistemic character.


Here the answer is not so simple, I think, since software has two strands, one going back to the hobbyist roots of programming, and the other to the monetizing potential of information technology.  Moreover, both strands struggle within the heart of the software engineering industry, with the open source movement on the one hand (often cited as one key aspect of Web 2.0) emblematic of the purist strand, and the advertising prospects on the other (with Google in the vanguard, often cited as a key exemplar of the Web 2.0 phenomenon) symbolic of the notion that a good idea isn’t enough — one also has to be able to sell one’s ideas.


Software programming, in its origins, is a discipline practiced by nerds.  In other words, it is esoteric knowledge, extremely powerful, practiced by a few and generally misunderstood by the majority of people.  As long as there is no desire to explain the discipline to outsiders, there is no problem with treating software programming as a hobby.  At some point, however, every nerd wants to be appreciated by people who are not his peers, and to accomplish this, he is forced to explain himself and ultimately to sell himself.  The turning point for this event is well documented, and occurred on February 3rd, 1976, when Bill Gates wrote an open letter to the hobbyist community stating that software had economic value and that it was time for people to start paying for it.


This was a moment of triumph for nerds everywhere, though it was not at first understood as such, and it still generates resentment to this day, because it irrevocably transformed the nature of software programming.  Once software was recognized as something of economic value, it also became clear that software concepts now had to be marketed.  The people who buy software are typically unable to distinguish good software from bad software, and so it becomes the responsibility of those who can to try to explain why their software is better in terms that are not, essentially, technical.  Instead, a hybrid, jargon-ridden set of terms had to be created in order to bridge the gap between software and the business appetite for software.  Software engineers, in turn, learned to see selling themselves, to consumers, to their managers, and finally to their peers, as part of the job of software engineering — though at the same time, this forced obligation to sell themselves continues to be regarded with suspicion and resentment.  The hope held out to such people is that through software they will eventually be able to make enough money, as Bill Gates did, as Steve Jobs did, to finally give up the necessity of selling themselves and return to a pure hobbyist state of mind once again.  They in effect want to be both the virgin and the whore.  This is, of course, a pipe dream.


Consequently, trying to determine whether Web 2.0 is benign bullshit or cynical bullshit is difficult, since sales both is and is not an authentic aspect of the work of software engineering.  What seems to be the case is that Web 2.0 is a hybrid of benign and cynical bullshit.  This schizophrenic character is captured in the notion of Web 2.0 itself, which is at the same time a sales pitch as well as an umbrella term for a set of contemporary cultural phenomena.


Now that we know what bullshit is, and we know that Web 2.0 is bullshit, it is time to evaluate what Web 2.0 is.  In Tim O’Reilly’s original article introducing the notion, called appropriately What Is Web 2.0, O’Reilly suggests several key points that he sees as typical of the sorts of things going on over the past year at companies such as Google, Flickr, YouTube and Wikipedia.  These observations include such slogans as “Harnessing Collective Intelligence”, “Data is the Next Intel Inside” and “End of the Software Release Cycle”.  But it is worth asking whether these really tell us what Web 2.0 is, or whether they are simply ad hoc attempts to give examples of what O’Reilly says is a common phenomenon.  When one asks for the meaning of a term such as Web 2.0, what one really wants is the original purpose behind coining the term.  What is implicit in the term Web 2.0, as Heidegger might put it, that is at the same time concealed by the language typically used to explain it?


As Zeldman points out, one key (and I think the main key) to understanding Web 2.0 is that it isn’t Web 1.0.  The rise of the web was marked by a rise in bluff and marketing that created what we now look back on as the Internet Bubble.  The Internet Bubble, in turn, was built on marketing hype and a most remarkable stream of jargon, used to pump up a technology that, in the end, could not sustain the weight of expectation with which it was overloaded.  By 2005, the bad reputation that had accrued to the Web from those earlier mistakes had generated a cynicism about the new things coming along that really were worthwhile — such as the blogging phenomenon, Ajax, Wikipedia, Google, Flickr and YouTube.  In order to overcome the cynicism, O’Reilly coined a term that successfully distracted people from the earlier debacle and helped to make the Internet a place to invest money once again.  Tim O’Reilly, even if his term is bullshit, as we have already demonstrated above, ultimately has done us all a service by clearing out all the previous bullshit.  In a very Heideggerian manner, he made a clearing [Lichtung] for the truth to appear.  He created, much as Heidegger attempted to do for Being, a conceptual space in which new ideas about the Internet could make themselves apparent.


Or perhaps my analogy is overwrought.  In any case, the question still remains as to what one does with terms that have outlived their usefulness.  In his introduction to Existentialism and Human Emotions, Jean-Paul Sartre describes the status the term “existentialism” had achieved by 1957.



Someone recently told me of a lady who, when she let slip a vulgar word in a moment of irritation, excused herself by saying, “I guess I’m becoming an existentialist.”



Most people who use the word would be rather embarrassed if they had to explain it, since, now that the word is all the rage, even the work of a musician or painter is being called existentialist.  A gossip columnist in Clartes signs himself The Existentialist, so that by this time the word has been so stretched and has taken on so broad a meaning, that it no longer means anything at all.


 


Sartre spent most of the rest of his philosophical career refining and defending the term “existentialism,” until it was finally superseded by post-structuralism in France.  The term enjoyed a second life in America, until post-structuralism made the Atlantic crossing and superseded it there as well, only in turn to be treated first with skepticism, then with hostility, and finally as mere jargon.  It is only over time that an intellectual clearing can be made in which to re-examine these concepts.  In the meantime, taking a cue from Wittgenstein, we are obliged to remain silent about them.

Authentically Virtual