Speech Recognition And Synthesis Managed APIs In Windows Vista: Part II


Playing with the speech synthesizer is a lot of fun for about five minutes (ten if you have both Microsoft Anna and Microsoft Lila to work with)  — but after typing “Hello World” into your Speechpad document for the umpteenth time, you may want to do something a bit more challenging.  If you do, then it is time to plug in your expensive microphone, since speech recognition really works best with a good expensive microphone.  If you don’t have one, however, then go ahead and plug in a cheap microphone.  My cheap microphone seems to work fine.  If you don’t have a cheap microphone, either, I have heard that you can take a speaker and plug it into the mic jack of your computer, and if that doesn’t cause an explosion, you can try talking into it.


While speech synthesis may be useful for certain specialized applications, voice commands, by contrast, are a feature that can enrich almost any existing WinForms application.  With the SR managed API, they are also easy to implement once you understand a few concepts, such as the Grammar class and the SpeechRecognitionEngine.


We will begin by declaring a local instance of the speech engine and initializing it. 

#region Local Members

private SpeechSynthesizer synthesizer = null;
private string selectedVoice = string.Empty;
private SpeechRecognitionEngine recognizer = null;

#endregion

public Main()
{
    InitializeComponent();
    synthesizer = new SpeechSynthesizer();
    LoadSelectVoiceMenu();
    recognizer = new SpeechRecognitionEngine();
    InitializeSpeechRecognitionEngine();
}

private void InitializeSpeechRecognitionEngine()
{
    recognizer.SetInputToDefaultAudioDevice();
    Grammar customGrammar = CreateCustomGrammar();
    recognizer.UnloadAllGrammars();
    recognizer.LoadGrammar(customGrammar);
    recognizer.SpeechRecognized +=
        new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);
    recognizer.SpeechHypothesized +=
        new EventHandler<SpeechHypothesizedEventArgs>(recognizer_SpeechHypothesized);
}

private Grammar CreateCustomGrammar()
{
    GrammarBuilder grammarBuilder = new GrammarBuilder();
    grammarBuilder.Append(new Choices("cut", "copy", "paste", "delete"));
    return new Grammar(grammarBuilder);
}


The speech recognition engine is the main workhorse of the speech recognition functionality.  At one end, we configure the input device that the engine will listen on.  In this case, we use the default device (whatever you have plugged in), though we can also select other inputs, such as specific wave files.  At the other end, we capture two events thrown by our speech recognition engine.  As the engine attempts to interpret the incoming sound stream, it will throw various “hypotheses” about what it thinks is the correct rendering of the speech input.  When it finally determines the correct value, and matches it to a value in the associated grammar objects, it throws a speech recognized event, rather than a speech hypothesized event.  If the determined word or phrase does not have a match in any associated grammar, a speech recognition rejected event (which we do not use in the present project) will be thrown instead.
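
A quick aside before moving on: two of the variations just mentioned do not appear in the Speechpad code, so here is a rough, hypothetical sketch of what they would look like.  The wave file path is invented for illustration, and the rejected-event handler simply shows how the hook is wired up.

private void InitializeSpeechRecognitionEngineFromFile()
{
    // listen to a recording instead of the default audio device (path is illustrative)
    recognizer.SetInputToWaveFile(@"C:\temp\commands.wav");
    recognizer.LoadGrammar(CreateCustomGrammar());
    // raised when an utterance does not match any loaded grammar
    recognizer.SpeechRecognitionRejected +=
        new EventHandler<SpeechRecognitionRejectedEventArgs>(recognizer_SpeechRecognitionRejected);
}

private void recognizer_SpeechRecognitionRejected(object sender, SpeechRecognitionRejectedEventArgs e)
{
    // ignore the utterance, or surface a gentle hint to the user
}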


In between, we set up rules to determine which words and phrases will throw a speech recognized event by configuring a Grammar object and associating it with our instance of the speech recognition engine.  In the sample code above, we configure a very simple rule which states that a speech recognized event will be thrown if any of the following words: “cut“, “copy“, “paste“, and “delete“, is uttered.  Note that we use a GrammarBuilder class to construct our custom grammar, and that the syntax of the GrammarBuilder class closely resembles the syntax of the StringBuilder class.
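
Purely to illustrate that StringBuilder-like Append syntax, here is a slightly richer (and entirely optional) grammar that prefixes the four commands with a carrier word; nothing in the Speechpad project depends on it.

private Grammar CreateVerboseGrammar()
{
    // matches phrases such as "speechpad cut" or "speechpad paste"
    GrammarBuilder grammarBuilder = new GrammarBuilder();
    grammarBuilder.Append("speechpad");
    grammarBuilder.Append(new Choices("cut", "copy", "paste", "delete"));
    return new Grammar(grammarBuilder);
}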


This is the basic code for enabling voice commands for a WinForms application.  We will now enhance the Speechpad application by adding a menu item to turn speech recognition on and off,  a status bar so we can watch as the speech recognition engine interprets our words, and a function that will determine what action to take if one of our key words is captured by the engine.


Add a new menu item labeled “Speech Recognition” under the “Speech” menu item, below “Read Selected Text” and “Read Document”.  For convenience, name it speechRecognitionMenuItem.  Add a handler to the new menu item, and use the following code to turn speech recognition on and off, as well as toggle the speech recognition menu item.  Besides the RecognizeAsync() method that we use here, it is also possible to start the engine synchronously or, by passing it a RecognizeMode.Single parameter, cause the engine to stop after the first phrase it recognizes. The method we use to stop the engine, RecognizeAsyncStop(), is basically a polite way to stop the engine, since it will wait for the engine to finish any phrases it is currently processing before quitting. An impolite method, RecognizeAsyncCancel(), is also available — to be used in emergency situations, perhaps.

private void speechRecognitionMenuItem_Click(object sender, EventArgs e)
{
    if (this.speechRecognitionMenuItem.Checked)
    {
        TurnSpeechRecognitionOff();
    }
    else
    {
        TurnSpeechRecognitionOn();
    }
}

private void TurnSpeechRecognitionOn()
{
    recognizer.RecognizeAsync(RecognizeMode.Multiple);
    this.speechRecognitionMenuItem.Checked = true;
}

private void TurnSpeechRecognitionOff()
{
    if (recognizer != null)
    {
        recognizer.RecognizeAsyncStop();
        this.speechRecognitionMenuItem.Checked = false;
    }
}
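
For reference, the single-phrase alternatives mentioned above are not used in Speechpad, but a sketch of the blocking variant might look like this (the asynchronous equivalent is simply recognizer.RecognizeAsync(RecognizeMode.Single)).  This assumes the continuous asynchronous session is not already running when it is called.

private void RecognizeSinglePhrase()
{
    // a blocking call: returns once a single phrase has been recognized,
    // or null if nothing is recognized before the engine times out
    RecognitionResult result = recognizer.Recognize();
    if (result != null)
    {
        MessageBox.Show(result.Text);
    }
}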


We are actually going to use the RecognizeAsyncCancel() method now, since there is an emergency situation. The speech synthesizer, it turns out, cannot operate if the speech recognizer is still running. To get around this, we will need to disable the speech recognizer at the last possible moment, and then reactivate it once the synthesizer has completed its tasks. We will modify the ReadAloud() method to handle this.


private void ReadAloud(string speakText)
{
    try
    {
        SetVoice();
        recognizer.RecognizeAsyncCancel();
        synthesizer.Speak(speakText);
        recognizer.RecognizeAsync(RecognizeMode.Multiple);
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}

The user now has the ability to turn speech recognition on and off.  We can make the application more interesting by capturing the speech hypothesized event and displaying the results in a status bar on the Main form.  Add a StatusStrip control to the Main form, and a ToolStripStatusLabel to the StatusStrip with its Spring property set to true.  For convenience, call this label toolStripStatusLabel1.  Use the following code to handle the speech hypothesized event and display the results:

private void recognizer_SpeechHypothesized(object sender, SpeechHypothesizedEventArgs e)
{
    GuessText(e.Result.Text);
}

private void GuessText(string guess)
{
    toolStripStatusLabel1.Text = guess;
    this.toolStripStatusLabel1.ForeColor = Color.DarkSalmon;
}


Now that we can turn speech recognition on and off, as well as capture misinterpretations of the input stream, it is time to capture the speech recognized event and do something with it.  The SpeechToAction() method will evaluate the recognized text and then call the appropriate method in the child form (these methods are accessible because we scoped them internal in the Textpad code in Part I).  In addition, we display the recognized text in the status bar, just as we did with hypothesized text, but in a different color in order to distinguish the two events.


private void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
{
    string text = e.Result.Text;
    SpeechToAction(text);
}

private void SpeechToAction(string text)
{
    TextDocument document = ActiveMdiChild as TextDocument;
    if (document != null)
    {
        DetermineText(text);

        switch (text)
        {
            case "cut":
                document.Cut();
                break;
            case "copy":
                document.Copy();
                break;
            case "paste":
                document.Paste();
                break;
            case "delete":
                document.Delete();
                break;
        }
    }
}

private void DetermineText(string text)
{
    this.toolStripStatusLabel1.Text = text;
    this.toolStripStatusLabel1.ForeColor = Color.SteelBlue;
}


Now let’s take Speechpad for a spin.  Fire up the application and, if it compiles, create a new document.  Type “Hello world.”  So far, so good.  Turn on speech recognition by selecting the Speech Recognition item under the Speech menu.  Highlight “Hello” and say the following phrase into your expensive microphone, inexpensive microphone, or speaker: delete.  Now type “Save the cheerleader, save the”.  Not bad at all.

Speech Recognition And Synthesis Managed APIs In Windows Vista: Part I




VistaSpeechAPIDemo.zip – 45.7 Kb


VistaSpeechAPISource.zip – 405 Kb


Introduction


One of the coolest features to be introduced with Windows Vista is the new built-in speech recognition facility.  To be fair, it has been there in previous versions of Windows, but not in the useful form in which it is now available.  Best of all, Microsoft provides a managed API with which developers can start digging into this rich technology.  For a fuller explanation of the underlying technology, I highly recommend the Microsoft whitepaper. This tutorial will walk you through building a common text pad application, which we will then trick out with a speech synthesizer and a speech recognizer using the .NET managed API wrapper for SAPI 5.3. By the end of this tutorial, you will have a working application that reads your text back to you, obeys your voice commands, and takes dictation. But first, a word of caution: this code will only work with Visual Studio 2005 installed on Windows Vista. It does not work on XP, even with .NET 3.0 installed.

Background


Because Windows Vista has only recently been released, there are, as of this writing, several extant problems relating to developing on the platform.  The biggest hurdle is that there are known compatibility problems between Visual Studio and Vista.  Visual Studio .NET 2003 is not supported on Vista, and there are currently no plans to resolve any compatibility issues there.  Visual Studio 2005 is supported, but in order to get it working well, you will need to make sure you also install Service Pack 1 for Visual Studio 2005.  After this, you will also need to install a beta update for Vista called, somewhat confusingly, “Visual Studio 2005 Service Pack 1 Update for Windows Vista Beta”.  Even after doing all this, you will find that all the cool new assemblies that come with Vista, such as the System.Speech assembly, still do not show up in your Add References dialog in Visual Studio.  If you want them to show up, you will finally need to add a registry entry indicating where the Vista DLLs are to be found.  Open the Vista registry UI by running regedit.exe from the Vista search bar.  Add the following registry key, HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\AssemblyFolders\v3.0 Assemblies, with this value: C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0. (You can also install it under HKEY_CURRENT_USER, if you prefer.)  Now we are ready to start programming in Windows Vista.
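
If you would rather script that last step than edit the registry by hand, a few lines of C# (run from an elevated process) should create the same key and default value.  This is just a convenience sketch of the manual instructions above, using the Microsoft.Win32 registry classes.

using Microsoft.Win32;

// create the AssemblyFolders key so the v3.0 reference assemblies
// show up in the Visual Studio Add References dialog
RegistryKey key = Registry.LocalMachine.CreateSubKey(
    @"SOFTWARE\Microsoft\.NETFramework\AssemblyFolders\v3.0 Assemblies");
// the (Default) value points at the folder containing the assemblies
key.SetValue("", @"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0");
key.Close();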

Before working with the speech recognition and synthesis functionality, we need to prepare the ground with a decent text pad application to which we will add on our cool new toys. Since this does not involve Vista, you do not really have to follow through this step in order to learn the speech recognition API.  If you already have a good base application, you can skip ahead to the next section, Speechpad, and use the code there to trick out your app.  If you do not have a suitable application at hand, but also have no interest in walking through the construction of a text pad application, you can just unzip the source code linked above and pull out the included Textpad project.  The source code contains two Visual Studio 2005 projects, the Textpad project, which is the base application for the SR functionality, and Speechpad, which includes the final code.


All the same, for those with the time to do so, I feel there is much to gain from building an application from the ground up. The best way to learn a new technology is to use it oneself and to get one’s hands dirty, as it were, since knowledge is always more than simply knowing that something is possible; it also involves knowing how to put that knowledge to work. We know by doing, or as Giambattista Vico put it, verum et factum convertuntur.


Textpad


Textpad is an MDI application containing two forms: a container, called Main.cs, and a child form, called TextDocument.cs.  TextDocument.cs, in turn, contains a RichTextBox control.


Create a new project called Textpad.  Add the “Main” and “TextDocument” forms to your project.  Set the IsMdiContainer property of Main to true.  Add a MainMenu control and an OpenFileDialog control (name it “openFileDialog1”) to Main.  Set the Filter property of the OpenFileDialog to “Text Files | *.txt”, since we will only be working with text files in this project.  Add a RichTextBox control to “TextDocument”, name it “richTextBox1”; set its Dock property to “Fill” and its Modifiers property to “Internal”.


Add a MenuItem control to MainMenu called “File” by clicking on the MainMenu control in Designer mode and typing “File” where the control prompts you to “type here”.  Set the File item’s MergeType property to “MergeItems”. Add a second MenuItem called “Window“.  Under the “File” menu item, add three more Items: “New“, “Open“, and “Exit“.  Set the MergeOrder property of the “Exit” control to 2.  When we start building the “TextDocument” form, these merge properties will allow us to insert menu items from child forms between “Open” and “Exit”.


Set the MDIList property of the Window menu item to true.  This automatically allows it to keep track of your various child documents during runtime.


Next, we need some operations that will be triggered by our menu commands.  The NewMDIChild() function creates a new instance of the TextDocument form as a child of the Main container.  OpenFile() uses the OpenFileDialog control to retrieve the path to a text file selected by the user, then uses a StreamReader to extract the text of the file (make sure you add a using declaration for System.IO at the top of your form).  It then calls an overloaded version of NewMDIChild() that takes the file name, displays it as the current document name, and injects the text from the source file into the RichTextBox control of the current TextDocument object.  The Exit() method closes our Main form.  Add handlers for the File menu items (by double clicking on them) and have each handler call the appropriate operation: NewMDIChild(), OpenFile(), or Exit().  That takes care of your Main form.

#region Main File Operations

private void NewMDIChild()
{
    NewMDIChild("Untitled");
}

private void NewMDIChild(string filename)
{
    TextDocument newMDIChild = new TextDocument();
    newMDIChild.MdiParent = this;
    newMDIChild.Text = filename;
    newMDIChild.WindowState = FormWindowState.Maximized;
    newMDIChild.Show();
}

private void OpenFile()
{
    try
    {
        openFileDialog1.FileName = "";
        DialogResult dr = openFileDialog1.ShowDialog();
        if (dr == DialogResult.Cancel)
        {
            return;
        }
        string fileName = openFileDialog1.FileName;
        using (StreamReader sr = new StreamReader(fileName))
        {
            string text = sr.ReadToEnd();
            NewMDIChild(fileName, text);
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}

private void NewMDIChild(string filename, string text)
{
    NewMDIChild(filename);
    LoadTextToActiveDocument(text);
}

private void LoadTextToActiveDocument(string text)
{
    TextDocument doc = (TextDocument)ActiveMdiChild;
    doc.richTextBox1.Text = text;
}

private void Exit()
{
    Dispose();
}

#endregion


To the TextDocument form, add a SaveFileDialog control, a MainMenu control, and a ContextMenuStrip control (set the ContextMenuStrip property of richTextBox1 to this new ContextMenuStrip).  Set the SaveFileDialog’s DefaultExt property to “txt” and its Filter property to “Text File | *.txt”.  Add “Cut”, “Copy”, “Paste”, and “Delete” items to your ContextMenuStrip.  Add a “File” menu item to your MainMenu, and then “Save”, “Save As”, and “Close” menu items under the “File” menu item.  Set the MergeType for “File” to “MergeItems”. Set the MergeType properties of “Save”, “Save As” and “Close” to “Add”, and their MergeOrder properties to 1.  This creates a nice effect in which the File menu of the child MDI form merges with the parent File menu.


The following methods will be called by the handlers for each of these menu items: Save(), SaveAs(), CloseDocument(), Cut(), Copy(), Paste(), Delete(), and InsertText(). Please note that the last five methods are scoped as internal, so they can be called by the parent form. This will be particularly important as we move on to the Speechpad project.


#region Document File Operations

private void SaveAs(string fileName)
{
    try
    {
        saveFileDialog1.FileName = fileName;
        DialogResult dr = saveFileDialog1.ShowDialog();
        if (dr == DialogResult.Cancel)
        {
            return;
        }
        string saveFileName = saveFileDialog1.FileName;
        Save(saveFileName);
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}

private void SaveAs()
{
    string fileName = this.Text;
    SaveAs(fileName);
}

internal void Save()
{
    string fileName = this.Text;
    Save(fileName);
}

private void Save(string fileName)
{
    string text = this.richTextBox1.Text;
    Save(fileName, text);
}

private void Save(string fileName, string text)
{
    try
    {
        using (StreamWriter sw = new StreamWriter(fileName, false))
        {
            sw.Write(text);
            sw.Flush();
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}

private void CloseDocument()
{
    Dispose();
}

internal void Paste()
{
    try
    {
        IDataObject data = Clipboard.GetDataObject();
        if (data.GetDataPresent(DataFormats.Text))
        {
            InsertText(data.GetData(DataFormats.Text).ToString());
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}

internal void InsertText(string text)
{
    RichTextBox theBox = richTextBox1;
    theBox.SelectedText = text;
}

internal void Copy()
{
    try
    {
        RichTextBox theBox = richTextBox1;
        Clipboard.Clear();
        Clipboard.SetDataObject(theBox.SelectedText);
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}

internal void Cut()
{
    Copy();
    Delete();
}

internal void Delete()
{
    richTextBox1.SelectedText = string.Empty;
}

#endregion


Once you hook up your menu item event handlers to the methods listed above, you should have a rather nice text pad application. With our base prepared, we are now in a position to start building some SR features.


Speechpad


Add a reference to the System.Speech assembly to your project.  You should be able to find it in C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\.  Add using declarations for System.Speech, System.Speech.Recognition, and System.Speech.Synthesis to your Main form. The top of your Main.cs file should now look something like this:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using System.IO;
using System.Speech;
using System.Speech.Synthesis;
using System.Speech.Recognition;

In design view, add two new menu items to the main menu in your Main form, labeled “Select Voice” and “Speech”.  For easy reference, name the first item selectVoiceMenuItem.  We will use the “Select Voice” menu to list the synthetic voices that are available for reading Speechpad documents.  To list the voices programmatically, use the three methods in the code sample below.  LoadSelectVoiceMenu() loops through all the voices installed on the operating system and creates a new menu item for each.  voiceMenuItem_Click() is simply a handler that passes the click event on to the SelectVoice() method. SelectVoice() handles the toggling of the voices we have added to the “Select Voice” menu.  Whenever a voice is selected, all others are deselected.  If all voices are deselected, then we default to the first one.


Now that we have gotten this far, I should mention that all this trouble is a little silly if there is only one synthetic voice available, as there is when you first install Vista. Her name is Microsoft Anna, by the way. If you have Vista Ultimate or Vista Enterprise, you can use the Vista Updater to download an additional voice, named Microsoft Lila, which is contained in the Simplified Chinese MUI.  She has a bit of an accent, but I am coming to find it rather charming.  If you don’t have one of the high-end flavors of Vista, however, you might consider leaving the voice selection code out of your project.


private void LoadSelectVoiceMenu()
{
    foreach (InstalledVoice voice in synthesizer.GetInstalledVoices())
    {
        MenuItem voiceMenuItem = new MenuItem(voice.VoiceInfo.Name);
        voiceMenuItem.RadioCheck = true;
        voiceMenuItem.Click += new EventHandler(voiceMenuItem_Click);
        this.selectVoiceMenuItem.MenuItems.Add(voiceMenuItem);
    }
    if (this.selectVoiceMenuItem.MenuItems.Count > 0)
    {
        this.selectVoiceMenuItem.MenuItems[0].Checked = true;
        selectedVoice = this.selectVoiceMenuItem.MenuItems[0].Text;
    }
}

private void voiceMenuItem_Click(object sender, EventArgs e)
{
    SelectVoice(sender);
}

private void SelectVoice(object sender)
{
    MenuItem mi = sender as MenuItem;
    if (mi != null)
    {
        // toggle the checked value
        mi.Checked = !mi.Checked;

        if (mi.Checked)
        {
            // set the selectedVoice variable
            selectedVoice = mi.Text;
            // clear all other checked items
            foreach (MenuItem voiceMi in this.selectVoiceMenuItem.MenuItems)
            {
                if (!voiceMi.Equals(mi))
                {
                    voiceMi.Checked = false;
                }
            }
        }
        else
        {
            // if deselecting, make the first value checked,
            // so there is always a default value
            this.selectVoiceMenuItem.MenuItems[0].Checked = true;
        }
    }
}


We have not declared the selectedVoice class level variable yet (your Intellisense may have complained about it), so the next step is to do just that.  While we are at it, we will also declare a private instance of the System.Speech.Synthesis.SpeechSynthesizer class and initialize it, along with a call to the LoadSelectVoiceMenu() method from above, in your constructor:


#region Local Members

private SpeechSynthesizer synthesizer = null;
private string selectedVoice = string.Empty;

#endregion

public Main()
{
    InitializeComponent();
    synthesizer = new SpeechSynthesizer();
    LoadSelectVoiceMenu();
}


To allow the user to utilize the speech synthesizer, we will add two new menu items under the “Speech” menu, labeled “Read Selected Text” and “Read Document”.  In truth, there isn’t really much to using the Vista speech synthesizer.  All we do is pass a text string to our local SpeechSynthesizer object and let the operating system do the rest.  Hook up event handlers for the click events of these two menu items to the following methods and you will be up and running with a speech-enabled application:


#region Speech Synthesizer Commands

private void ReadSelectedText()
{
    TextDocument doc = ActiveMdiChild as TextDocument;
    if (doc != null)
    {
        RichTextBox textBox = doc.richTextBox1;
        if (textBox != null)
        {
            string speakText = textBox.SelectedText;
            ReadAloud(speakText);
        }
    }
}

private void ReadDocument()
{
    TextDocument doc = ActiveMdiChild as TextDocument;
    if (doc != null)
    {
        RichTextBox textBox = doc.richTextBox1;
        if (textBox != null)
        {
            string speakText = textBox.Text;
            ReadAloud(speakText);
        }
    }
}

private void ReadAloud(string speakText)
{
    try
    {
        SetVoice();
        synthesizer.Speak(speakText);
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}

private void SetVoice()
{
    try
    {
        synthesizer.SelectVoice(selectedVoice);
    }
    catch (Exception)
    {
        MessageBox.Show("\"" + selectedVoice + "\" is not available.");
    }
}

#endregion
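
One caveat worth noting, though the tutorial does not address it: Speak() blocks the UI thread until the entire text has been read, which is quite noticeable when reading a long document.  The synthesizer also provides a SpeakAsync() method, and a non-blocking variant of ReadAloud() might look like the sketch below.  Keep in mind that the ReadAloud() modification in Part II, which cancels and restarts the recognizer around the call, assumes the blocking version.

private void ReadAloudAsync(string speakText)
{
    SetVoice();
    // queue the prompt and return immediately instead of blocking the UI thread
    synthesizer.SpeakAsync(speakText);
}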

Two Kinds of Jargon


I had taken it for granted that “Web 2.0” is simply a lot of hype until I came across this defense of the term by Kathy Sierra by way of Steve Marx’s blog.  Kathy Sierra argues that “Web 2.0” is not simply a buzzword because it is, in fact, jargon.  She goes on to explore the notion of jargon and to explain why jargon is actually a good thing, and shamefully maligned.  This, I thought, certainly goes against the conventional wisdom. 


In my various careers, I have become intimately familiar with two kinds of jargon: academic jargon and software jargon.  I will discuss academic jargon first, and see if it sheds any light on software jargon.  The English word jargon is derived from the Old French word meaning “a chattering,” for instance of birds.  It is generally used somewhat pejoratively, as in this sentence from an article by George Packer in the most recent New Yorker concerning the efforts of anthropologists to make the “war on terror” more subtle as well as more culturally savvy:



One night earlier this year, Kilcullen sat down with a bottle of single-malt Scotch and wrote out a series of tips for company commanders about to be deployed to Iraq and Afghanistan.  He is an energetic writer who avoids military and social-science jargon, and he addressed himself intimately to young captains who have had to become familiar with exotica such as “The Battle of Algiers,” the 1966 film documenting the insurgency against French colonists.


 


In this passage, jargon is understood as a possibly necessary mode of professional language that, while it facilitates communication within a professional community, makes the dissemination of ideas outside of that community of speakers difficult.


Even with this definition, however, one can see how the use of professional jargon is not a completely bad thing, but is in fact a trade-off.  While it makes speaking across professional communities difficult, as well as initiation into such a community (for instance, the initiation of young undergraduates into philosophical discourse), once one is initiated into the argot of a professional community, the special language actually facilitates communication by serving as a shorthand for much larger concepts and by increasing the precision of the terms used within the community, since non-technical language tends to be ambiguous in a way that technical jargon, ideally, is not.  Take, for instance, the following passage:



The question about that structure aims at the analysis of what constitutes existence. The context of such structures we call “existentiality“. Its analytic has the character of an understanding which is not existentiell, but rather existential. The task of an existential analytic of Dasein has been delineated in advance, as regards both its possibility and its necessity, in Dasein’s ontical constitution.


 


This passage is from the beginning of Martin Heidegger’s Being and Time, as translated by John Macquarrie and Edward Robinson.  To those unfamiliar with the jargon that Heidegger develops for his existential-phenomenology, it probably looks like balderdash.  One can see how potentially, with time and through reading the rest of this work, one might eventually come to understand Heidegger’s philosophical terms.  Jargon, qua jargon, is not necessarily bad, and much of the bad rap that jargon gets is often due to the resistance to comprehension and the sense of intellectual insecurity it engenders when one first encounters it.  Here is another example of jargon I pulled from a recent technical post on www.beyond3d.com called Origin of Quake3’s Fast InvSqrt():



The magic of the code, even if you can’t follow it, stands out as the i = 0x5f3759df – (i>>1); line. Simplified, Newton-Raphson is an approximation that starts off with a guess and refines it with iteration. Taking advantage of the nature of 32-bit x86 processors, i, an integer, is initially set to the value of the floating point number you want to take the inverse square of, using an integer cast. i is then set to 0x5f3759df, minus itself shifted one bit to the right. The right shift drops the least significant bit of i, essentially halving it.


 


I don’t understand what the author of this passage is saying, but I do know that he is enthusiastic about it and assume that, as with the Heidegger passage, I can come to understand the gist of the argument given a week and a good reference work.  I also believe that the author is trying to say what he is saying in the most precise and concise way he is able, and this is why he resorts to one kind of  jargon to explain something that was originally written in an even more complicated technical language: a beautiful computer algorithm.


However, there is another, less benign, definition of jargon that sees its primary function not in clarifying concepts but in obfuscating them.  According to Theodor Adorno, in his devastating and unrelenting attack on Heidegger in The Jargon of Authenticity, jargon is “a sublanguage as superior language.”  For Adorno, jargon, especially in Heidegger’s case, is an imposture and a con.  It is the chosen language of charlatans. Rudolf Carnap makes a similar, though less brutal, point in section 5 of his “Overcoming Metaphysics”, entitled “Metaphysical Pseudo-Sentences”, where he takes on Heidegger’s notorious sentence from “What Is Metaphysics?”, “Das Nichts selbst nichtet” (Nothingness itself nothings), for its meaninglessness.


We might be tempted to try to save jargon from itself, then, by distinguishing two kinds of jargon: good jargon and bad jargon.  The appeal to distinctions is at least as old as the use of jargon to clarify ideas, and goes back at least as far as Pausanias’s distinction between the heavenly and the common Aphrodite in Plato’s Symposium.  With Pausanias we can say that the higher and the baser jargon can be distinguished, as he distinguishes two kinds of love, by the intent of the person using the jargon.  When jargon is used in order to clarify ideas and make them precise, then we are dealing with proper jargon.  When jargon is used, on the contrary, to obfuscate, or to make the speaker seem smarter than he really is, then this is deficient or bad jargon.


There are various Virgin and the Whore problems with this distinction, however, not least of which is how to tell the two kinds of jargon apart.  It is in fact rather rare to find instances of bad jargon that everyone concedes are bad jargon, with the possible exception of hoaxes like the Sokal affair, in which the physicist Alan Sokal wrote a jargon-laden pseudo-paper about post-modernism and quantum mechanics and got it published in a cultural studies journal.  Normally, however, when certain instances of jargon are identified as “bad” jargon, we also tend to find defenders who insist that it is not, and who claim that, to the contrary, those calling it bad jargon simply do not understand it.  This is a difficulty not unlike the one a wit described when asked to define bad taste.  “Bad taste,” he said, “is the garden gnome standing in my neighbor’s front lawn.”  When asked to define good taste, the wit continued, “Good taste is that plastic pink flamingo standing in my lawn.”


There are more difficulties with trying to distinguish good jargon from bad jargon, such as cases where good jargon becomes bad over time, or even cases where bad jargon becomes good.  Cases of the latter include Schopenhauer’s reading of a popular and apparently largely incorrect account of Indian philosophy, which he then absorbed into his own very insightful and influential philosophical project.  Georges Bataille’s misreading of Hegel and Jacques Lacan’s misreading of Freud also bore impressive fruit.  Finally, there’s the (probably apocryphal) story of the student of Italian who approached T.S. Eliot and began asking him about his peculiar and sometimes incorrect use of Italian in his poetry, until Eliot finally broke off the conversation with the admission, “Okay, you caught me.”   Cases such as these undermine the common belief that it is intent, or origins, which make a given jargon good or bad.


The opposite can, of course, also happen.  Useful jargon may, over time, become bad and obfuscating.  We might then say that while the terms used in Phenomenology proper are difficult but informative, they were corrupted when Heidegger took them up in his Existential-Phenomenology, or we might say that Heidegger’s jargon is useful but later philosophers influenced by his philosophy such as Derrida and the post-structuralists corrupted it, or finally we might even say that Derrida got it right but his epigones in America were the ones who ultimately turned his philosophical insights into mere jargon.  This phenomenon is what I take Martin Fowler to be referring to in his short bliki defense of the terms Web 2.0 and Agile entitled Semantic Diffusion.  According to Fowler:



Semantic diffusion occurs when you have a word that is coined by a person or group, often with a pretty good definition, but then gets spread through the wider community in a way that weakens that definition. This weakening risks losing the definition entirely – and with it any usefulness to the term.


Thus Fowler takes up Kathy Sierra’s defense of Web 2.0 as jargon, recognizes some weaknesses in this explanation, and then fortifies the defense of the term with the further explanation that while the term may be problematic now, it was right in its origins, and pure in its intent.


Fowler here makes a remarkably Heideggerian observation.  Heidegger was somewhat obsessed with language and believed that language tends, over time, to hide and obfuscate meaning, when it should rather shed light on things.  Along this vein,  Being and Time begins with the claim that we today no longer understand the meaning of Being, and that this forgetting is so thorough that we are not even any longer aware of this absence of understanding, so that even the question “What Is Being?”, which should be the most important question for us, is for the most part ignored and overlooked.  To even begin understanding Being, then, we must first try to understand the meaning of the question of Being.  We must first come to the realization that there is even a problem there in the first place which needs to be resolved.  Heidegger’s chosen solution to this problem involves the claim that while language conceals meaning, it also, in its origins, is able to reveal it if we are able to come to understand language correctly.  He gives an example with the term aletheia, which in Greek means truth.  By getting to the origins of language and the experience of language, we can reveal aletheia. Aletheia, etymologically, means not-forgetting (thus the river Lethe is, in Greek mythology, the river of forgetting that the dead must cross before resting in Hades), and so the truth is implicitly an unconcealment that recovers the meanings implicit in language.  The authentic meaning of jargon, Fowler similarly claims, can be arrived at if we remove the accretions caused by “semantic diffusion” and get back to the original intent.


But is this true?  Do apologetics for terms such as “Web 2.0” and “Agile” insisting that they are “jargon” ultimately succeed?  Do such attempts reveal the original intent implicit in the coining of these terms or do they simply conceal the original meanings even further?


My personal opinion is that jargon, by its nature, never really reveals, but always in one way or another, by condensing thought and providing a shorthand for ideas, conceals.  It can of course be useful, but it can never be instructive; rather, it gives us a sense that we understand things we do not understand simply because we know how to use a given jargon.  At best, jargon can be used as an indicator that points to a complex of ideas shared by a given community.  At worst, it is used as shorthand for bad or incoherent ideas that never themselves get critical treatment because the jargon takes the place of ideas, and becomes mistaken for ideas.  This particularly seems to be the case with the defense of “Web 2.0” and “Agile” as “jargon”, as if people have a problem with the terms themselves rather than what they stand for.  “Jargon”, as a technical term, is not particularly useful.  It is to some extent already corrupt from the get-go.


One way around this might be to simply stop using the term “jargon”, whether bad or good, when discussing things like Web 2.0 and Agile.  While it is common in English to use Latin derived terms for technical language and Anglo-Saxon words for common discourse, in this case we might be obliged to make the reverse movement as we look for an adequate replacement term for “jargon”.


In 2005, the Princeton philosopher Harry Frankfurt published a popular pamphlet called On Bullshit that attempts to give a philosophical explanation of the term. On first blush, this title may seem somewhat prejudicial, but I think that, as with jargon, if we get away from pre-conceived notions as to whether the term is good or bad, it will be useful as a way to get a fresh look at the term we are currently trying to evaluate, “Web 2.0”.  It can also be used most effectively if we do the opposite of what we did with “jargon”; jargon was first taken to appropriately describe the term “Web 2.0”, and then an attempt was made to understand what jargon actually was.  In this case, I want to first try to understand what bullshit is, and then see if it applies to “Web 2.0”.


Frankfurt begins his analysis with a brief survey of the literature on bullshit, which includes Max Black’s study of “humbug” and Augustine of Hippo’s analysis of lying.  From these, he concludes that bullshit and lying are different things, and as a preliminary conclusion, that bullshit falls just short of lying.  Moreover, he points out that it is all pervasive in a way that lying could never be.



The realms of advertising and of public relations, and the nowadays closely related realm of politics, are replete with instances of bullshit so unmitigated that they can serve among the most indisputable and classic paradigms of the concept.


Not satisfied with this preliminary explanation, however, Frankfurt identifies further elements that characterize bullshit, since there are many things that can fall short of a lie and yet, perhaps, not rise to the level of bullshit.  He then identifies inauthenticity as the hallmark that distinguishes bullshit from lies, on the one hand, and simple errors of fact, on the other.



For the essence of bullshit is not that it is false but that it is phony. In order to appreciate this distinction, one must recognize that a fake or a phony need not be in any respect (apart from authenticity itself) inferior to the real thing. What is not genuine need not also be defective in some other way. It may be, after all, an exact copy. What is wrong with a counterfeit is not what it is like, but how it was made.


 


It is not what a bullshitter says, then, that marks him as a bullshitter, but rather his state-of-mind when he says it.  For Frankfurt, bullshit doesn’t really even belong on the same continuum with truth and falsehood, but is rather opposed to both.  Like the Third Host in Dante’s Inferno, it is indifference to the struggle that ultimately identifies and marks out the class of bullshitters.


Again, there are echoes of Heidegger here.  According to Heidegger, we are all characterized by this “thrownness”, which is the essence of our “Being-In-The-World”.  In our thrownness, we do not recognize ourselves as ourselves, but rather as das Man, or as the they-self,



which we distinguish from the authentic Self – that is, from the Self which has been taken hold of in its own way [eigens ergriffenen]. As they-self, the particular Dasein has been dispersed into the ‘they’, and must first find itself.” And further “If Dasein discovers the world in its own way [eigens] and brings it close, if it discloses to itself its own authentic Being, then this discovery of the ‘world’ and this disclosure of Dasein are always accomplished as a clearing-away of concealments and obscurities, as a breaking up of the disguises with which Dasein bars its own way.


The main difference between Frankfurt’s and Heidegger’s analysis of authenticity, in this case, is that Frankfurt seems to take authenticity as normative, whereas Heidegger considers authenticity as the zero-point state of man when we are first thrown into the world.


For now, however, the difference isn’t all that important.  What is important is Frankfurt’s conclusion about the sources of bullshit.  At the end of his essay, Frankfurt in effect writes that there are two kinds of bullshit, one of which is defensible and one of which is not.  The indefensible kind of bullshit is based on a subjectivist view of the world which denies truth and falsity altogether (and here I take Frankfurt to be making a not too veiled attack on the relativistic philosophical disciplines that are based on Heidegger’s work).  The defensible form of bullshit — I hesitate to call it good bullshit — is grounded in the character of our work lives, which force us to work with and represent information that is by its nature too complex for us to digest and promulgate accurately.  This, I take it, is the circumstance academic lecturers and others frequently find themselves in, as they stand behind the podium and are obliged to talk authoritatively about subjects they do not feel up to giving a thorough, much less an authentic, account of.



Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about. Thus the production of bullshit is stimulated whenever a person’s obligations or opportunities to speak about some topic are more excessive than his knowledge of the facts that are relevant to that topic.


 


This class of speech is the result of our inability to apply Wittgenstein’s dictum, “Whereof one cannot speak, thereof one must be silent.”  There are times when we are not in a position to remain silent, and so are obligated to bullshit.  Bullshit, in these cases, is a way of making the best of our situation.


Per the original arrangement, it is now time to put  “bullshit” to the test and see if either cynical bullshit or benign bullshit can be ascribed to the term “Web 2.0”.  For better or worse, I am going to use Jeffrey Zeldman’s blog on Web 2.0 (titled, confusingly enough, “Web 3.0”) as the main text for this analysis.  Zeldman is generally sympathetic to the ideas and phenomena the “Web 2.0” is meant to encompass, but he also points out the aspects of the term that grate.  The most salient is the degree to which it smells like a sales pitch.


 



It soon appeared that “Web 2.0” was not only bigger than the Apocalypse but also more profitable. Profitable, that is, for investors like the speaker. Yet the new gold rush must not be confused with the dot-com bubble of the 1990s:

“Web 1.0 was not disruptive. You understand? Web 2.0 is totally disruptive. You know what XML is? You’ve heard about well-formedness? Okay. So anyway—”

And on it ran, like a dentist’s drill in the Gulag.


Zeldman associates Web 2.0 with marketing, which Frankfurt in turn associates with bullshit.  Frankfurt even goes so far as identifying sales and its related disciplines as “the most indisputable and classic paradigms of the concept.”  Moreover, the defense that Web 2.0 describes a real phenomenon, as Fowler insists and Zeldman grants, doesn’t make it not bullshit, since Frankfurt concedes that bullshit can just as well be true as false.  What is important is the authenticity or inauthenticity of the original claim, and the sense that something is a sales pitch is already an indication that something inauthentic is going on.  So “Web 2.0” certainly meets Frankfurt’s criteria for bullshit.


The more important question is what kind of bullshit is it?  Is it benign, or cynical?  According to Frankfurt’s distinction, again, the difference is whether the bullshit is grounded in the nature of one’s work or rather in some sort of defect of epistemic character.


Here the answer is not so simple, I think, since software has two strands, one going to the hobbyist roots of programming, and the other to the monetizing potential of information technology.  Moreover, both strands tend to struggle within the heart of the software engineering industry, with the open source movement on the one hand (often cited as one key aspect of Web 2.0) emblematic of the purist strain, and the advertising prospects on the other (with Google in the vanguard, often cited as a key exemplar of the Web 2.0 phenomena) symbolic of the notion that a good idea isn’t enough — one also has to be able to sell one’s ideas.


Software programming, in its origins, is a discipline practiced by nerds.  In other words, it is esoteric knowledge, extremely powerful, practiced by a few and generally misunderstood by the majority of people.  As long as there is no desire to explain the discipline to outsiders, there is no problem with treating software programming as a hobby.  At some point, however, every nerd wants to be appreciated by people who are not his peers, and to accomplish this, he is forced to explain himself and ultimately to sell himself.  The turning point for this event is well documented, and occurred on February 3rd, 1976, when Bill Gates wrote an open letter to the hobbyist community stating that software had economic value and that it was time for people to start paying for it.


This was a moment of triumph for nerds everywhere, though this was not at first understood, and still generates resentment to this day, because it irrevocably transformed the nature of software programming.  Once software was recognized as something of economic value, it also became clear that software concepts now had to be marketed.  The people who buy software are typically unable to distinguish good software from bad software, and so it becomes the responsibility of those who can to try to explain why their software is better in terms that are not, essentially, technical.  Instead, a hybrid jargon-ridden set of terms had to be created in order to bridge the gap between software and the business appetite for software.  Software engineers, in turn, learned to see selling themselves, to consumers, to their managers, and finally to their peers, as part of the job of software engineering — though at the same time, this forced obligation to sell themselves continues to be regarded with suspicion and resentment.  The hope held out to such people is that through software they will eventually be able to make enough money, as Bill Gates did, as Steve Jobs did, to finally give up the necessity of selling themselves and return to a pure hobbyist state-of-mind once again.  They in effect want to be both the virgin and the whore.  This is, of course, a pipe dream.


Consequently, trying to determine whether Web 2.0 is benign bullshit or cynical bullshit is difficult, since sales both is and is not an authentic aspect of the work of software engineering.  What seems to be the case is that Web 2.0 is a hybrid of benign and cynical bullshit.  This schizophrenic character is captured in the notion of Web 2.0 itself, which is at the same time a sales pitch as well as an umbrella term for a set of contemporary cultural phenomena.


Now that we know what bullshit is, and we know that Web 2.0 is bullshit, it is time to evaluate what Web 2.0 is.  In Tim O’Reilly’s original article that introduced the notion of Web 2.0, called appropriately What Is Web 2.0, O’Reilly suggests several key points that he sees as typical of the sorts of things going on over the past year at companies such as Google, Flickr, YouTube and Wikipedia.  These observations include such slogans as “Harnessing Collective Intelligence”, “Data is the Next Intel Inside” and “End of the Software Release Cycle”.  But it is worth asking whether these really tell us what Web 2.0 is, or whether they are simply ad hoc attempts to give examples of what O’Reilly says is a common phenomenon.  When one asks for the meaning of a term such as Web 2.0, what one really wants is the original purpose behind coining it: what is implicit in the term Web 2.0, as Heidegger would put it, yet at the same time concealed by the language typically used to explain it.


As Zeldman points out, one key (and I think the main key) to understanding Web 2.0 is that it isn’t Web 1.0.  The rise of the web was marked by a rise in bluff and marketing that created what we now look back on as the Internet Bubble.  The Internet Bubble, in turn, was a lot of marketing hype and the most remarkable stream of jargon used to build up a technology that, in the end, could not sustain the amount of expectation with which it was overloaded.  By 2005, this bad reputation that had accrued to the Web from the earlier mistakes had generated a cynicism about the new things coming along that really were worthwhile — such as the blogging phenomenon, Ajax, Wikipedia, Google, Flickr and YouTube.  In order to overcome the cynicism, O’Reilly coined a term that, successfully, distracted people from the earlier debacle and helped to make the Internet a place to invest money once again.  Tim O’Reilly, even if his term is bullshit, as we have already demonstrated above, ultimately has done us all a service by clearing out all the previous bullshit.  In a very Heideggerian manner, he made a clearing [Lichtung] for the truth to appear.  He created, much as Heidegger attempted to do for Being, a conceptual space for new ideas about the Internet to make themselves apparent.


Or perhaps my analogy is overwrought. In any case, the question still remains as to what one does with terms that have outlived their usefulness.  In his introduction to Existentialism and Human Emotions, Jean-Paul Sartre describes the status the term “existentialism” has achieved by 1957.



Someone recently told me of a lady who, when she let slip a vulgar word in a moment of irritation, excused herself by saying, “I guess I’m becoming an existentialist.”



Most people who use the word would be rather embarrassed if they had to explain it, since, now that the word is all the rage, even the work of a musician or painter is being called existentialist.  A gossip columnist in Clartes signs himself The Existentialist, so that by this time the word has been so stretched and has taken on so broad a meaning, that it no longer means anything at all.


 


Sartre spent most of the rest of his philosophical career refining and defending the term “existentialism,” until finally it was superseded by post-structuralism in France. The term enjoyed a second life in America, until post-structuralism finally made the Atlantic crossing and superseded it there, also, only in turn to be first treated with skepticism, then with hostility, and finally as mere jargon.  It is only over time that an intellectual clearing can be made to re-examine these concepts.  In the meantime, taking a cue from Wittgenstein, we are obliged to remain silent over them.

Long Dark Night of the Compiler


In his book on the development of the C++ language, The Design and Evolution of C++, Bjarne Stroustrup says that in creating C++ he was influenced by the writings of Søren Kierkegaard.  He goes into some detail about it in this recent interview:


 



A lot of thinking about software development is focused on the group, the team, the company. This is often done to the point where the individual is completely submerged in corporate “culture” with no outlet for unique talents and skills. Corporate practices can be directly hostile to individuals with exceptional skills and initiative in technical matters. I consider such management of technical people cruel and wasteful. Kierkegaard was a strong proponent for the individual against “the crowd” and has some serious discussion of the importance of aesthetics and ethical behavior. I couldn’t point to a specific language feature and say, “See, there’s the influence of the nineteenth-century philosopher,” but he is one of the roots of my reluctance to eliminate “expert level” features, to abolish “misuses,” and to limit features to support only uses that I know to be useful. I’m not particularly fond of Kierkegaard’s religious philosophy, though.


 


Stroustrup is likely referring to philosophical observations such as this:


 



Truth always rests with the minority, and the minority is always stronger than the majority, because the minority is generally formed by those who really have an opinion, while the strength of a majority is illusory, formed by the gangs who have no opinion–and who, therefore, in the next instant (when it is evident that the minority is the stronger) assume its opinion . . . while Truth again reverts to a new minority.

— Søren Kierkegaard

 


Coincidentally, Kierkegaard and Pascal are often cited as the fathers of modern existentialism, and where Kierkegaard appears to have influenced the development of C++, Pascal’s name lives on in the Pascal programming language as well as the Pascal case, used as a stylistic device in most modern languages.  The Pascal language, in turn, was contemporary with the C language, which was the syntactic precursor to C++.


So just as the Catholic Church holds that guardian angels guide and watch over individuals, cities and nations, might it not also be the case that specific philosophers watch over different programming languages?  Perhaps a pragmatic philosopher like C. S. Peirce would watch over Visual Basic.  A philosopher fond of architectonics, like Kant, would watch over Eiffel.  John Dewey could watch over Java, while Hegel, naturally, would watch over Ruby.

Converting to ASP.NET Ajax Beta 2 (A Guide for the Perplexed)


There are a few good guides already on the internet that provide an overview of what is required to convert your Atlas CTP projects to Ajax Extensions.  This guide will probably not add anything new, but will hopefully consolidate some of the advice already provided, as well as offer a few pointers alluded to by others but not explained.  In other words, this is the guide I wish I had before I began my own conversion project.


1. The first step is to download and install the Ajax Extensions Beta 2 and the Ajax Futures (“value-added”) November CTP.  One problem I have heard of occurred when an associate somehow failed to remove his Beta 1 DLLs and had various mysterious errors due to using the wrong version.


2. Create a new Ajax Extensions project. This should provide you with the correct library references and the correct web configuration file.  Here are the minimum configuration settings needed for an ASP.Net Ajax website to work:



<configuration>
  <system.web>
    <pages>
      <controls>
        <add tagPrefix="asp" namespace="Microsoft.Web.UI" assembly="Microsoft.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
        <add tagPrefix="asp" namespace="Microsoft.Web.UI.Controls" assembly="Microsoft.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
        <add tagPrefix="asp" namespace="Microsoft.Web.Preview.UI" assembly="Microsoft.Web.Preview"/>
      </controls>
    </pages>

    <compilation debug="true">
      <assemblies>
        <add assembly="Microsoft.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
      </assemblies>
    </compilation>
  </system.web>
</configuration>

 


You also need to make sure that you have a reference to the Microsoft.Web.Extensions dll as well as to the Microsoft.Web.Preview dll, if you intend to use features such as drag and drop or glitz. Both of these dlls should be registered in the GAC, although neither was for me.  To get them to show up, I had to add a new registry key, HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\AssemblyFolders\ASP.NET AJAX 1.0.61025, with a default value indicating the location of the ASP.NET AJAX dlls: C:\Program Files\Microsoft ASP.NET\ASP.NET 2.0 AJAX Extensions\v1.0.61025


On a side note, there seems to currently be some ambiguity over whether the Microsoft.Web.Extensions dll can or cannot simply be placed in your bin folder rather than placed in the GAC.  It seems to work, even though the official documentation says it should not.


 


3. Wherever you used to use the shortcut “$” as shorthand for “document.getElementById”, you will now need to use “$get”.  I usually need to go through my Atlas code three or four times before I catch every instance of this and make the appropriate replacement.


 


4. Sys.Application.findControl(“myControl”) is now simplified to $find(“myControl”).


 


5. Wherever you used to use this.control.element, you now will use this.get_element().


 


6. The “atlas:” namespace has been replaced with the “asp:” namespace, so go through your code and make the appropriate replacements.  For example,



<atlas:ScriptManager ID=”ScriptManager1″ runat=”server”/>


is now



<asp:ScriptManager ID=”ScriptManager1″ runat=”server”/>


 


7. Script References have changed.  The ScriptName attribute is now just the Name attribute.  The files that used to make up the optional ajax scripts are now broken out differently, so if you need to use the dragdrop script file or the glitz script file, you will now also need to include the PreviewScript javascript file.  This:



 


<atlas:ScriptManager ID=”ScriptManager1″ runat=”server”>
     <Scripts>
          <atlas:ScriptReference ScriptName=”AtlasUIDragDrop” />
          <atlas:ScriptReference Path=”scriptLibrary/DropZoneBehavior.js” />
     </Scripts>
</atlas:ScriptManager>


is now this:



<asp:ScriptManager ID=”ScriptManager1″ runat=”server”>
     <Scripts>
          <asp:ScriptReference Assembly=”Microsoft.Web.Preview” Name=”Microsoft.Web.Resources.ScriptLibrary.PreviewScript.js” />
          <asp:ScriptReference Assembly=”Microsoft.Web.Preview” Name=”Microsoft.Web.Resources.ScriptLibrary.PreviewDragDrop.js” />
          <asp:ScriptReference Path=”scriptLibrary/DropZoneBehavior.js” />
     </Scripts>
</asp:ScriptManager>


 


8. Namespaces have changed, and you may need to hunt around to find your classes.  For instance, Sys.UI.IDragSource is now Sys.Preview.UI.IDragSource, and for the most part you can probably get away with replacing all your Sys.UI namespaces with Sys.Preview.UI.  On the other hand, Sys.UI.Behavior has stayed where it is, so this will not always work.  The setLocation method has also shifted namespaces: it used to be found in Sys.UI, and it is now in Sys.UI.DomElement.
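To make the move concrete, here is a before-and-after pair; the element variable and coordinates are placeholders, not code from this project:

// Atlas CTP: Sys.UI.IDragSource, and setLocation hangs directly off Sys.UI
Sys.UI.setLocation(myElement, 100, 100);

// ASP.Net Ajax beta 2: Sys.Preview.UI.IDragSource, and setLocation moves to Sys.UI.DomElement
Sys.UI.DomElement.setLocation(myElement, 100, 100);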


 


9. Xml Scripting change: Xml scripting, which allows you to use javascript in a declarative manner, is now part of the Value Added CTP.  As I understand it, the Value Added CTP, also known as Ajax Futures, includes lots of features originally included in the Atlas CTP but deemed to be of lower priority than the core Ajax Extensions features.  In order to meet a tough deadline, these have been set aside for now.  The Ajax Toolkit, in turn, is heavily dependent on these value-added features, since the toolkit components tend to leverage the common javascript libraries such as Glitz much more than the specifically Ajax features provided with the core release.  The syntax for adding custom behaviors using Xml Scripting has changed, while the syntax for built-in behaviors is the same.  An Xml Scripting region used to look like this:



 


<script type="text/xml-script">
   <page xmlns:script="http://schemas.microsoft.com/xml-script/2005">
      <components>
         <control id="dropZone">
           <behaviors>
               <DropZoneBehavior/>
           </behaviors>
         </control>
         <control id="draggableDiv">
           <behaviors>
             <floatingBehavior handle="handleBar" />
           </behaviors>
         </control>
      </components>
   </page>
</script>


Now it looks like this:


<script type=”text/xml-script”>
   <page xmlns:script=”http://schemas.microsoft.com/xml-script/2005″
xmlns:fooNamespace=”Custom.UI”>
      <components>
        <control id=”dropZone”>
          <behaviors>
            <fooNamespace:DropZoneBehavior/>
          </behaviors>
        </control>
      <control id=”draggableDiv”>
         <behaviors>
              <floatingBehavior handle=”handleBar” />
         </behaviors>
      </control>
    </components>
  </page>
</script>


Note: The AspNet AJAX CTP to Beta Whitepaper has a slightly different syntax, but this appears to be a typo, and the one I have provided above is the correct grammar.


10.  Adding behaviors using javascript has changed.  The biggest change is that you no longer have to explicitly convert a DOM object to an ASP.Net Ajax object, as this is now done beneath the covers.  The get_behaviors().add(…) method has also been retired.  For my particular conversion, this code:



function addFloatingBehavior(ctrl, ctrlHandle){
     var floatingBehavior = new Sys.UI.FloatingBehavior();
     floatingBehavior.set_handle(ctrlHandle);
     var dragItem = new Sys.UI.Control(ctrl);
     dragItem.get_behaviors().add(floatingBehavior);
     floatingBehavior.initialize();
     }



got shortened to this:



function addFloatingBehavior(ctrl, ctrlHandle){
     var floatingBehavior = new Sys.Preview.UI.FloatingBehavior(ctrl);
     floatingBehavior.set_handle(ctrlHandle);
     floatingBehavior.initialize();
     }


This can in turn be shortened even further with the $create super function: 



function addFloatingBehavior(ctrl, ctrlHandle){


   $create(Sys.Preview.UI.FloatingBehavior, {‘handle’: ctrlHandle}, null, null, ctrl);


}


 


11.  Closures and Prototypes:


You ought to convert javascript classes written as closures to classes written as prototypes.  Basically, instead of having private members, properties, and methods all defined in the same place (which is how closures work), they are now separated out into an initial definition that includes the members, followed by a definition of the prototype that includes the various methods and properties, which are in turn rewritten using a slightly different grammar.  Here is a reasonably good overview of what the prototype object is used for.  Bertrand LeRoy’s two posts on closures and prototypes are also a good resource.


12. Follow these steps to mechanically rewrite a closure as a prototype.  First, change all your private variable declarations into public member declarations.  For instance, the following declaration:



var i = 0;


should now be:



this.i = 0;


 


Consolidate all of your members at the top of the constructor function and then place a closing brace after them to close your class definition.


13.  Next, write the first line of the prototype definition.  For instance, in my DropZoneBehavior class, I replaced this:



 Custom.UI.DropZoneBehavior = function() {
     Custom.UI.DropZoneBehavior.initializeBase(this);
     initialize: function(){
          Custom.UI.DropZoneBehavior.callBaseMethod(this, ‘initialize’);
          // Register ourselves as a drop target.
          Sys.Preview.UI.DragDropManager.registerDropTarget(this);
          }


}


with this:



Custom.UI.DropZoneBehavior = function() {
       Custom.UI.DropZoneBehavior.initializeBase(this);
}



Custom.UI.DropZoneBehavior.prototype = {
     initialize: function(){
             Custom.UI.DropZoneBehavior.callBaseMethod(this, ‘initialize’);
            // Register ourselves as a drop target.
            Sys.Preview.UI.DragDropManager.registerDropTarget(this); 
            }


}


simply by adding these two lines:



}



Custom.UI.DropZoneBehavior.prototype = {


 


14. Throughout the rest of the prototype definition, refer to your variables as members by adding “this.” in front of all of them.
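For instance, picking up the i member from step 12, a line inside one of your prototype methods changes like this:

// inside a closure method (old):
i = i + 1;

// inside a prototype method (new):
this.i = this.i + 1;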


 


15. Interfaces have changed.  The behavior class constructor, which previously did not take a parameter, now does:



Custom.UI.FloatingBehavior = function(value) {
    Custom.UI.FloatingBehavior.initializeBase(this,[value]);

}

 


16. Properties and methods are written differently in the prototype definition than they were in closures.  Wherever you have a method or property, you should rewrite it by getting rid of the preceding “this.” and replacing the equals sign in your method definition with a colon.  Finally, a comma must be inserted after each method or property definition except the last.  For example, this:



this.initialize = function() {
    Custom.UI.FloatingBehavior.callBaseMethod(this, ‘initialize’);

}


becomes this:


 



initialize: function() {
     Custom.UI.FloatingBehavior.callBaseMethod(this, ‘initialize’);

},


 


17. Type descriptors are gone.  This means you no longer need the getDescriptor method or the Sys.TypeDescriptor.addType call to register your Type Descriptor.  There is an alternate grammar for writing type descriptors using JSON, but my code worked fine without it.  I think it is meant for writing extenders.
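In practice this just means deleting code along the following lines from your class.  The getDescriptor shape below mirrors the one that appears in the DropZoneBehavior listing later in this post, and the arguments to addType are omitted since the whole call goes away:

// delete the descriptor method from your prototype...
getDescriptor: function() {
     var td = Custom.UI.DropZoneBehavior.callBaseMethod(this, 'getDescriptor');
     return td;
},

// ...and delete the old registration call at the bottom of the script:
// Sys.TypeDescriptor.addType( ... );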


 


18. Hooking up event handlers to DOM events has been simplified.  You used to need to define a delegate for the DOM event and then use the attachEvent and detachEvent methods to link the delegate with your handler function.  In beta 2, all of this is encapsulated, and you only need two super functions, $addHandler and $removeHandler.  You should place your $addHandler call in your initialize method and your $removeHandler call in your dispose method.  The syntax for $addHandler will typically look like this:


$addHandler(this.get_element(), ‘mousedown’, YourMouseDownHandlerFunction)

$removeHandler takes the same parameters.  One thing worth noting is that, whereas the reference to the DOM event used to use the IE-specific event name, in this case ‘onmousedown’, the designers of ASP.Net Ajax have now opted for the naming convention adopted by Firefox and Safari, in this case ‘mousedown’.
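Putting this together, one way to wire it up inside a behavior looks roughly like the sketch below.  The handler name and the _mouseDownHandler member are made up, and the bound delegate is kept around only so that the same function can later be handed to $removeHandler:

initialize: function() {
     Custom.UI.FloatingBehavior.callBaseMethod(this, 'initialize');
     this._mouseDownHandler = Function.createDelegate(this, this.onMouseDown);
     // note the Firefox/Safari-style event name: 'mousedown', not 'onmousedown'
     $addHandler(this.get_element(), 'mousedown', this._mouseDownHandler);
},

dispose: function() {
     if (this._mouseDownHandler) {
          $removeHandler(this.get_element(), 'mousedown', this._mouseDownHandler);
     }
     Custom.UI.FloatingBehavior.callBaseMethod(this, 'dispose');
},

onMouseDown: function(evt) {
     // respond to the mouse press here
}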


 


19. The last touch: add the following lines as the last bit of code in your script file:



if(typeof(Sys) !== “undefined”)
Sys.Application.notifyScriptLoaded();


You basically just need to do this.  It may even be one of the rare instances in programming where you don’t even need to know why you are doing it since, as far as I know, you will never encounter a situation where you won’t put it in your script.  My vague understanding of the reason, though, is that the ASP.Net Ajax page lifecycle needs to know when scripts are loaded; both IE and Firefox throw events when a page has completed loading.  Safari, however, does not.  notifyScriptLoaded() provides a common way to let all browsers know when scripts have been loaded and it is safe to work with the included classes and functions.


 


 


Bibliography (of sorts):


Here are the good guides I referred to at the top of this post: Bertrand LeRoy’s post on javascript prototypes, Eilon Lipton’s blog, the comments on Scott Guthrie’s post, Sean Burke’s migration guide, and Miljan Braticevic’s experience with upgrading the Component Art tools.  The most comprehensive guide to using Ajax Extensions beta 2 is actually the upgrade guide provided by the Microsoft Ajax Team: the AspNet AJAX CTP to Beta Whitepaper.  I used the official online documentation, http://ajax.asp.net/docs/Default.aspx, mainly to figure out which namespaces to use and where the various functions I needed had been moved to.  Finally, using the search functionality on the ASP.Net Ajax forums helped me get over many minor difficulties.

V. ASP.NET Ajax Imperative Dropzones


 


To create dropzones using JavaScript instead of declarative script, just add the following JavaScript function to initialize your dropzone element with the custom dropzone behavior:


function addDropZoneBehavior(ctrl){

$create(Custom.UI.DropZoneBehavior, {}, null, null, ctrl);
}


To finish hooking everything up, call this addDropZoneBehavior function from the ASP.NET Ajax pageLoad() method, as you did in earlier examples for the addFloatingBehavior function.  This will attach the proper behaviors to their respective html elements and replicate the drag and dropzone functionality you created above using declarative markup.  If you want to make this work dynamically, just add the createDraggableDiv() function you already wrote for the previous dynamic example.  As a point of reference, here is the complete code for creating programmatic dropzones:



<%@ Page Language=”C#” %>
<!DOCTYPE html PUBLIC “-//W3C//DTD XHTML 1.0 Transitional//EN” “http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd”>
<html xmlns=”http://www.w3.org/1999/xhtml” >
<head id=”Head1″ runat=”server”>
<title>Imperative Drop Targets</title>
<script type=”text/javascript”>
    function addFloatingBehavior(ctrl, ctrlHandle){
        $create(Sys.Preview.UI.FloatingBehavior, {‘handle’: ctrlHandle}, null, null, ctrl);
    }
    function addDropZoneBehavior(ctrl){
        $create(Custom.UI.DropZoneBehavior, {}, null, null, ctrl);
    }
    function pageLoad(){
        addDropZoneBehavior($get(‘dropZone’));
        addFloatingBehavior($get(‘draggableDiv’),$get(‘handleBar’));
    }
</script>
</head>
<body>
<form id=”form1″ runat=”server”>
<asp:ScriptManager ID=”ScriptManager1″ runat=”server”>
    <Scripts>
            <asp:ScriptReference Name=”Microsoft.Web.Resources.ScriptLibrary.PreviewScript” />
        <asp:ScriptReference Name=”Microsoft.Web.Resources.ScriptLibrary.PreviewDragDrop” />
        <asp:ScriptReference Path=”scriptLibrary/DropZoneBehavior.js” />
    </Scripts>
</asp:ScriptManager>
<h2>Imperative Drop Targets with javascript</h2>
<div style=”background-color:Red;height:200px;width:200px;”>
    <div id=”draggableDiv” style=”height:100px;width:100px;background-color:Blue;”>
        <div id=”handleBar” style=”height:20px;width:auto;background-color:Green;”>
        </div>
    </div>
</div>
<div id=”dropZone” style=”background-color:cornflowerblue;height:200px;width:200px;”>Drop Zone</div>
</form>
</body>
</html>

 

Conclusion


Besides the dropzone behavior, you may want to also write your own floating behavior. For instance, by default, elements decorated with the floating behavior simply stay where you drop them. You may want to extend this so that your floating div will snap back to its original location when you drop it outside of a drop zone. Additionally, you may want to change the way the dragged element looks while you are dragging it, either by making it transparent, changing its color, or replacing the drag image altogether. All this can be accomplished by creating a behavior that implements the IDragSource interface in the same way you created a custom class that implements the IDropTarget interface.
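As a bare starting point, and nothing more, such a behavior would follow the same skeleton as the dropzone behavior created in this tutorial, stubbing out the IDragSource members instead of the IDropTarget members.  The class name below is made up, the member bodies are only comments, parameter lists are omitted, and hooking the behavior into the DragDropManager is left out entirely:

Type.registerNamespace('Custom.UI');
Custom.UI.SnapBackFloatingBehavior = function(value) {
     Custom.UI.SnapBackFloatingBehavior.initializeBase(this, [value]);
}

Custom.UI.SnapBackFloatingBehavior.prototype = {
     initialize: function() {
          Custom.UI.SnapBackFloatingBehavior.callBaseMethod(this, 'initialize');
          },
     // IDragSource members
     get_dragDataType: function() { /* return the data type your drop targets expect */ },
     getDragData: function() { /* return the element or data being dragged */ },
     get_dragMode: function() { /* return the drag mode */ },
     onDragStart: function() { /* remember the element's starting location */ },
     onDrag: function() { /* change the opacity or swap the drag image here */ },
     onDragEnd: function() { /* snap back to the starting location if dropped outside a drop zone */ }
}
Custom.UI.SnapBackFloatingBehavior.registerClass('Custom.UI.SnapBackFloatingBehavior', Sys.UI.Behavior, Sys.Preview.UI.IDragSource, Sys.IDisposable);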


This tutorial is for the most part a straight translation of the original Atlas tutorial that I wrote against the April CTP.  Even though many of the concepts behind Atlas are retained in Ajax Extensions, some have changed by a turning of the screw, so that what was once fitting and accurate in the original tutorial is no longer quite so.  For instance, whereas in the original Atlas tutorial I could talk about Xml Scripting and the rest of the ASP.NET Ajax functionality as one technology, they are now two separate technologies with different levels of support and interest from Microsoft.  There are more subtle differences that, I think, make the current version of the tutorial somewhat dated, as if I am saying everything with a slight accent; in other words, while I stand by the accuracy of this tutorial, I think it has lost some of its original elegance in the translation.  I believe the tutorial will still be useful for those trying to get started with Microsoft’s Ajax implementation, though its chief utility, at this point, will probably be for people who were used to the Atlas way of doing things and need a point of reference to see how the semantics of the technology have changed.  I hope the samples will help you over some of your growing pains, as writing them has helped me with mine.

IV. ASP.NET Ajax Declarative Dropzones


 



Being able to drag html elements around a page and have them stay where you leave them is visually interesting. To make this behavior truly useful, though, an event should be thrown when the drop occurs.  Furthermore, the event that is thrown should depend on where the drop occurs.  In other words, there needs to be a behavior that can be added to a given html element to turn it into a “dropzone” or “drop target”, in the same way that the floating behavior can be added to an html div tag to turn it into a drag and drop element.

In the following examples, I will show how Atlas supports the concept of dropzones.  In its current state, Atlas does not support an out-of-the-box behavior for creating dropzone elements in quite the same way it does for floating elements.  It does, however, implement behaviors for a dragdroplist element and a draggablelistitem element which, when used together, allow you to create lists that can be reordered by dragging and dropping.  If you would like to explore this functionality some more, there are several good examples of using the dragDropList behavior on the web, for instance, Introduction to Drag And Drop with Atlas.

The main disadvantage of the dragDropList behavior is that it only works with items that have been decorated with the draggableListItem behavior, so the functionality it puts at your disposal is fairly specific. To get the sort of open-ended dropzone functionality I described above, one that will also work with the predefined floating behavior, you will need to write your own dropzone behavior class in JavaScript. Fortunately, this is not all that hard.


Atlas adds several OOP extensions to JavaScript in order to make it more powerful, extensions such as namespaces, abstract classes, and interfaces. You will take advantage of these in coding up your own dropzone behavior. If you peer behind the curtain and look at the source code in the PreviewDragDrop.js file (contained in the directory C:\Program Files\Microsoft ASP.NET\ASP.NET 2.0 AJAX Extensions\v1.0.61025\ScriptLibrary\Debug), you will find several interfaces defined there, including Sys.Preview.UI.IDragSource and Sys.Preview.UI.IDropTarget. In fact, both the FloatingBehavior class and the DraggableListItem class implement the IDragSource interface, while IDropTarget is implemented by the DragDropList class. The code for these two interfaces looks like this:



Sys.Preview.UI.IDragSource = function Sys$Preview$UI$IDragSource() {
}


Sys.Preview.UI.IDragSource.prototype = {
      get_dragDataType: Sys$Preview$UI$IDragSource$get_dragDataType,
      getDragData: Sys$Preview$UI$IDragSource$getDragData,
      get_dragMode: Sys$Preview$UI$IDragSource$get_dragMode,
      onDragStart: Sys$Preview$UI$IDragSource$onDragStart,
      onDrag: Sys$Preview$UI$IDragSource$onDrag,
      onDragEnd: Sys$Preview$UI$IDragSource$onDragEnd
}
Sys.Preview.UI.IDragSource.registerInterface(‘Sys.Preview.UI.IDragSource’);

Sys.Preview.UI.IDropTarget = function Sys$Preview$UI$IDropTarget() {
}


Sys.Preview.UI.IDropTarget.prototype = {
     get_dropTargetElement: Sys$Preview$UI$IDropTarget$get_dropTargetElement,
     canDrop: Sys$Preview$UI$IDropTarget$canDrop,
     drop: Sys$Preview$UI$IDropTarget$drop,
     onDragEnterTarget: Sys$Preview$UI$IDropTarget$onDragEnterTarget,
     onDragLeaveTarget: Sys$Preview$UI$IDropTarget$onDragLeaveTarget,
     onDragInTarget: Sys$Preview$UI$IDropTarget$onDragInTarget
}
Sys.Preview.UI.IDropTarget.registerInterface(‘Sys.Preview.UI.IDropTarget’);


Why do you need to implement these interfaces instead of simply writing out brand new classes to support drag, drop, and dropzones? The secret is that, behind the scenes, a third class, called the DragDropManager, is actually coordinating the interactions between the draggable elements and the dropzone elements, and it only knows how to work with classes that implement the IDragSource or the IDropTarget. The DragDropManager class registers which dropzones are legitimate targets for each draggable element, handles the MouseOver events to determine when a dropzone has a draggable element over it, and a hundred other things you do not want to do yourself. In fact, it does it so well that the dropzone behavior you are about to write is pretty minimal. First, create a new JavaScript file called DropZoneBehavior.js. I placed my JavaScript file under a subdirectory called scriptLibrary, but this is not necessary in order to make the dropzone behavior work. Next, copy the following code into your file:



Type.registerNamespace(‘Custom.UI’);
Custom.UI.DropZoneBehavior = function(value) {
 Custom.UI.DropZoneBehavior.initializeBase(this, [value]);


}


Custom.UI.DropZoneBehavior.prototype = {
    initialize:  function() {
        Custom.UI.DropZoneBehavior.callBaseMethod(this, ‘initialize’);
        // Register ourselves as a drop target.
        Sys.Preview.UI.DragDropManager.registerDropTarget(this);
        },
    dispose: function() {
        Custom.UI.DropZoneBehavior.callBaseMethod(this, ‘dispose’);
        },
    getDescriptor: function() {
        var td = Custom.UI.DropZoneBehavior.callBaseMethod(this, ‘getDescriptor’);
        return td;
        },
    // IDropTarget members.
    get_dropTargetElement: function() {
        return this.get_element();
        },
    drop: function(dragMode, type, data) {
        alert(‘dropped’);
        },
    canDrop: function(dragMode, dataType) {
        return true;
        },
    onDragEnterTarget: function(dragMode, type, data) {
        },
    onDragLeaveTarget: function(dragMode, type, data) {
        },
    onDragInTarget: function(dragMode, type, data) {
        }
}
Custom.UI.DropZoneBehavior.registerClass(‘Custom.UI.DropZoneBehavior’, Sys.UI.Behavior, Sys.Preview.UI.IDragSource, Sys.Preview.UI.IDropTarget, Sys.IDisposable);
if(typeof(Sys) != “undefined”) {Sys.Application.notifyScriptLoaded();}



I need to explain this class a bit backwards.  The first thing worth noticing is the second-to-last line, which begins “Custom.UI.DropZoneBehavior.registerClass”.  This is where the DropZoneBehavior class defined above gets registered with Ajax Extensions.  The first parameter of the registerClass method takes the name of the class, the second takes the base class, and the remaining parameters take the interfaces implemented by the new class.  The line after that raises a custom event indicating that the script has finished loading (this is needed in order to support Safari, which does not do this natively).  Now back to the top: the “Type.registerNamespace” method allows you to register your custom namespace.  The next line declares the new class using an anonymous function syntax.  This is a way of writing JavaScript that I am not particularly familiar with, but it is very important for making JavaScript object oriented, and it is essential for designing Ajax Extensions behaviors.  In the prototype definition, the methods initialize, dispose, and getDescriptor are standard methods used for all behavior classes, and in this implementation all you need to do is call the base method (that is, the method of the base class specified in that second-to-last line).  The only special thing you do is register the drop target with the Sys.Preview.UI.DragDropManager in the initialize method.  This is the act that makes much of the drag and drop magic happen.

Next, you implement the IDropTarget methods.  In this example, only two of them, “canDrop” and “drop”, get real implementations.  For “canDrop”, you are just going to return true.  More interesting logic can be placed here to determine which floating div tags can actually be dropped on a given target, and even what different sorts of floating divs will do when they are dropped, but in this case you only want a bare-bones implementation of IDropTarget that will allow any floating div to be dropped on it.  Your implementation of the “drop” method is similarly bare bones: when a floating element is dropped on one of your drop targets, an alert message will pop up indicating that something has occurred.  And that’s about it.  You now have a drop behavior that works with the floating behavior we used in the previous examples.
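If you later want something a bit less permissive, you could replace the canDrop and drop members in the prototype above with something like the following sketch; the ‘HTML’ data-type check is an assumption made for illustration, and the feedback simply rewrites the drop zone itself:

canDrop: function(dragMode, dataType) {
     // only accept drag sources that advertise the expected data type (assumed to be 'HTML' here)
     return dataType === 'HTML';
     },
drop: function(dragMode, type, data) {
     // 'data' is whatever the drag source returned from its getDragData method
     this.get_element().style.backgroundColor = 'LightGreen';
     this.get_element().innerHTML = 'Something was dropped here';
     },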

You should now write up a page to show off your new custom dropzone behavior.  You can build on the previous samples to accomplish this.  In the Script Manager, besides registering the PreviewDragDrop script, you will also want to register your new DropZoneBehavior script:



<asp:ScriptManager ID=”ScriptManager1″ runat=”server”>
    <Scripts>
        <asp:ScriptReference Name=”Microsoft.Web.Resources.ScriptLibrary.PreviewScript” />
        <asp:ScriptReference Name=”Microsoft.Web.Resources.ScriptLibrary.PreviewDragDrop” />
        <asp:ScriptReference Path=”scriptLibrary/DropZoneBehavior.js” />
    </Scripts>
</asp:ScriptManager>


Next, you will want to add a new div tag to the HTML body, that can be used as a drop target:



<div style=”background-color:Red;height:200px;width:200px;”>
    <div id=”draggableDiv” style=”height:100px;width:100px;background-color:Blue;”>
        <div id=”handleBar” style=”height:20px;width:auto;background-color:Green;”>
        </div>
    </div>
</div>
<div id=”dropZone” style=”background-color:cornflowerblue;height:200px;width:200px;”>
    Drop Zone
</div>


Finally, you need to add a declarative markup element to add your custom DropZone behavior to the div you plan to use as a dropzone element. The XML markup should look like this:



<script type=”text/xml-script”>
    <page xmlns:script=”http://schemas.microsoft.com/xml-script/2005″ xmlns:JavaScript=”Custom.UI”>
<components>
<control id=”dropZone”>
                <behaviors>
                    <JavaScript:DropZoneBehavior/>
                </behaviors>
            </control>
<control id=”draggableDiv”>
                <behaviors>
                    <floatingBehavior handle=”handleBar”/>
                </behaviors>
            </control>
        </components>
    </page>
</script>


The code you have just written should basically add a drop zone to the original declarative drag and drop example.  When you drop your drag element on the drop zone, an alert message should now appear.  You can expand on this code to make the drop method of your custom dropzone behavior do much more interesting things, such as firing off other javascript events in the current page or even calling a webservice, using ASP.NET Ajax, that will in turn process server-side code for you. 

III. ASP.NET Ajax Dynamic Drag and Drop


 



Since the declarative model is much cleaner than the imperative model, why would you ever want to write your own javascript to handle Ajax Extensions behaviors?  You might want to roll your own javascript if you want to add behaviors dynamically.  One limitation of the declarative model is that you can only work with objects that are initially on the page.  If you start adding objects to the page dynamically, you cannot add the floating behavior to them using the declarative model.  With the imperative model, on the other hand, you can.

Building on the previous example, you will replace the “pageLoad()” function with a function that creates floating divs on demand.  The following javascript function will create a div tag with another div tag embedded to use as a handlebar, then insert the div tag into the current page, and finally add floating behavior to the div tag:


function createDraggableDiv() {
var panel= document.createElement(“div”);
panel.style.height=“100px”;
panel.style.width=“100px”;
panel.style.backgroundColor=“Blue”;
var panelHandle = document.createElement(“div”);
panelHandle.style.height=“20px”;
panelHandle.style.width=“auto”;
panelHandle.style.backgroundColor=“Green”;
panel.appendChild(panelHandle);
var target = $get(‘containerDiv’).appendChild(panel);
addFloatingBehavior(panel, panelHandle);
}

You will then just need to add a button to the page that calls the “createDraggableDiv()” function. The new HTML body should look something like this:


<input type=”button” value=”Add Floating Div” onclick=”createDraggableDiv();” />
<div id=”containerDiv” style=”background-color:Purple;height:800px;width:600px;”/>

This will allow you to add as many draggable elements to your page as you like, thus demonstrating the power and flexibility available to you once you understand the relationship between using Ajax Extensions declaratively and using it programmatically.  As a point of reference, here is the complete code for the dynamic drag and drop example:



<%@ Page Language=”C#”  %>
<!DOCTYPE html PUBLIC “-//W3C//DTD XHTML 1.0 Transitional//EN” “http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd”>
<html xmlns=”http://www.w3.org/1999/xhtml” >
<head runat=”server”>
<title>Imperative Drag and Drop II</title>
<script type=”text/javascript”>
function createDraggableDiv() {
     var panel = document.createElement(“div”);
     panel.style.height=”100px”;
     panel.style.width=”100px”;
     panel.style.backgroundColor=”Blue”;
     var panelHandle = document.createElement(“div”);
     panelHandle.style.height=”20px”;
     panelHandle.style.width=”auto”;
     panelHandle.style.backgroundColor=”Green”;
     panel.appendChild(panelHandle);
     var target = $get(‘containerDiv’).appendChild(panel);
     addFloatingBehavior(panel, panelHandle);
     }
function addFloatingBehavior(ctrl, ctrlHandle){
     $create(Sys.Preview.UI.FloatingBehavior, {‘handle’: ctrlHandle}, null, null, ctrl);
     }
</script>
</head>
<body>
<form id=”form1″ runat=”server”>
<asp:ScriptManager ID=”ScriptManager1″ runat=”server”>
<Scripts>
        <asp:ScriptReference Name=”Microsoft.Web.Resources.ScriptLibrary.PreviewScript.js” />
 <asp:ScriptReference Name=”Microsoft.Web.Resources.ScriptLibrary.PreviewDragDrop.js” />
</Scripts>
</asp:ScriptManager>
<h2>Imperative Drag and Drop Code with javascript: demonstrate dynamic loading of behaviors</h2>
<input type=”button” value=”Add Floating Div” onclick=”createDraggableDiv();” />
<div id=”containerDiv” style=”background-color:Purple;height:800px;width:600px;”/>
</form>
</body>
</html>

II. ASP.NET Ajax Imperative Drag and Drop


 



To accomplish the same thing using a programmatic model requires a bit more code, but not much more.  It is important to understand that when you add an Ajax Extensions Script Manager component to your page, you are actually giving instructions to have the Ajax Extensions javascript library loaded into your page.  The Ajax Extensions library, among other things, provides client-side classes that extend the DOM and provide you with tools that allow you to code in a browser agnostic manner (though there currently are still issues with Safari compatibility).  These client-side classes also allow you to add behaviors to your html elements.

To switch to an imperative model, you will need to replace the XML markup with two javascript functions.  The first one is generic script to add floating behavior to an html element.  It leverages the Ajax Extensions client-side classes to accomplish this:



<script type=”text/javascript”>
        function addFloatingBehavior(ctrl, ctrlHandle){
              $create(Sys.Preview.UI.FloatingBehavior, {‘handle’: ctrlHandle}, null, null, ctrl);


              }
</script>



The function takes two parameter values: the html element that you want to make draggable, and the html element that serves as the drag handle for the dragging behavior.  The new $create function encapsulates the instantiation and initialization routines for the behavior.  The addFloatingBehavior utility function will be used throughout the rest of this tutorial.
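For reference, the $create shorthand used here follows the general pattern below; the comments describe the arguments as I understand them rather than quoting the official documentation:

// $create(type, properties, events, references, element)
//   type       - the client class to instantiate, here Sys.Preview.UI.FloatingBehavior
//   properties - an object literal of initial property values, here the drag handle
//   events     - event handlers to hook up (null in this example)
//   references - references to other components (null in this example)
//   element    - the DOM element the new component is attached to
$create(Sys.Preview.UI.FloatingBehavior, {'handle': ctrlHandle}, null, null, ctrl);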

Now you need to call the “addFloatingBehavior” function when the page loads.  This, surprisingly, was the hardest part about coding this example.  The Script Manager doesn’t simply create a reference to the Ajax Extensions javascript libraries, and I have read speculation that it actually loads the library scripts into the DOM.  In any case, what this means is that the libraries get loaded only after everything else on the page is loaded.  The problem for us, then, is that there is no standard way to make our code that adds the floating behavior run after the libraries are loaded; and if we try to run it before the libraries are loaded, we simply generate javascript errors, since all of the Ajax Extensions methods we call can’t be found.

There are actually a few workarounds for this, but the easiest one is to use a custom Ajax Extensions event called “pageLoad()” that  only gets called after the libraries are loaded.  To add the floating behavior to your div tag when the page is first loaded (but after the library scripts are loaded) you just need to write the following:


<script type=“text/javascript”>
function pageLoad(){
addFloatingBehavior(document.getElementById(‘draggableDiv’),
document.getElementById(‘handleBar’));
}
</script>

which, in turn, can be written this way, using an Ajax Extensions scripting shorthand that replaces “document.getElementById()” with “$get()“:


<script type=“text/javascript”>
function pageLoad(){
addFloatingBehavior($get(‘draggableDiv’),$get(‘handleBar’));
}
</script>

And once again, you have a draggable div that behaves exactly the same as the draggable div you wrote using the declarative model.

I. ASP.NET Ajax Declarative Drag and Drop

 


The first task is to use XML markup to add drag-drop behavior to a div tag. By drag and drop, I just mean the ability to drag an object and then have it stay wherever you place it.  The more complicated behavior of making an object actually do something when it is dropped on a specified drop target will be addressed later in this tutorial.  To configure your webpage to use ASP.NET Ajax, you will need to install the Microsoft.Web.Extensions.dll into your Global Assembly Cache.  You will also need a reference to the Microsoft.Web.Preview.dll library.  Finally, you will need to configure your web.config file with the following entry:



<system.web>
    <pages>
        <controls>
            <add tagPrefix=”asp” namespace=”Microsoft.Web.UI” assembly=”Microsoft.Web.Extensions, Version=1.0.61025.0, Culture=neutral,  PublicKeyToken=31bf3856ad364e35″ />
            <add tagPrefix=”asp” namespace=”Microsoft.Web.UI.Controls” assembly=”Microsoft.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35″/>
            <add tagPrefix=”asp” namespace=”Microsoft.Web.Preview.UI” assembly=”Microsoft.Web.Preview” />
        </controls>
    </pages>
</system.web>


You will need to add an ASP.NET Ajax ScriptManager control to your .aspx page and configure it to use the PreviewDragDrop library file:



<asp:ScriptManager ID=”ScriptManager1″ runat=”server”>
    <Scripts>
        <asp:ScriptReference Name=”Microsoft.Web.Resources.ScriptLibrary.PreviewScript.js” />
 <asp:ScriptReference Name=”Microsoft.Web.Resources.ScriptLibrary.PreviewDragDrop.js” />
    </Scripts>
</asp:ScriptManager>


Add the div object you want to make draggable, and make sure it has a drag handle:



<div style=”background-color:Red;height:800px;width:600px;”>
    <div id=”draggableDiv” style=”height:100px;width:100px;background-color:Blue;”>
        <div id=”handleBar” style=”height:20px;width:auto;background-color:Green;”>
        </div>
    </div>
</div>


Finally, add the markup script that will make your div draggable:



<script type=”text/xml-script”>
    <page xmlns:script=”http://schemas.microsoft.com/xml-script/2005″>
        <components>
            <control id=”draggableDiv”>
                <behaviors>
                    <floatingBehavior handle=”handleBar”/>
                </behaviors>
            </control>
        </components>
    </page>
</script>


And with that, you should have a draggable div tag.  The example demonstrates the simplicity and ease of using the declarative model with Ajax Extensions.  In the terminology being introduced with Ajax Futures, you have just used declarative markup to add the floating behavior to an html element.