II. Imperative Drag Drop


To accomplish the same thing using a programmatic model requires a bit more code, but not much more. It is important to understand that when you add an Atlas Script Manager component to your page, you are actually instructing the page to load the Atlas JavaScript library. The Atlas library, among other things, provides client-side classes that extend the DOM and give you tools for coding in a browser-agnostic manner (though there are currently still issues with Safari compatibility). These client-side classes also allow you to add behaviors to your HTML elements.

To switch to an imperative model, you will need to replace the XML markup with two JavaScript functions. The first one is a generic script to add floating behavior to an HTML element. It leverages the Atlas client-side classes to accomplish this:

<script type="text/javascript">
function addFloatingBehavior(ctrl, ctrlHandle){
    //create new floating behavior object
    var floatingBehavior = new Sys.UI.FloatingBehavior();
    //the floating behavior class has a handle property
    floatingBehavior.set_handle(ctrlHandle);
    //convert the object reference to an Atlas client control
    var dragItem = new Sys.UI.Control(ctrl);
    //get the behaviors collection from the Atlas control
    //and add our floating behavior
    dragItem.get_behaviors().add(floatingBehavior);
    //run the floating behavior's internal initialization
    floatingBehavior.initialize();
}
</script>

The function takes two parameters: the HTML element that you want to make draggable, and the HTML element that serves as the drag handle. Next, you instantiate a new Atlas client-side behavior object. The floating behavior has a property called “handle”, to which you pass the handle HTML element. You then create a new client-side control object based on the control you want to make draggable. Turning your div tag into an Atlas client-side control enables you to add Atlas behaviors to it. You use the “get_behaviors()” method to return a collection of behaviors, and the collection’s “add()” method to add a new behavior to your HTML object. Finally, you call the “initialize()” method of the behavior object to allow the behavior to configure itself internally. This utility function will be used throughout the rest of this tutorial.
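Since the Atlas classes only exist in the browser, it can help to see the same sequence modeled in plain JavaScript. The following sketch uses hypothetical stand-ins (the Control and FloatingBehavior below are not the Atlas types) purely to illustrate the pattern: wrap the element in a control, add a behavior to its behaviors collection, then initialize the behavior:

```javascript
// Hypothetical stand-ins for the Atlas client classes, showing the
// pattern only: a control wraps an element and exposes a behaviors
// collection; a behavior is added to it and then initialized.
function Control(element) {
    this._element = element;
    this._behaviors = [];
}
Control.prototype.get_behaviors = function () {
    return this._behaviors;
};

function FloatingBehavior() {
    this.handle = null;
    this.initialized = false;
}
FloatingBehavior.prototype.initialize = function () {
    // the real behavior wires up mouse events here; we only record the call
    this.initialized = true;
};

// mirrors the shape of the addFloatingBehavior utility above
function addFloatingBehavior(ctrl, ctrlHandle) {
    var floatingBehavior = new FloatingBehavior();
    floatingBehavior.handle = ctrlHandle;
    var dragItem = new Control(ctrl);
    dragItem.get_behaviors().push(floatingBehavior);
    floatingBehavior.initialize();
    return dragItem;
}
```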

Now, you need to call the “addFloatingBehavior” function when the page loads. This, surprisingly, was the hardest part of coding this example. The Script Manager doesn’t simply emit a reference to the Atlas JavaScript libraries; I have read speculation that it actually loads the library scripts into the DOM. In any case, this means that the libraries get loaded only after everything else on the page is loaded. The problem, then, is that there is no standard way to make the code that adds the floating behavior run after the libraries are loaded; if we try to run it before the libraries are loaded, we simply generate JavaScript errors, since none of the Atlas methods we call can be found.

There are a few workarounds for this, but the easiest one is to use a custom Atlas event called “pageLoad()”, which is only raised after the libraries are loaded. To add the floating behavior to your div tag when the page is first loaded (but after the library scripts are loaded), you just need to write the following:

<script type="text/javascript">
    function pageLoad(){
        addFloatingBehavior(document.getElementById('draggableDiv'),
            document.getElementById('handleBar'));
    }
</script>

which, in turn, can be written this way, using an Atlas script shorthand that replaces “document.getElementById()” with “$()”:

<script type="text/javascript">
    function pageLoad(){
        addFloatingBehavior($('draggableDiv'), $('handleBar'));
    }
</script>
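The “$()” shorthand itself is nothing magical: conceptually it is just an alias for “document.getElementById()”. Here is a minimal sketch of the idea (hypothetical code, not the Atlas implementation) written against any object that exposes a getElementById method:

```javascript
// Hypothetical sketch of the "$" shorthand: an alias for
// getElementById on whatever document-like object you pass in.
function makeDollar(doc) {
    return function (id) {
        return doc.getElementById(id);
    };
}
```

In the browser, the alias is bound to the real document, so $('draggableDiv') and document.getElementById('draggableDiv') return the same element.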

And once again, you have a draggable div that behaves exactly the same as the draggable div you wrote using the declarative model.
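The timing trick behind “pageLoad()” can be modeled in a few lines of plain JavaScript. This is a hypothetical model, not the Atlas internals: the runtime finishes loading its own scripts first, and only then invokes a global pageLoad function if one is defined, so any Atlas types the handler touches are guaranteed to exist by the time it runs:

```javascript
// Hypothetical model of the load sequence: the library finishes
// loading first, then calls the user's pageLoad() hook.
function LibraryRuntime(global) {
    this._global = global;
    this.loaded = false;
}
LibraryRuntime.prototype.finishLoading = function () {
    this.loaded = true;
    // only now is it safe to run user code that touches library types
    if (typeof this._global.pageLoad === 'function') {
        this._global.pageLoad();
    }
};
```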

I. Declarative Drag Drop

The first task is to use Atlas markup to add drag-drop behavior to a div tag. By drag and drop, I just mean the ability to drag an object and then have it stay wherever you place it. The more complicated behavior of making an object actually do something when it is dropped on a specified drop target will be addressed later in this tutorial. To configure your webpage to use Atlas, you will need to download the Microsoft.Web.Atlas.dll file from the Microsoft site into your bin folder and configure your web.config file with the following entry:

<add namespace="Microsoft.Web.UI"
     assembly="Microsoft.Web.Atlas" tagPrefix="atlas"/>
<add namespace="Microsoft.Web.UI.Controls"
     assembly="Microsoft.Web.Atlas" tagPrefix="atlas"/>
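For context, these entries belong in the controls section of web.config. Assuming the standard ASP.NET 2.0 layout, the surrounding structure looks roughly like this:

```xml
<configuration>
  <system.web>
    <pages>
      <controls>
        <add namespace="Microsoft.Web.UI"
             assembly="Microsoft.Web.Atlas" tagPrefix="atlas"/>
        <add namespace="Microsoft.Web.UI.Controls"
             assembly="Microsoft.Web.Atlas" tagPrefix="atlas"/>
      </controls>
    </pages>
  </system.web>
</configuration>
```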

You will need to add an Atlas Script Manager control to your .aspx page and configure it to use the AtlasUIDragDrop library file:

<atlas:ScriptManager ID="ScriptManager1" runat="server">
    <Scripts>
        <atlas:ScriptReference ScriptName="AtlasUIDragDrop" />
    </Scripts>
</atlas:ScriptManager>

Add the div object you want to make draggable, and make sure it has a drag handle:

<div style="background-color:Red;height:800px;width:600px;">
    <div id="draggableDiv">
        <div id="handleBar">Drag Handle</div>
        Draggable content
    </div>
</div>

Finally, add the markup script that will make your div draggable:

<script type="text/xml-script">
    <page xmlns:script="http://schemas.microsoft.com/xml-script/2005">
        <components>
            <control id="draggableDiv">
                <behaviors>
                    <floatingBehavior handle="handleBar"/>
                </behaviors>
            </control>
        </components>
    </page>
</script>

And with that, you should have a draggable div tag. The example demonstrates the simplicity and ease of using the declarative model with Atlas. In the terminology being introduced with Atlas, you have just used declarative markup to add floating behavior to an HTML element.

Microsoft Atlas Drag and Drop Tutorial

Source Code: AtlasDragNDrop.zip (467.5 KB)


I’m not sure if this explanation belongs in the preface or in the introduction, or what the exact difference is between the two.  In any case, the purpose of this preface is to explain why this tutorial is here.  I wrote it originally for a different site, codeproject.com, to which I have since lost my password.  At the same time, the underlying Atlas framework, upon which this tutorial is based and for which it is written, has gone through several iterations, thus obligating me to update this tutorial to make sure it is still valid.  So I’ve placed it here, in an easily accessible location, where I can quickly edit the text of this tutorial for as long as I can remember the password.  Who knows, should I find myself short of material again, I may start posting my journal article on Vico, grad school essays, and so on…


This tutorial is intended to help readers understand how certain aspects of Microsoft’s new Atlas technology work. Atlas is intended to simplify the development of AJAX-style functionality. As with all technologies, however, to use a tool well, it is important to understand the underlying technology that Atlas abstracts. One of the key Atlas abstractions is the new XML markup syntax developed to make coding with Atlas easier. With XML markup, developers can modify their code declaratively. However, there are times when a developer may want to be able to change her code programmatically, and in order to accomplish this, she will need to understand that underneath the markup abstraction, she is actually dealing with good ol’ JavaScript and some custom JavaScript libraries developed by Microsoft. In order to demonstrate the relationship between the Atlas declarative model and the programmatic model, I will go through a series of examples in which the same task will be accomplished both declaratively and programmatically. I will be demonstrating how to use the AtlasUIDragDrop library file to perform basic drag-drop operations as well as set up drop zones.


As I write this, Atlas is still in beta, and continues to change. These examples apply to the April CTP of Atlas. Newer releases of Atlas may affect the accuracy of this tutorial. I will attempt to update the code as new versions of Atlas become available. Atlas only works with .NET 2.0.

Giulio Camillo, father of the Personal Computer

I am not the first to suggest it, but I will add my voice to those that want to claim that Giulio Camillo built the precursor of the modern personal computer in the 16th century.  Claiming that anyone invented anything is always a precarious venture, and it can be instructive to question the motives of such attempts.  For instance, trying to determine whether Newton or Leibniz invented calculus is simply a question of who most deserves credit for this remarkable achievement.

Sometimes the question of firsts is intended to reveal something that we did not know before, such as Harold Bloom’s suggestion that Shakespeare invented the idea of personality as we know it.  In making the claim, Bloom at the same time makes us aware of the possibility that personality is not in fact something innate, but something created.  Edmund Husserl turns this notion on its head a bit with his reference in his writings to the Thales of Geometry.  Geometry, unlike the notion of personality, cannot be so easily reduced to an invention, since it is eidetic in nature.  It is always true, whether anyone understands geometry or not.  And so there is a certain irony in holding Thales to be the originator of Geometry since Geometry is a science that was not and could not have been invented as such.  Similarly, each of us, when we discover the truths of geometry for ourselves, becomes in a way a new Thales of Geometry, having made the same observations and realizations for which Thales receives credit. 

Sometimes the recognition of firstness is a way of initiating people into a secret society.  Such, it struck me, was the case when I read as a teenager from Stephen J. Gould that Darwin was not the first person to discover the evolutionary process, but that it was in fact another naturalist named Alfred Russel Wallace, and suddenly a centuries-long conspiracy to steal credit from the truly deserving Wallace was revealed to me — or so it had seemed at the time.

Origins play a strange role in etymological considerations, and when we read Aristotle’s etymological ruminations, there is certainly a sense that the first meaning of a word will somehow provide the key to understanding the concepts signified by the word.  There is a similar intuition at work in the discussions of ‘natural man’ to be found in the political writings of Hobbes, Locke and Rousseau.  For each, the character of the natural man determines the nature of the state, and consequently how we are to understand it best.  For Hobbes, famously, the life of this kind of man is rather unpleasant.  For Locke, the natural man is typified by his rationality.  For Rousseau, by his freedom.  In each case, the character of the ‘natural man’ serves as a sort of gravitational center for understanding man and his works at any time. I have often wondered whether the discussions of the state of the natural man were intended as a scientific deduction or rather merely as a metaphor for each of these great political philosophers.  I lean toward the latter opinion, in which case another way to understand firsts is not as an attempt to achieve historical accuracy, but rather as an attempt to find the proper metaphor for something modern.

So who invented the computer?  Was it Charles Babbage with his Difference Engine in the 19th century, or Alan Turing in the 20th with his template for the Universal Machine?  Or was it Ada Lovelace, as some have suggested, the daughter of Lord Byron and collaborator with Charles Babbage who possibly did all the work while Babbage receives all the credit?

My question is a simpler one: who invented the personal computer, Steve Jobs or Giulio Camillo?  I award the laurel to Camillo, who was known in his own time as the Divine Camillo because of the remarkable nature of his invention.  And in doing so, of course, I am merely attempting to define what the personal computer really is — the gravitational center that is the role of the personal computer in our lives.

Giulio Camillo spent long years working on his Memory Theater, a miniaturized Vitruvian theater still big enough to walk into, basically a box, that would provide the person who stood before it the gift most prized by Renaissance thinkers: the eloquence of Cicero.  The theater itself was arranged with images and figures from Greek and Roman mythology.  Throughout it were Christian references intermixed with Hermetic and Kabbalistic symbols.  In small boxes beneath various statues inside the theater, fragments and adaptations of Cicero’s writings could be pulled out and examined.  Through the proper physical arrangement of the fantastic, the mythological, the philosophical and the occult, Camillo sought to provide a way for anyone who stepped before his theater to be able to discourse on any subject no less fluently and eloquently than Cicero himself.

Eloquence in the 16th century was understood as not only the ability of the lawyer or statesman to speak persuasively, but also the ability to evoke beautiful and accurate metaphors, the knack for delighting an audience, the ability to instruct, and mastery of diverse subjects that could be brought forth from the memory in order to enlighten one’s listeners.  Already in Camillo’s time, mass production of books was coming into its own and creating a transformation of culture.  Along with it, the ancient arts of memory and of eloquence (by way of analogy we might call it literacy, today), whose paragon was recognized to be Cicero, were in decline.  Thus Camillo came along at the end of this long tradition of eloquence to invent a box that would capture all that was best of the old world that was quickly disappearing.  He created, in effect, an artificial memory that anyone could use, simply by stepping before it, to invigorate himself with the accumulated eloquence of all previous generations.

And this is how I think of the personal computer.  It is a box, occult in nature, that provides us with an artificial memory to make us better than we are, better than nature made us.  Nature distributes her gifts randomly, while the personal computer corrects that inherent injustice.  The only limitation to the personal computer, as I see it, is that it can only be a repository for all the knowledge humanity has already acquired.  It cannot generate anything new, as such.  It is a library and not a university.

Which is where the internet comes in.  The personal computer, once it is attached to the world wide web, becomes invigorated by the chaos and serendipity that is the internet.  Not only do we have the dangerous chaos of viruses and trojan horses, but also the positive chaos of online discussions, the putting on of masks and mixing with the online personas of others, the random following of links across the internet that ultimately leads us to make new connections between disparate concepts in ways that seem natural and inevitable.

This leads me to the final connection I want to make in my overburdened analogy.  Just as the personal computer is not merely a box, but also a doorway to the internet, so Giulio Camillo’s Theater of Memory was tied to a Neoplatonic worldview in which the idols of the theater, if arranged properly and fittingly, could draw down the influences of the intelligences, divine beings known variously as the planets (Mars, Venus, etc.), the Sephiroth, or the Archangels.  By standing before Camillo’s box, the spectator was immediately plugged into these forces, the consequences of which are difficult to assess.  There is danger, but also much wonder, to be found on the internet.

The Bond Martini


We all know that James Bond drinks his martinis “shaken, not stirred.”  In the first Bond novel by Ian Fleming, we are actually given directions for making a very large martini, which Bond invents and later dubs ‘The Vesper,’ after Vesper Lynd, the heroine of Casino Royale.


Bond insisted on ordering Leiter’s Haig-and-Haig ‘on the rocks’ and then he looked carefully at the barman.

‘A dry martini,’ he said. ‘One. In a deep champagne goblet.’

‘Oui, monsieur.’

‘Just a moment. Three measures of Gordon’s, one of vodka, half a measure of Kina Lillet.  Shake it very well until it’s ice-cold, then add a large thin slice of lemon-peel. Got it?’

‘Certainly, monsieur.’ The barman seemed pleased with the idea.

‘Gosh, that’s certainly a drink,’ said Leiter.

Bond laughed. ‘When I’m … er … concentrating,’ he explained, ‘I never have more than one drink before dinner. But I do like that one drink to be large and very strong and very cold and very well-made. I hate small portions of anything, particularly when they taste bad. This drink’s my own invention. I’m going to patent it when I can think of a good name.’

He watched carefully as the deep glass became frosted with the pale golden drink, slightly aerated by the bruising of the shaker. He reached for it and took a long sip.

‘Excellent,’ he said to the barman, ‘but if you can get a vodka made with grain instead of potatoes, you will find it still better.’

‘Mais n’enculons pas des mouches,’ he added in an aside to the barman. The barman grinned.

‘That’s a vulgar way of saying “we won’t split hairs”,’ explained Bond.

I and Thou

Few of us are ever afforded the opportunity to read so thoroughly and deeply as we once did in college.  I am occasionally nostalgic for that period of my life, thinking fondly and even enviously of the boundless amount of leisure time I once had, but did not seem to appreciate.  In fact, the overwhelming mood that pervaded my experience of that time was not one of leisure, but rather one of impatience.  A fellow student remarked to me, while we were both participants in a seminar with a longish reading list, that she wished deeply to be free of the burden of having all these books still to read.  And I suggested that while the reading would indeed be difficult, it would be a great pleasure to one day know that we had mastered these books.  She corrected me.  Her desire was peculiar.  She wanted through some miraculous conveyance to have had already read these books, and to have had already mastered them, rather than to have them before her as a large unknown that only emphasized, by contrast, our state of relative ignorance.  She wanted, as many people have in the long history of man, to possess instant wisdom.  Upon hearing her describe this desire, I realized that I wanted this also.

But all that was said before google and wikipedia.  The internet has become, for many, the royal road to wisdom — or at least a simulacrum of wisdom.  In virtual communities across the world wide web, arguments are now settled by appealing to wikipedia.  Despite the well-publicized problems with its reliability, wikipedia is still more convenient than looking things up in a book or relying on a shared education.  When one is in want of an argument, political blogs are often the best place to go to find out how one should best defend a given viewpoint.  Not only are arguments provided for any topical issue, but even rhetorical flourishes are provided.  Nor does one have to cite the source of one’s arguments.  Instead one can simply state them as one’s own original viewpoint — they are in a way open source opinions.  Finally, when one needs to appear wiser than one actually is, for the sake of one’s virtual community, google readily points to facts, viewpoints, timelines, interpretations on almost any subject, generally in an easily digestible form.  And again, citations are completely unnecessary.

These wonderful tools: the internet, google, wikipedia, and the blogosphere, can be used as if they are an extension of our memories.  They offer that for which my college friend longed, the ability to possess knowledge, as well as the perspectives and opinions which come with that knowledge, without the laborious effort that once was required to actually gain this knowledge.  While there may still be some bragging rights associated with having gained knowledge through one’s own effort, it is fair to ask what is the real difference between this new form of wisdom and the traditional kind that had previously been so difficult to attain?  Is the difference anything more than one of locus: whether the wisdom is contained within our souls or accessed from outside our skulls?

One can already feel the impatience with the state of the technology.  As fast as the internet is, it still does not match the speed with which one can recall information from one’s internal memory.  And while a search of one’s recollections is often as uncanny as Plato’s aviary, it still tends to be less tendentious, if also not so extensive, as a google search.  The technology, however, much like we once said about the mind, can be improved.  As the interface between our minds and the cavernous stores of memory available on the internet improves, the separation we perceive between the two may eventually become a mere figment, a virtual oddity future generations will read about on a future wikipedia to better understand how people used to think.

Where to Begin?


But in the night of thick darkness enveloping the earliest antiquity, so remote from ourselves, there shines the eternal and never failing light of a truth beyond all question: that the world of civil society has certainly been made by men, and that its principles are therefore to be found within the modifications of our own human mind. Whoever reflects on this cannot but marvel that the philosophers should have bent all their energies to the study of the world of nature, which, since God made it, He alone knows; and that they should have neglected the study of the world of nations, or civil world, which, since men had made it, men could come to know. This aberration was a consequence of that infirmity of the human mind by which, immersed and buried in the body, it naturally inclines to take notice of bodily things, and finds the effort to attend to itself too laborious; just as the bodily eye sees all objects outside itself but needs a mirror to see itself. — Giambattista Vico, The New Science

Authentically Virtual