Surrendering to Technology


My wife pulled into the garage yesterday after a shopping trip and called me out to her car to catch the tail end of a vignette on NPR about the Theater of Memory tradition Frances Yates rediscovered in the 60’s — a subject my wife knows has been a particular interest of mine since graduate school.  The radio essayist was discussing his attempt to create his own memory theater by forming the image of a series of rooms in his mind and placing strange mnemonic creatures representing different things he wanted to remember in each of the corners.  Over time, however, he finally came to the conclusion that there was nothing in his memory theater that he couldn’t find on the Internet and, even worse, his memory theater had no search button.  Eventually he gave up on the Renaissance theater of memory tradition and replaced it with Google.


I haven’t read Yates’s The Art of Memory for a long time, but it seemed to me that the guy on the radio had gotten it wrong, somehow.  While the art of memory began as a set of techniques allowing an orator to memorize topics about which he planned to speak, often for hours, over time it became something else.  The novice rhetorician would begin by spending a few years memorizing every nook and cranny of some building until he was able to recall every aspect of its rooms simply by closing his eyes.  Next he would spend several more years learning the techniques to build mnemonic images, which he would then place at different stations of his memory theater in preparation for an oration.  The rule of thumb was that the most memorable images were also the most outrageous and monstrous.  A notable example originating in the Latin mnemonic textbook Ad Herennium is a ram’s testicles used as a placeholder for a lawsuit, since witnesses must testify in court, and testify sounds like testicles.


As a mere technique, the importance of the theater of memory waned with the appearance of cheap paper as a new memory technology.  Instead of working for years to make the mind powerful enough to remember a multitude of topics, we can now write our topics down on paper and recall them as we like.  The final demise of the theater of memory is no doubt realized in the news announcer who reads off a teleprompter, being fed words to say as if they were drawn from his own memory.  This is of course an illusion, and the announcer is merely a host for the words that flow through him.


A variation on the theater of memory not obviated by paper began to be formulated in the Renaissance in the works of men like Marsilio Ficino, Giulio Camillo, Giordano Bruno, Raymond Lull, and Peter Ramus.  Through them, the theater of memory was integrated with the Hermetic tradition, and the mental theater was transformed into something more than a mere technique for remembering words and ideas.  Instead, the Hermetic notion of the microcosm and macrocosm, and the sympathetic rules that could connect the two, became the basis for seeing the memory theater as a way to connect the individual with a world of cosmic and magical forces.  By placing objects in the memory theater that resonate with the celestial powers, the Renaissance magus was able to call upon these forces for insight and wisdom.


Since magic is not real, even these innovations are not so interesting on their own.  However, the eighteenth-century thinker Giambattista Vico, both a rationalist and someone steeped in the traditions of Renaissance magic, recast the theater of memory one more time.  For Vico, the memory theater was not a repository for magical artifacts, but rather something that is formed in each of us through acculturation; it contains a knowledge of the cultural institutions, such as property rights, marriage, and burial (the images within our memory theaters), that are universal and make culture possible.  Acculturation puts these images in our minds and makes it possible for people to live together.  As elements of our individual memory theaters, these civilizing institutions are taken to be objects in the world, when in actuality they are images buried so deeply in our memories that they exert a remarkable influence over our behavior.


Some vestige of this notion of cultural artifacts can be found in Richard Dawkins’s hypothesis about memes as units of culture.  Dawkins suggests that our thoughts are  made up, at least in part, of memes that influence our behavior in irrational but inexorable ways.  On analogy with his concept of genes as selfish replicators, he conceives of memes as things seeking to replicate themselves based on rules that are not necessarily either evident or rational.  His examples include, at the trivial end, songs that we can’t get out of our heads and, at the profound end, the concept of God.  For Dawkins, memes are not part of the hardwiring of the brain, but instead act like computer viruses attempting to run themselves on top of the brain’s hardware.


One interesting aspect of Dawkins’s interpretation of the spread of culture is that it also offers an explanation for the development of subcultures and fads.  Subcultures can be understood as communities that physically limit the vectors along which memes can spread, while fads can be explained away as short-lived viruses that are vital for a while but eventually waste their energies and disappear.  The increasing prevalence of visual media and the Internet, in turn, increases the number of vectors for the replication of memes, just as increased air travel improves the ability of real diseases to spread across the world.


Dawkins describes the replication of memetic viruses in impersonal terms.  The purpose of these viruses is not to advance culture in any way, but rather simply to perpetuate themselves.  The cultural artifacts spread by these viruses are not guaranteed to improve us, any more than Darwinian evolution offers to make us better morally, culturally or intellectually.  Even to think in these terms is a misunderstanding of the underlying reality.  Memes do not survive because we judge them to be valuable.  Rather, we deceive ourselves into valuing them because they survive.


How different this is from the Renaissance conception of the memory theater, for which the theater existed to serve man, instead of man serving simply to host the theater.  Ioan Couliano, in the 80’s, attempted to disentangle Renaissance philosophy from its magical trappings to show that at its root the Renaissance manipulation of images was a proto-psychology.  The goal of the Hermeticist was to cultivate and order images in order to improve both mind and spirit.  Properly arranged, these images would help him to see the world more clearly, and allow him to live in it more deeply.


For after all what are we but the sum of our memories?  A technique for forming and organizing these memories — to actually take control of our memories instead of simply allowing them to influence us willy-nilly — such as the Renaissance Hermeticists tried to formulate could still be of great use to us today.  Is it so preposterous that by reading literature instead of trash, by controlling the images and memories that we allow to pour into us, we can actually structure what sort of persons we are and will become?


These were the ideas that initially occurred to me when I heard the end of the radio vignette while standing in the garage.  I immediately went to the basement and pulled out Umberto Eco’s The Search For The Perfect Language, which has an excellent chapter in it called Kabbalism and Lullism in Modern Culture that seemed germane to the topic.  As I sat down to read it, however, I noticed that Doom, the movie based on a video game, was playing on HBO, so I ended up watching that on the brand new plasma TV we bought for Christmas.


The premise of the film is that a mutagenic virus (a virus that creates mutants?) is found on an alien planet that starts altering the genes of people it infects and turns them into either supermen or monsters depending on some predisposition of the infected person’s nature.  (There is even a line in the film explaining that the final ten percent of the human genome that has not been mapped is believed to be the blueprint for the human soul.)  Doom ends with “The Rock” becoming infected and having to be put down before he can finish his transformation into some sort of malign creature.  After that I pulled up the NPR website in order to do a search on the essayist who abandoned his memory theater for Google.  My search couldn’t find him.

Two Kinds of Jargon


I had taken it for granted that “Web 2.0” was simply a lot of hype until I came across this defense of the term by Kathy Sierra, by way of Steve Marx’s blog.  Kathy Sierra argues that “Web 2.0” is not simply a buzzword because it is, in fact, jargon.  She goes on to explore the notion of jargon and to explain why jargon is actually a good thing, and shamefully maligned.  This, I thought, certainly goes against the conventional wisdom.


In my various careers, I have become intimately familiar with two kinds of jargon: academic jargon and software jargon.  I will discuss academic jargon first, and then see if it sheds any light on software jargon.  The English word jargon derives from an Old French word meaning “a chattering,” as of birds.  It is generally used somewhat pejoratively, as in this sentence from an article by George Packer in the most recent New Yorker concerning the efforts of anthropologists to make the “war on terror” more subtle as well as more culturally savvy:



One night earlier this year, Kilcullen sat down with a bottle of single-malt Scotch and wrote out a series of tips for company commanders about to be deployed to Iraq and Afghanistan.  He is an energetic writer who avoids military and social-science jargon, and he addressed himself intimately to young captains who have had to become familiar with exotica such as “The Battle of Algiers,” the 1966 film documenting the insurgency against French colonists.




In this passage, jargon is understood as a possibly necessary mode of professional language that, while it facilitates communication within a professional community, makes the dissemination of ideas outside of that community of speakers difficult.


Even with this definition, however, one can see a sense in which the use of professional jargon is not a completely bad thing, but is in fact a trade-off.  While it makes speaking between professional communities difficult, and makes initiation into such a community difficult — for instance, the initiation of young undergraduates into philosophical discourse — once one is initiated into the argot of a professional community, the special language actually facilitates communication.  It serves as a shorthand for much larger concepts and increases the precision of the terms used within the community, since non-technical language tends to be ambiguous in a way that technical jargon, ideally, is not.  Take, for instance, the following sentences:



The question about that structure aims at the analysis of what constitutes existence. The context of such structures we call “existentiality”. Its analytic has the character of an understanding which is not existentiell, but rather existential. The task of an existential analytic of Dasein has been delineated in advance, as regards both its possibility and its necessity, in Dasein’s ontical constitution.




This passage is from the beginning of Martin Heidegger’s Being and Time, as translated by John Macquarrie and Edward Robinson.  To those unfamiliar with the jargon Heidegger develops for his existential phenomenology, it probably looks like balderdash.  One can see, however, how with time and through reading the rest of the work one might eventually come to understand Heidegger’s philosophical terms.  Jargon, qua jargon, is not necessarily bad, and much of the bad rap jargon gets is due to the resistance to comprehension, and the sense of intellectual insecurity, it engenders when one first encounters it.  Here is another example of jargon, pulled from a recent technical post on www.beyond3d.com called Origin of Quake3’s Fast InvSqrt():



The magic of the code, even if you can’t follow it, stands out as the i = 0x5f3759df – (i>>1); line. Simplified, Newton-Raphson is an approximation that starts off with a guess and refines it with iteration. Taking advantage of the nature of 32-bit x86 processors, i, an integer, is initially set to the value of the floating point number you want to take the inverse square of, using an integer cast. i is then set to 0x5f3759df, minus itself shifted one bit to the right. The right shift drops the least significant bit of i, essentially halving it.




I don’t understand what the author of this passage is saying, but I do know that he is enthusiastic about it, and I assume that, as with the Heidegger passage, I could come to understand the gist of the argument given a week and a good reference work.  I also believe that the author is trying to say what he is saying in the most precise and concise way he is able, and this is why he resorts to one kind of jargon to explain something that was originally written in an even more complicated technical language: a beautiful computer algorithm.


However, there is another, less benign, definition of jargon that sees its primary function not in clarifying concepts but in obfuscating them.  According to Theodor Adorno, in his devastating and unrelenting attack on Heidegger in The Jargon of Authenticity, jargon is “a sublanguage as superior language.”  For Adorno, jargon, especially in Heidegger’s case, is an imposture and a con.  It is the chosen language of charlatans.  Rudolf Carnap makes a similar, if less brutal, point in section 5 of his “Overcoming Metaphysics”, entitled “Metaphysical Pseudo-Sentences”, where he takes on Heidegger’s notorious sentence from “What Is Metaphysics?”, “Das Nichts selbst nichtet” (Nothingness itself nothings), for its meaninglessness.


We might be tempted to try to save jargon from itself, then, by distinguishing two kinds of jargon: good jargon and bad jargon.  Drawing such distinctions is at least as old as the use of jargon to clarify ideas, and goes back as far as, if not farther than, Pausanias’s distinction between the heavenly and the common Aphrodites in Plato’s Symposium.  With Pausanias we can say that the higher and the baser jargon can be distinguished, as he distinguishes two kinds of love, by the intent of the person using it.  When jargon is used to clarify ideas and make them precise, we are dealing with proper jargon.  When jargon is used, contrarily, to obfuscate, or to make the speaker seem smarter than he really is, this is deficient or bad jargon.


There are various Virgin and the Whore problems with this distinction, however, not least of which is how to tell the two kinds of jargon apart.  It is in fact rather rare to find instances of bad jargon that everyone concedes are bad jargon, with the possible exception of hoaxes like the Sokal affair, in which the physicist Alan Sokal wrote a jargon-laden pseudo-paper about post-modernism and quantum mechanics and got it published in a cultural studies journal.  Normally, when certain instances of jargon are identified as “bad” jargon, we also tend to find defenders who insist that it is not, and who claim that, to the contrary, those calling it bad jargon simply do not understand it.  This is a difficulty not unlike the one a wit described when asked to define bad taste.  “Bad taste,” he said, “is the garden gnome standing in my neighbor’s front lawn.”  When asked to define good taste, the wit continued, “Good taste is that plastic pink flamingo standing in my lawn.”


There are more difficulties with trying to distinguish good jargon from bad jargon, such as cases where good jargon becomes bad over time, or even cases where bad jargon becomes good.  Cases of the latter include Schopenhauer’s reading of a popular and apparently largely incorrect account of Indian philosophy, which he then absorbed into his own very insightful and influential philosophical project.  Georges Bataille’s misreading of Hegel and Jacques Lacan’s misreading of Freud also bore impressive fruit.  Finally, there’s the (probably apocryphal) story of the student of Italian who approached T.S. Eliot and began asking him about his peculiar and sometimes incorrect use of Italian in his poetry, until Eliot finally broke off the conversation with the admission, “Okay, you caught me.”  Cases such as these undermine the common belief that it is intent, or origins, which makes a given jargon good or bad.


The opposite can, of course, also happen.  Useful jargon may, over time, become bad and obfuscating.  We might then say that while the terms used in Phenomenology proper are difficult but informative, they were corrupted when Heidegger took them up in his Existential-Phenomenology, or we might say that Heidegger’s jargon is useful but later philosophers influenced by his philosophy such as Derrida and the post-structuralists corrupted it, or finally we might even say that Derrida got it right but his epigones in America were the ones who ultimately turned his philosophical insights into mere jargon.  This phenomenon is what I take Martin Fowler to be referring to in his short bliki defense of the terms Web 2.0 and Agile entitled Semantic Diffusion.  According to Fowler:



Semantic diffusion occurs when you have a word that is coined by a person or group, often with a pretty good definition, but then gets spread through the wider community in a way that weakens that definition. This weakening risks losing the definition entirely – and with it any usefulness to the term.


Thus Fowler takes up Kathy Sierra’s defense of Web 2.0 as jargon, recognizes some weaknesses in this explanation, and then fortifies the defense of the term with the further explanation that while the term may be problematic now, it was right in its origins, and pure in its intent.


Fowler here makes a remarkably Heideggerian observation.  Heidegger was somewhat obsessed with language and believed that language tends, over time, to hide and obfuscate meaning when it should rather shed light on things.  In this vein, Being and Time begins with the claim that we today no longer understand the meaning of Being, and that this forgetting is so thorough that we are no longer even aware of the absence of understanding, so that the question “What is Being?”, which should be the most important question for us, is for the most part ignored and overlooked.  To even begin understanding Being, then, we must first try to understand the meaning of the question of Being.  We must first come to the realization that there is a problem there in the first place which needs to be resolved.  Heidegger’s chosen solution involves the claim that while language conceals meaning, it is also, in its origins, able to reveal it, if we come to understand language correctly.  He gives an example with the term aletheia, which in Greek means truth.  Etymologically, aletheia means not-forgetting (the river Lethe is, in Greek mythology, the river of forgetting that the dead must cross before resting in Hades), and so truth is implicitly an unconcealment that recovers the meanings implicit in language.  The authentic meaning of jargon, Fowler similarly claims, can be arrived at if we remove the accretions caused by “semantic diffusion” and get back to the original intent.


But is this true?  Do apologetics for terms such as “Web 2.0” and “Agile” insisting that they are “jargon” ultimately succeed?  Do such attempts reveal the original intent implicit in the coining of these terms or do they simply conceal the original meanings even further?


My personal opinion is that jargon, by its nature, never really reveals but always, in one way or another, by condensing thought and providing a shorthand for ideas, conceals.  It can of course be useful, but it can never be instructive; rather, it gives us a sense that we understand things we do not understand, simply because we know how to use a given jargon.  At best, jargon can be used as an indicator that points to a complex of ideas shared by a given community.  At worst, it is used as shorthand for bad or incoherent ideas that never themselves get critical treatment, because the jargon takes the place of ideas and becomes mistaken for ideas.  This particularly seems to be the case with the defense of “Web 2.0” and “Agile” as “jargon”, as if people have a problem with the terms themselves rather than what they stand for.  “Jargon”, as a technical term, is not particularly useful.  It is to some extent already corrupt from the get-go.


One way around this might be simply to stop using the term “jargon”, whether bad or good, when discussing things like Web 2.0 and Agile.  While it is common in English to use Latin-derived terms for technical language and Anglo-Saxon words for common discourse, in this case we might be obliged to make the reverse movement as we look for an adequate replacement for “jargon”.


In 2005, the Princeton philosopher Harry Frankfurt published a popular pamphlet called On Bullshit that attempts to give a philosophical explanation of the term.  At first blush, the title may seem somewhat prejudicial, but I think that, as with jargon, if we get away from preconceived notions as to whether the term is good or bad, it will be useful as a way to get a fresh look at the term we are currently trying to evaluate, “Web 2.0”.  It can also be used most effectively if we do the opposite of what we did with “jargon”; there, “jargon” was first taken to appropriately describe “Web 2.0”, and only then was an attempt made to understand what jargon actually was.  In this case, I want first to try to understand what bullshit is, and then see if it applies to “Web 2.0”.


Frankfurt begins his analysis with a brief survey of the literature on bullshit, which includes Max Black’s study of “humbug” and Augustine of Hippo’s analysis of lying.  From these, he concludes that bullshit and lying are different things, and as a preliminary conclusion, that bullshit falls just short of lying.  Moreover, he points out that it is all pervasive in a way that lying could never be.



The realms of advertising and of public relations, and the nowadays closely related realm of politics, are replete with instances of bullshit so unmitigated that they can serve among the most indisputable and classic paradigms of the concept.


Not satisfied with this preliminary explanation, however, Frankfurt identifies further elements that characterize bullshit, since there are many things that can fall short of a lie and yet, perhaps, not rise to the level of bullshit.  He then identifies inauthenticity as the hallmark that distinguishes bullshit from lies, on the one hand, and simple errors of fact, on the other.



For the essence of bullshit is not that it is false but that it is phony. In order to appreciate this distinction, one must recognize that a fake or a phony need not be in any respect (apart from authenticity itself) inferior to the real thing. What is not genuine need not also be defective in some other way. It may be, after all, an exact copy. What is wrong with a counterfeit is not what it is like, but how it was made.




It is not what a bullshitter says, then, that marks him as a bullshitter, but rather his state of mind when he says it.  For Frankfurt, bullshit doesn’t really even belong on the same continuum with truth and falsehood, but is rather opposed to both.  Like the Third Host in Dante’s Inferno, it is indifference to the struggle that ultimately identifies and marks out the class of bullshitters.


Again, there are echoes of Heidegger here.  According to Heidegger, we are all characterized by “thrownness”, which is the essence of our “Being-In-The-World”.  In our thrownness, we do not recognize ourselves as ourselves, but rather as das Man, or as the they-self,



which we distinguish from the authentic Self – that is, from the Self which has been taken hold of in its own way [eigens ergriffenen]. As they-self, the particular Dasein has been dispersed into the ‘they’, and must first find itself.” And further “If Dasein discovers the world in its own way [eigens] and brings it close, if it discloses to itself its own authentic Being, then this discovery of the ‘world’ and this disclosure of Dasein are always accomplished as a clearing-away of concealments and obscurities, as a breaking up of the disguises with which Dasein bars its own way.


The main difference between Frankfurt’s and Heidegger’s analyses of authenticity, in this case, is that Frankfurt seems to take authenticity as normative, whereas Heidegger considers inauthenticity the zero-point state of man when we are first thrown into the world.


For now, however, the difference isn’t all that important.  What is important is Frankfurt’s conclusion about the sources of bullshit.  At the end of his essay, Frankfurt in effect writes that there are two kinds of bullshit, one of which is defensible and one of which is not.  The indefensible kind of bullshit is based on a subjectivist view of the world which denies truth and falsity altogether (and here I take Frankfurt to be making a not too veiled attack on the relativistic philosophical disciplines that are based on Heidegger’s work).  The defensible form of bullshit — I hesitate to call it good bullshit — is grounded in the character of our work lives, which force us to work with and represent information that is by its nature too complex for us to digest and promulgate accurately.  This, I take it, is the circumstance academic lecturers and others frequently find themselves in, as they stand behind the podium and are obliged to talk authoritatively about subjects they do not feel up to giving a thorough, much less an authentic, account of.



Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about. Thus the production of bullshit is stimulated whenever a person’s obligations or opportunities to speak about some topic are more excessive than his knowledge of the facts that are relevant to that topic.




This class of speech is the result of our inability to apply Wittgenstein’s dictum, “Whereof one cannot speak, thereof one must be silent.”  There are times when we are not in a position to remain silent, and so are obligated to bullshit.  Bullshit, in these cases, is a way of making the best of our situation.


Per the original arrangement, it is now time to put “bullshit” to the test and see if either cynical bullshit or benign bullshit can be ascribed to the term “Web 2.0”.  For better or worse, I am going to use Jeffrey Zeldman’s blog post on Web 2.0 (titled, confusingly enough, “Web 3.0”) as the main text for this analysis.  Zeldman is generally sympathetic to the ideas and phenomena “Web 2.0” is meant to encompass, but he also points out the aspects of the term that grate.  The most salient is the degree to which it smells like a sales pitch.





It soon appeared that “Web 2.0” was not only bigger than the Apocalypse but also more profitable. Profitable, that is, for investors like the speaker. Yet the new gold rush must not be confused with the dot-com bubble of the 1990s:

“Web 1.0 was not disruptive. You understand? Web 2.0 is totally disruptive. You know what XML is? You’ve heard about well-formedness? Okay. So anyway—”

And on it ran, like a dentist’s drill in the Gulag.


Zeldman associates Web 2.0 with marketing, which Frankfurt in turn associates with bullshit.  Frankfurt even goes so far as to identify sales and its related disciplines as “the most indisputable and classic paradigms of the concept.”  Moreover, the defense that Web 2.0 describes a real phenomenon, as Fowler insists and Zeldman grants, doesn’t make it not bullshit, since Frankfurt concedes that bullshit can just as well be true as false.  What is important is the authenticity or inauthenticity of the original claim, and the sense that something is a sales pitch is already an indication that something inauthentic is going on.  So “Web 2.0” certainly meets Frankfurt’s criteria for bullshit.


The more important question is what kind of bullshit is it?  Is it benign, or cynical?  According to Frankfurt’s distinction, again, the difference is whether the bullshit is grounded in the nature of one’s work or rather in some sort of defect of epistemic character.


Here the answer is not so simple, I think, since software has two strands, one going back to the hobbyist roots of programming, and the other to the monetizing potential of information technology.  Moreover, both strands struggle within the heart of the software engineering industry: the open source movement (often cited as one key aspect of Web 2.0) is emblematic of the purist strain, while the advertising prospects (with Google, often cited as a key exemplar of the Web 2.0 phenomenon, in the vanguard) are symbolic of the notion that a good idea isn’t enough — one also has to be able to sell one’s ideas.


Software programming, in its origins, is a discipline practiced by nerds.  In other words, it is esoteric knowledge, extremely powerful, practiced by a few and generally misunderstood by the majority of people.  As long as there is no desire to explain the discipline to outsiders, there is no problem with treating software programming as a hobby.  At some point, however, every nerd wants to be appreciated by people who are not his peers, and to accomplish this, he is forced to explain himself and ultimately to sell himself.  The turning point for this event is well documented: it occurred on February 3rd, 1976, when Bill Gates wrote an open letter to the hobbyist community stating that software had economic value and that it was time for people to start paying for it.


This was a moment of triumph for nerds everywhere, though this was not at first understood, and it still generates resentment to this day, because it irrevocably transformed the nature of software programming.  Once software was recognized as something of economic value, it also became clear that software concepts now had to be marketed.  The people who buy software are typically unable to distinguish good software from bad software, and so it becomes the responsibility of those who can to try to explain why their software is better in terms that are not, essentially, technical.  Instead, a hybrid, jargon-ridden set of terms had to be created in order to bridge the gap between software and the business appetite for software.  Software engineers, in turn, learned to see selling themselves, to consumers, to their managers, and finally to their peers, as part of the job of software engineering, though at the same time this forced obligation to sell themselves continues to be regarded with suspicion and resentment.  The hope held out to such people is that through software they will eventually be able to make enough money, as Bill Gates did, as Steve Jobs did, to finally give up the necessity of selling themselves and return to a pure hobbyist state of mind once again.  They in effect want to be both the virgin and the whore.  This is, of course, a pipe dream.


Consequently, trying to determine whether Web 2.0 is benign bullshit or cynical bullshit is difficult, since sales both is and is not an authentic aspect of the work of software engineering.  What seems to be the case is that Web 2.0 is a hybrid of benign and cynical bullshit.  This schizophrenic character is captured in the notion of Web 2.0 itself, which is at once a sales pitch and an umbrella term for a set of contemporary cultural phenomena.


Now that we know what bullshit is, and we know that Web 2.0 is bullshit, it is time to evaluate what Web 2.0 is.  In Tim O’Reilly’s original article that introduced the notion of Web 2.0, called appropriately What Is Web 2.0, O’Reilly suggests several key points that he sees as typical of the sorts of things that had been going on over the past year at companies such as Google, Flickr, YouTube and Wikipedia.  These observations include such slogans as “Harnessing Collective Intelligence”, “Data is the Next Intel Inside” and “End of the Software Release Cycle”.  But it is worth asking whether these really tell us what Web 2.0 is, or whether they are simply ad hoc attempts to give examples of what O’Reilly says is a common phenomenon.  When one asks for the meaning of a term such as Web 2.0, what one really wants is the original purpose behind coining the term: what is implicit in the term Web 2.0, as Heidegger would put it, yet at the same time concealed by the language typically used to explain it.


As Zeldman points out, one key (and I think the main key) to understanding Web 2.0 is that it isn’t Web 1.0.  The rise of the web was marked by a rise in bluff and marketing that created what we now look back on as the Internet Bubble.  The Internet Bubble, in turn, was a lot of marketing hype and a most remarkable stream of jargon used to build up a technology that, in the end, could not sustain the amount of expectation with which it was overloaded.  By 2005, the bad reputation that had accrued to the Web from these earlier mistakes had generated a cynicism about the new things coming along that really were worthwhile — such as the blogging phenomenon, Ajax, Wikipedia, Google, Flickr and YouTube.  In order to overcome that cynicism, O’Reilly coined a term that successfully distracted people from the earlier debacle and helped to make the Internet a place to invest money once again.  Tim O’Reilly, even if his term is bullshit, as we have already demonstrated above, ultimately has done us all a service by clearing out all the previous bullshit.  In a very Heideggerian manner, he made a clearing [Lichtung] for the truth to appear.  He created, much as Heidegger attempted to do for Being, a conceptual space in which new ideas about the Internet could make themselves apparent.


Or perhaps my analogy is overwrought.  In any case, the question still remains as to what one does with terms that have outlived their usefulness.  In his introduction to Existentialism and Human Emotions, Jean-Paul Sartre describes the status the term “existentialism” had achieved by 1957.



Someone recently told me of a lady who, when she let slip a vulgar word in a moment of irritation, excused herself by saying, “I guess I’m becoming an existentialist.”



Most people who use the word would be rather embarrassed if they had to explain it, since, now that the word is all the rage, even the work of a musician or painter is being called existentialist.  A gossip columnist in Clartés signs himself The Existentialist, so that by this time the word has been so stretched and has taken on so broad a meaning that it no longer means anything at all.




Sartre spent most of the rest of his philosophical career refining and defending the term “existentialism,” until finally it was superseded by post-structuralism in France.  The term enjoyed a second life in America, until post-structuralism finally made the Atlantic crossing and superseded it there as well, only in turn to be first treated with skepticism, then with hostility, and finally as mere jargon.  It is only over time that an intellectual clearing can be made to re-examine these concepts.  In the meantime, taking a cue from Wittgenstein, we are obliged to remain silent about them.

Long Dark Night of the Compiler


In his book on the development of the C++ language, The Design and Evolution of C++, Bjarne Stroustrup says that in creating C++ he was influenced by the writings of Søren Kierkegaard.  He goes into some detail about it in this recent interview:





A lot of thinking about software development is focused on the group, the team, the company. This is often done to the point where the individual is completely submerged in corporate “culture” with no outlet for unique talents and skills. Corporate practices can be directly hostile to individuals with exceptional skills and initiative in technical matters. I consider such management of technical people cruel and wasteful. Kierkegaard was a strong proponent for the individual against “the crowd” and has some serious discussion of the importance of aesthetics and ethical behavior. I couldn’t point to a specific language feature and say, “See, there’s the influence of the nineteenth-century philosopher,” but he is one of the roots of my reluctance to eliminate “expert level” features, to abolish “misuses,” and to limit features to support only uses that I know to be useful. I’m not particularly fond of Kierkegaard’s religious philosophy, though.




Stroustrup is likely referring to philosophical observations such as this:





Truth always rests with the minority, and the minority is always stronger than the majority, because the minority is generally formed by those who really have an opinion, while the strength of a majority is illusory, formed by the gangs who have no opinion–and who, therefore, in the next instant (when it is evident that the minority is the stronger) assume its opinion . . . while Truth again reverts to a new minority.

— Søren Kierkegaard



Coincidentally, Kierkegaard and Pascal are often cited as the fathers of modern existentialism, and where Kierkegaard appears to have influenced the development of C++, Pascal’s name lives on in the Pascal programming language as well as in Pascal case, the identifier-capitalization convention used as a stylistic device in most modern languages.  The Pascal language, in turn, was contemporary with the C language, which was the syntactic precursor to C++.


So just as the Catholic Church holds that guardian angels guide and watch over individuals, cities and nations, might it not also be the case that specific philosophers watch over different programming languages?  Perhaps a pragmatic philosopher like C. S. Peirce would watch over Visual Basic.  A philosopher fond of architectonics, like Kant, would watch over Eiffel.  John Dewey could watch over Java, while Hegel, naturally, would watch over Ruby.

The Topsy-Turvy World: Spy Versus Spy


Ian Fleming’s spy novels are often compared to John Le Carré’s, and the comparisons often find James Bond wanting.  In contrast to the emotional richness of Le Carré’s internally conflicted heroes, Bond is often presented by his critics as a cardboard cutout with an overly simplistic view of the world.  Bond fights for crown and country.  Alec Leamas and George Smiley, on the other hand, realize that things are much more complicated than that.  Fleming presented a ’50s version of the world in which we had all just left off making the world safe for democracy, and still naively saw the Cold War in black and white terms.  Le Carré, on the other hand, by drawing attention to the moral ambiguity at the heart of our conflict with the Soviets, turns James Bond on his head.


Or does he?  Written in 1953, ten years before The Spy Who Came In From The Cold, Ian Fleming’s first Bond novel Casino Royale includes this surprising piece of introspection from 007:



“Well, in the last few years I’ve killed two villains.  The first was in New York — a Japanese cipher expert cracking our codes on the thirty-sixth floor of the RCA building in the Rockefeller centre…. It was a pretty sound job.  Nice and clean too.  Three hundred yards away.  No personal contact.  The next time in Stockholm wasn’t so pretty.  I had to kill a Norwegian who was doubling against us for the Germans…. For various reasons it had to be an absolutely silent job.  I chose the bedroom of his flat and a knife.  And, well, he just didn’t die very quickly.


“For those two jobs I was awarded a Double O number in the Service.  Felt pretty clever and got a reputation for being good and tough.  A Double O number in our Service means you’ve had to kill a chap in cold blood in the course of some job.


“Now,” he looked up again at Mathis, “that’s all very fine.  The hero kills two villains, but when the hero Le Chiffre starts to kill the villain Bond and the villain Bond knows he isn’t a villain at all, you see the other side of the medal.  The villains and heroes get all mixed up.


“Of course,” he added, as Mathis started to expostulate, “patriotism comes along and makes it seem fairly all right, but this country-right-or-wrong business is getting a little out-of-date.  Today we are fighting Communism.  Okay.  If I’d been alive fifty years ago, the brand of Conservatism we have today would have been damn near called Communism and we should have been told to go and fight that.  History is moving pretty quickly these days and the heroes and villains keep on changing parts.”


Mathis stared at him aghast.  Then he tapped his head and put a calming hand on Bond’s arm.


“You mean to say that this precious Le Chiffre who did his best to turn you into a eunuch doesn’t qualify as a villain?” he asked…. “And what about SMERSH?  I can tell you I don’t like the idea of these chaps running around France killing anyone they feel has been a traitor to their precious political system.  You’re a bloody anarchist.”


He threw his arms in the air and let them fall helplessly to his sides.


Bond laughed.


“All right,” he said.  “Take our friend Le Chiffre.  It’s simple enough to say he was an evil man, at least it’s simple enough for me because he did evil things to me.  If he was here now, I wouldn’t hesitate to kill him, but out of personal revenge and not, I’m afraid, for some high moral reason or for the sake of my country.”


He looked up at Mathis to see how bored he was getting with these introspective refinements of what, to Mathis, was a simple question of duty.


Mathis smiled back at him.





Le Carré attempts to preserve us from full surrender to the topsy-turvy world by making it asymptotic to ourselves.  It is a point of evil, or of the transvaluation of all morals, that his heroes are always approaching but also always stay just to this side of.  In this way, the Cold War becomes a metaphor for life itself.


Fleming’s hero actually goes beyond this point, in the very first 007 novel, and comes out the other side.  The lack of moral ambiguity for which Bond is so frequently criticized is not due to the fact that he doesn’t see it.  Rather, he sees it and surpasses it.


In order to keep Bond out of this topsy-turvy world, where good is evil and evil good, Fleming is obliged to provide his hero with a series of sufficiently evil villains.  First there was SMERSH, the Soviet counterintelligence and murder agency whose job it was to keep the people of the Eastern Bloc in line through intimidation and fear.  After a time, this was in turn replaced by SPECTRE, a world-wide terrorist organization bent on world domination (perhaps an example of art anticipating life).


Le Carré similarly requires the latticework of the Cold War to sustain his aesthetic-moral structure, and it is telling that following the collapse of the Soviet empire, his novels have become simpler David versus Goliath narratives with clear good guys (whistleblowers) and clear bad guys (international corporations) — in a sense, more like the traditional Bond narrative.





“So” continued Bond, warming to his argument, “Le Chiffre was serving a wonderful purpose, a really vital purpose, perhaps the best and the highest purpose of all.  By his evil existence, which foolishly I have helped to destroy, he was creating a norm of badness by which, and by which alone, an opposite norm of goodness could exist.  We were privileged, in our short knowledge of him, to see and estimate his wickedness and we emerge from the acquaintanceship better and more virtuous men.”


“Bravo,” said Mathis. “I’m proud of you.  You ought to be tortured every day…. That was enjoyable, my dear James.  You really ought to go on the halls.  Now about that little problem of yours, this business of not knowing good men from bad men and villains from heroes, and so forth.  It is, of course, a difficult problem in the abstract.  The secret lies in personal experience, whether you’re a Chinaman or an Englishman.”


He paused at the door.


“You admit that Le Chiffre did you personal evil and that you would kill him if he appeared in front of you now?


“Well, when you get back to London you will find there are other Le Chiffres seeking to destroy you and your friends and your country.  M will tell you about them.  And now that you have seen a really evil man, you will know how evil they can be and you will go after them to destroy them in order to protect yourself and the people you love.  You won’t wait to argue about it.  You know what they look like now and what they can do to people.  You may be a bit more choosy about the jobs you take on.  You may want to be certain that the target really is black, but there are plenty of really black targets around.  There’s still plenty for you to do.  And you’ll do it….”


Mathis opened the door and stopped on the threshold.


“Surround yourself with human beings, my dear James.  They are easier to fight for than principles.”

Converting to ASP.NET Ajax Beta 2 (A Guide for the Perplexed)


There are a few good guides already on the internet that provide an overview of what is required to convert your Atlas CTP projects to Ajax Extensions.  This guide will probably not add anything new, but will hopefully consolidate some of the advice already provided, as well as offer a few pointers alluded to by others but not explained.  In other words, this is the guide I wish I had before I began my own conversion project.


1. The first step is to download and install the Ajax Extensions beta 2 and the Ajax Futures (value-added) November CTP.  One problem I have heard of occurred when an associate somehow failed to remove his beta 1 dlls, and ran into various mysterious errors due to using the wrong version.


2. Create a new Ajax Extensions project. This should provide you with the correct library references and the correct web configuration file.  Here are the minimum configuration settings needed for an ASP.Net Ajax website to work:



<configuration>
     <system.web>
          <pages>
               <controls>
                    <add tagPrefix="asp" namespace="Microsoft.Web.UI" assembly="Microsoft.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
                    <add tagPrefix="asp" namespace="Microsoft.Web.UI.Controls" assembly="Microsoft.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
                    <add tagPrefix="asp" namespace="Microsoft.Web.Preview.UI" assembly="Microsoft.Web.Preview"/>
               </controls>
          </pages>

          <compilation debug="true">
               <assemblies>
                    <add assembly="Microsoft.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
               </assemblies>
          </compilation>
     </system.web>
</configuration>



You also need to make sure that you have a reference to the Microsoft.Web.Extensions dll, as well as to the Microsoft.Web.Preview dll if you intend to use features such as drag and drop or glitz.  Both of these dlls should be registered in the GAC, although they weren't for me.  To make them available, I had to add a new registry key, HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\AssemblyFolders\ASP.NET AJAX 1.0.61025, with a default value indicating the location of the ASP.NET Ajax dlls: c:\Program Files\Microsoft ASP.NET\ASP.NET 2.0 AJAX Extensions\v1.0.61025
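For convenience, the same key can be written as a .reg file and imported in one step.  This is just a sketch of the key described above, with the same path and value as in the text; back up your registry before importing anything:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\AssemblyFolders\ASP.NET AJAX 1.0.61025]
@="c:\\Program Files\\Microsoft ASP.NET\\ASP.NET 2.0 AJAX Extensions\\v1.0.61025"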


On a side note, there currently seems to be some ambiguity over whether the Microsoft.Web.Extensions dll can simply be placed in your bin folder rather than in the GAC.  It seems to work, even though the official documentation says it should not.




3. Wherever you used to use the shortcut “$” as shorthand for “document.getElementById”, you will now need to use “$get”.  I usually need to go through my Atlas code three or four times before I catch every instance of this and make the appropriate replacement.  (A consolidated sketch of steps 3 through 5 follows step 5.)




4. Sys.Application.findControl(“myControl”) is now simplified to $find(“myControl”).




5. Wherever you used to use this.control.element, you will now use this.get_element().
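Taken together, steps 3 through 5 are a handful of one-line renames.  Here is a rough before-and-after sketch; the element and control ids are made up, and the function is assumed to run as a behavior method so that this.get_element() exists:

function refreshReferences() {
     var panel = $get('statusPanel');     // was: $('statusPanel')
     var ctrl  = $find('myControl');      // was: Sys.Application.findControl('myControl')
     var el    = this.get_element();      // was: this.control.element
}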




6. The “atlas:” namespace has been replaced with the “asp:” namespace, so go through your code and make the appropriate replacements.  For example,



<atlas:ScriptManager ID="ScriptManager1" runat="server"/>


is now



<asp:ScriptManager ID="ScriptManager1" runat="server"/>




7. Script references have changed.  The ScriptName attribute is now just the Name attribute.  The files that used to make up the optional Ajax scripts are now broken out differently, so if you need to use the dragdrop script file or the glitz script file, you will now also need to include the PreviewScript javascript file.  This:





<atlas:ScriptManager ID="ScriptManager1" runat="server">
     <Scripts>
          <atlas:ScriptReference ScriptName="AtlasUIDragDrop" />
          <atlas:ScriptReference Path="scriptLibrary/DropZoneBehavior.js" />
     </Scripts>
</atlas:ScriptManager>


is now this:



<asp:ScriptManager ID="ScriptManager1" runat="server">
     <Scripts>
          <asp:ScriptReference Assembly="Microsoft.Web.Preview" Name="Microsoft.Web.Resources.ScriptLibrary.PreviewScript.js" />
          <asp:ScriptReference Assembly="Microsoft.Web.Preview" Name="Microsoft.Web.Resources.ScriptLibrary.PreviewDragDrop.js" />
          <asp:ScriptReference Path="scriptLibrary/DropZoneBehavior.js" />
     </Scripts>
</asp:ScriptManager>




8. Namespaces have changed, and you may need to hunt around to find your classes.  For instance, Sys.UI.IDragSource is now Sys.Preview.UI.IDragSource, and for the most part you can probably get away with replacing all your Sys.UI namespaces with Sys.Preview.UI.  On the other hand, Sys.UI.Behavior has stayed where it is, so this will not always work.  The method setLocation has also shifted namespaces: it used to be found in Sys.UI, and it is now in Sys.UI.DomElement.
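A quick sketch of these moves (the element id is made up):

var el = $get('draggableDiv');
Sys.UI.DomElement.setLocation(el, 100, 100);     // was: Sys.UI.setLocation(el, 100, 100)
var dragSource = Sys.Preview.UI.IDragSource;     // was: Sys.UI.IDragSource
// Sys.UI.Behavior, by contrast, keeps its old namespace.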




9. Xml Scripting has changed.  Xml scripting, which allows you to use javascript in a declarative manner, is now part of the Value Added CTP.  As I understand it, the Value Added CTP, also known as Ajax Futures, includes lots of stuff originally included in the Atlas CTP but deemed to be of lower priority than the core Ajax Extensions features.  In order to meet a tough deadline, these have been set aside for now.  The Ajax Toolkit, in turn, is heavily dependent on these value-added features, since the toolkit components tend to leverage the common javascript libraries such as Glitz much more than the specifically Ajax features provided with the core release.  The syntax for adding custom behaviors using Xml Scripting has changed, while the syntax for built-in behaviors is the same.  An Xml Scripting region used to look like this:





<script type="text/xml-script">
   <page xmlns:script="http://schemas.microsoft.com/xml-script/2005">
      <components>
         <control id="dropZone">
           <behaviors>
               <DropZoneBehavior/>
           </behaviors>
         </control>
         <control id="draggableDiv">
           <behaviors>
             <floatingBehavior handle="handleBar" />
           </behaviors>
         </control>
      </components>
   </page>
</script>


Now it looks like this:


<script type="text/xml-script">
   <page xmlns:script="http://schemas.microsoft.com/xml-script/2005"
         xmlns:fooNamespace="Custom.UI">
      <components>
         <control id="dropZone">
           <behaviors>
             <fooNamespace:DropZoneBehavior/>
           </behaviors>
         </control>
         <control id="draggableDiv">
           <behaviors>
             <floatingBehavior handle="handleBar" />
           </behaviors>
         </control>
      </components>
   </page>
</script>


Note: The AspNet AJAX CTP to Beta Whitepaper has a slightly different syntax, but this appears to be a typo, and the one I have provided above is the correct grammar.


10.  Adding behaviors using javascript has changed.  The biggest thing is that you no longer explicitly have to convert a DOM object to an ASP.Net Ajax object, as this is now done beneath the covers.  The get_behaviors().add(…) method has also been retired.  For my particular conversion, this code:



function addFloatingBehavior(ctrl, ctrlHandle){
     var floatingBehavior = new Sys.UI.FloatingBehavior();
     floatingBehavior.set_handle(ctrlHandle);
     var dragItem = new Sys.UI.Control(ctrl);
     dragItem.get_behaviors().add(floatingBehavior);
     floatingBehavior.initialize();
}



got shortened to this:



function addFloatingBehavior(ctrl, ctrlHandle){
     var floatingBehavior = new Sys.Preview.UI.FloatingBehavior(ctrl);
     floatingBehavior.set_handle(ctrlHandle);
     floatingBehavior.initialize();
}


This can in turn be shortened even further with the $create super function: 



function addFloatingBehavior(ctrl, ctrlHandle){
     $create(Sys.Preview.UI.FloatingBehavior, {'handle': ctrlHandle}, null, null, ctrl);
}




11.  Closures and Prototypes:


You ought to convert javascript classes written as closures to classes written as prototypes.  Basically, instead of having private members, properties and methods all defined in the same place (a style known, it turns out, as a “closure”), a class is now separated into an initial constructor definition that declares the members, and a definition of the prototype that includes the various methods and properties, which are in turn rewritten using a slightly different grammar.  Here is a reasonably good overview of what the prototype object is used for.  Bertrand LeRoy’s two posts on closures and prototypes are also a good resource.
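Before the mechanical steps below, here is a minimal side-by-side sketch of the two styles; the namespace and class are invented for illustration:

var Demo = {};   // hypothetical namespace

// Closure style: the private member and the method both live inside the constructor.
Demo.Counter = function() {
     var count = 0;                              // private member
     this.increment = function() { count++; };
};

// Prototype style: members are declared in the constructor,
// while methods are defined once on the shared prototype.
Demo.Counter = function() {
     this._count = 0;
};
Demo.Counter.prototype = {
     increment: function() { this._count++; }
};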


12. The following steps let you mechanically rewrite a closure as a prototype.  First, change all your private variable declarations into public member declarations.  For instance, the following declaration:



var i = 0;


should now be:



this.i = 0;




Consolidate all of your members at the top and then place a closing brace after them to close your class definition.


13.  Start the first line of code to define your prototype.  For instance, in my DropZoneBehavior class, I replaced this:



Custom.UI.DropZoneBehavior = function() {
     Custom.UI.DropZoneBehavior.initializeBase(this);
     initialize: function(){
          Custom.UI.DropZoneBehavior.callBaseMethod(this, 'initialize');
          // Register ourselves as a drop target.
          Sys.Preview.UI.DragDropManager.registerDropTarget(this);
          }
}


with this:



Custom.UI.DropZoneBehavior = function() {
     Custom.UI.DropZoneBehavior.initializeBase(this);
}

Custom.UI.DropZoneBehavior.prototype = {
     initialize: function(){
          Custom.UI.DropZoneBehavior.callBaseMethod(this, 'initialize');
          // Register ourselves as a drop target.
          Sys.Preview.UI.DragDropManager.registerDropTarget(this);
          }
}


simply by adding these two lines:



}



Custom.UI.DropZoneBehavior.prototype = {




14. Throughout the rest of the prototype definition, refer to your variables as members by adding this. in front of all of them.
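For example, the counter member declared in step 12 would be referenced like this inside a prototype method (the method name is made up):

doSomething: function() {
     this.i = this.i + 1;     // was simply "i = i + 1" in the closure version
},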




15. Interfaces have changed.  The behavior class, which did not previously take a parameter, now does:



Custom.UI.FloatingBehavior = function(value) {
    Custom.UI.FloatingBehavior.initializeBase(this,[value]);

}



16. Properties and methods are written differently in the prototype definition than they were in closures.  Wherever you have a method or property, you should rewrite it by getting rid of the preceding “this.” and replacing the equals sign in your method definition with a colon.  Finally, a comma must be inserted after each method or property definition except the last.  For example, this:



this.initialize = function() {
    Custom.UI.FloatingBehavior.callBaseMethod(this, 'initialize');
}


becomes this:





initialize: function() {
     Custom.UI.FloatingBehavior.callBaseMethod(this, 'initialize');
},




17. Type descriptors are gone.  This means you no longer need the getDescriptor method or the Sys.TypeDescriptor.addType call to register your Type Descriptor.  There is an alternate grammar for writing type descriptors using JSON, but my code worked fine without it.  I think it is meant for writing extenders.




18. Hooking up event handlers to DOM events has been simplified.  You used to need to define a delegate for the DOM event, and then use the attachEvent and detachEvent methods to link the delegate with your handler function.  In the beta 2, all of this is encapsulated, and you only need two super functions, $addHandler and $removeHandler.  You should probably place your $addHandler call in your initialize method, and your $removeHandler call in your dispose method.  The syntax for $addHandler will typically look like this:


$addHandler(this.get_element(), 'mousedown', YourMouseDownHandlerFunction);

$removeHandler takes the same parameters.  One thing worth noting is that, whereas the reference to the DOM event used to use the IE-specific event name, in this case ‘onmousedown’, the designers of ASP.NET Ajax have now opted for the naming convention adopted by Firefox and Safari.  A sketch of both calls in context follows.
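For instance, here is a minimal sketch of the hookup in context, using the placeholder handler name from above (where exactly you attach and detach will depend on your class, so treat this as an illustration rather than required code):

initialize: function() {
    Custom.UI.FloatingBehavior.callBaseMethod(this, 'initialize');
    // Attach the handler once the behavior's element is available.
    $addHandler(this.get_element(), 'mousedown', YourMouseDownHandlerFunction);
},
dispose: function() {
    // Detach the same handler before the behavior is torn down.
    $removeHandler(this.get_element(), 'mousedown', YourMouseDownHandlerFunction);
    Custom.UI.FloatingBehavior.callBaseMethod(this, 'dispose');
},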


 


19. The last touch: add the following lines as the last bit of code in your script file:



if (typeof(Sys) !== "undefined")
    Sys.Application.notifyScriptLoaded();


You basically just need to do this.  It may even be one of the rare instances in programming where you don’t need to know why you are doing it since, as far as I know, you will never encounter a situation where you won’t put it in your script.  My vague understanding of the reason, though, is that the ASP.NET Ajax page lifecycle needs to know when scripts are loaded; both IE and Firefox fire events when a page has completed loading.  Safari, however, does not.  notifyScriptLoaded() provides a common way to let all browsers know when scripts have been loaded and it is safe to work with the included classes and functions.


 


 


Bibliography (of sorts):


Here are the good guides I referred to at the top of this post: Bertrand LeRoy‘s post on javascript prototypes, Eilon Lipton‘s blog, the comments on Scott Guthrie‘s blog, Sean Burke‘s migration guide, and Miljan Braticevic‘s account of upgrading the Component Art tools.  The most comprehensive guide to using Ajax Extensions beta 2 is actually the upgrade guide provided by the Microsoft Ajax Team: the AspNet AJAX CTP to Beta Whitepaper.  I used the official online documentation, http://ajax.asp.net/docs/Default.aspx, mainly to figure out which namespaces to use and where the various functions I needed had been moved.  Finally, the search functionality on the ASP.NET Ajax forums helped me get over many minor difficulties.

V. ASP.NET Ajax Imperative Dropzones


 


To create dropzones using JavaScript instead of declarative script, just add the following JavaScript function to initialize your dropzone element with the custom dropzone behavior:


function addDropZoneBehavior(ctrl){
    $create(Custom.UI.DropZoneBehavior, {}, null, null, ctrl);
}


To finish hooking everything up, call this addDropZoneBehavior function from the ASP.NET Ajax pageLoad() method, as you did in earlier examples for the addFloatingBehavior function.  This will attach the proper behaviors to their respective html elements and replicate the drag and dropzone functionality you created above using declarative markup.  If you want to make this work dynamically, just add the createDraggableDiv() function you already wrote for the previous dynamic example.  As a point of reference, here is the complete code for creating programmatic dropzones:



<%@ Page Language="C#" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head id="Head1" runat="server">
<title>Imperative Drop Targets</title>
<script type="text/javascript">
    function addFloatingBehavior(ctrl, ctrlHandle){
        $create(Sys.Preview.UI.FloatingBehavior, {'handle': ctrlHandle}, null, null, ctrl);
    }
    function addDropZoneBehavior(ctrl){
        $create(Custom.UI.DropZoneBehavior, {}, null, null, ctrl);
    }
    function pageLoad(){
        addDropZoneBehavior($get('dropZone'));
        addFloatingBehavior($get('draggableDiv'), $get('handleBar'));
    }
</script>
</head>
<body>
<form id="form1" runat="server">
<asp:ScriptManager ID="ScriptManager1" runat="server">
    <Scripts>
        <asp:ScriptReference Name="Microsoft.Web.Resources.ScriptLibrary.PreviewScript" />
        <asp:ScriptReference Name="Microsoft.Web.Resources.ScriptLibrary.PreviewDragDrop" />
        <asp:ScriptReference Path="scriptLibrary/DropZoneBehavior.js" />
    </Scripts>
</asp:ScriptManager>
<h2>Imperative Drop Targets with javascript</h2>
<div style="background-color:Red;height:200px;width:200px;">
    <div id="draggableDiv" style="height:100px;width:100px;background-color:Blue;">
        <div id="handleBar" style="height:20px;width:auto;background-color:Green;">
        </div>
    </div>
</div>
<div id="dropZone" style="background-color:cornflowerblue;height:200px;width:200px;">Drop Zone</div>
</form>
</body>
</html>

 

Conclusion


Besides the dropzone behavior, you may also want to write your own floating behavior. For instance, by default, elements decorated with the floating behavior simply stay where you drop them. You may want to extend this so that your floating div will snap back to its original location when you drop it outside of a drop zone. Additionally, you may want to change the way the dragged element looks while you are dragging it, either by making it transparent, changing its color, or replacing the drag image altogether. All this can be accomplished by creating a behavior that implements the IDragSource interface, in the same way you created a custom class that implements the IDropTarget interface. A rough skeleton of where such a behavior might start follows.
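Here is a minimal sketch of such a snap-back behavior (the class name, the snap-back logic, and the ‘HTML’ data type string are all hypothetical; the member list comes from the IDragSource prototype shown in section IV below, and the DragMode enumeration is assumed to be the one exposed by PreviewDragDrop):

Custom.UI.SnapBackBehavior = function() {
    Custom.UI.SnapBackBehavior.initializeBase(this);
}

Custom.UI.SnapBackBehavior.prototype = {
    initialize: function() {
        Custom.UI.SnapBackBehavior.callBaseMethod(this, 'initialize');
    },
    // IDragSource members (see the interface definition in section IV).
    get_dragDataType: function() {
        return 'HTML';    // hypothetical data type string
    },
    getDragData: function(context) {
        return this.get_element();
    },
    get_dragMode: function() {
        return Sys.Preview.UI.DragMode.Move;    // assumed enumeration
    },
    onDragStart: function() {
        // Remember the starting position so the element can snap back later.
        this._startLeft = this.get_element().style.left;
        this._startTop = this.get_element().style.top;
    },
    onDrag: function() {
    },
    onDragEnd: function(canceled) {
        if (canceled) {
            // Hypothetical snap-back: restore the original position.
            this.get_element().style.left = this._startLeft;
            this.get_element().style.top = this._startTop;
        }
    }
}
Custom.UI.SnapBackBehavior.registerClass('Custom.UI.SnapBackBehavior', Sys.UI.Behavior, Sys.Preview.UI.IDragSource, Sys.IDisposable);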


This tutorial is for the most part a straight translation of the original Atlas tutorial that I wrote against the April CTP.  Even though many of the concepts behind Atlas are still retained in Ajax Extensions, some have changed by a turning of the screw, so that what was once fitting and accurate in the original tutorial is no longer quite so.  For instance, whereas in the original Atlas tutorial I could talk about Xml Scripting and the rest of the ASP.NET Ajax functionality as one technology, they are now two separate technologies with different levels of support and interest from Microsoft.  There are more subtle differences that, I think, make the current version of the tutorial somewhat dated, as if I am saying everything with a slight accent; in other words, while I stand by the accuracy of this tutorial, I think it has lost some of its original elegance in the translation.  I believe the tutorial will still be useful for those trying to get started with Microsoft’s Ajax implementation, though its chief utility, at this point, will probably be for people who were used to the Atlas way of doing things and need a point of reference to see how the semantics of the technology have changed. I hope the samples will help you over some of your growing pains, as writing them has helped me with mine.

IV. ASP.NET Ajax Declarative Dropzones


 



Being able to drag html elements around a page and have them stay where you leave them is visually interesting. To make this behavior truly useful, though, an event should be raised when the drop occurs.  Furthermore, the event that is raised should depend on where the drop occurs.  In other words, there needs to be a behavior that can be added to a given html element to turn it into a “dropzone” or “drop target”, the same way that the floating behavior can be added to an html div tag to turn it into a drag and drop element.

In the following examples, I will show how Atlas supports the concept of dropzones.  In its current state, Atlas does not support an out-of-the-box behavior for creating dropzone elements in quite the same way it does for floating elements.  It does, however, implement behaviors for a DragDropList element and a DraggableListItem element which, when used together, allow you to create lists that can be reordered by dragging and dropping.  If you would like to explore this functionality some more, there are several good examples of using the DragDropList behavior on the web, for instance, Introduction to Drag And Drop with Atlas.

The main disadvantage of the DragDropList behavior is that it only works with items that have been decorated with the DraggableListItem behavior. The functionality this puts at your disposal is fairly specific. To get the sort of open-ended dropzone functionality I described above, one that will also work with the predefined floating behavior, you will need to write your own dropzone behavior class in JavaScript. Fortunately, this is not all that hard.


Atlas adds several OOP extensions to JavaScript in order to make it more powerful, extensions such as namespaces, abstract classes, and interfaces. You will take advantage of these in coding up your own dropzone behavior. If you peer behind the curtain and look at the source code in the PreviewDragDrop.js file (contained in the directory C:\Program Files\Microsoft ASP.NET\ASP.NET 2.0 AJAX Extensions\v1.0.61025\ScriptLibrary\Debug), you will find several interfaces defined there, including Sys.Preview.UI.IDragSource and Sys.Preview.UI.IDropTarget. In fact, both the FloatingBehavior class and the DraggableListItem class implement the Sys.Preview.UI.IDragSource interface, while Sys.Preview.UI.IDropTarget is implemented by the DragDropList class. The code for these two interfaces looks like this:



Sys.Preview.UI.IDragSource = function Sys$Preview$UI$IDragSource() {
}

Sys.Preview.UI.IDragSource.prototype = {
    get_dragDataType: Sys$Preview$UI$IDragSource$get_dragDataType,
    getDragData: Sys$Preview$UI$IDragSource$getDragData,
    get_dragMode: Sys$Preview$UI$IDragSource$get_dragMode,
    onDragStart: Sys$Preview$UI$IDragSource$onDragStart,
    onDrag: Sys$Preview$UI$IDragSource$onDrag,
    onDragEnd: Sys$Preview$UI$IDragSource$onDragEnd
}
Sys.Preview.UI.IDragSource.registerInterface('Sys.Preview.UI.IDragSource');

Sys.Preview.UI.IDropTarget = function Sys$Preview$UI$IDropTarget() {
}

Sys.Preview.UI.IDropTarget.prototype = {
    get_dropTargetElement: Sys$Preview$UI$IDropTarget$get_dropTargetElement,
    canDrop: Sys$Preview$UI$IDropTarget$canDrop,
    drop: Sys$Preview$UI$IDropTarget$drop,
    onDragEnterTarget: Sys$Preview$UI$IDropTarget$onDragEnterTarget,
    onDragLeaveTarget: Sys$Preview$UI$IDropTarget$onDragLeaveTarget,
    onDragInTarget: Sys$Preview$UI$IDropTarget$onDragInTarget
}
Sys.Preview.UI.IDropTarget.registerInterface('Sys.Preview.UI.IDropTarget');


Why do you need to implement these interfaces instead of simply writing brand new classes to support drag, drop, and dropzones? The secret is that, behind the scenes, a third class, called the DragDropManager, is actually coordinating the interactions between the draggable elements and the dropzone elements, and it only knows how to work with classes that implement the IDragSource or IDropTarget interfaces. The DragDropManager class registers which dropzones are legitimate targets for each draggable element, handles the mouse-over events to determine when a dropzone has a draggable element over it, and takes care of a hundred other things you do not want to do yourself. In fact, it does its job so well that the dropzone behavior you are about to write is pretty minimal. First, create a new JavaScript file called DropZoneBehavior.js. I placed my JavaScript file under a subdirectory called scriptLibrary, but this is not necessary in order to make the dropzone behavior work. Next, copy the following code into your file:



Type.registerNamespace('Custom.UI');

Custom.UI.DropZoneBehavior = function(value) {
    Custom.UI.DropZoneBehavior.initializeBase(this, [value]);
}

Custom.UI.DropZoneBehavior.prototype = {
    initialize: function() {
        Custom.UI.DropZoneBehavior.callBaseMethod(this, 'initialize');
        // Register ourselves as a drop target.
        Sys.Preview.UI.DragDropManager.registerDropTarget(this);
    },
    dispose: function() {
        Custom.UI.DropZoneBehavior.callBaseMethod(this, 'dispose');
    },
    getDescriptor: function() {
        var td = Custom.UI.DropZoneBehavior.callBaseMethod(this, 'getDescriptor');
        return td;
    },
    // IDropTarget members.
    get_dropTargetElement: function() {
        return this.get_element();
    },
    drop: function(dragMode, type, data) {
        alert('dropped');
    },
    canDrop: function(dragMode, dataType) {
        return true;
    },
    onDragEnterTarget: function(dragMode, type, data) {
    },
    onDragLeaveTarget: function(dragMode, type, data) {
    },
    onDragInTarget: function(dragMode, type, data) {
    }
}
Custom.UI.DropZoneBehavior.registerClass('Custom.UI.DropZoneBehavior', Sys.UI.Behavior, Sys.Preview.UI.IDragSource, Sys.Preview.UI.IDropTarget, Sys.IDisposable);
if (typeof(Sys) != "undefined") {Sys.Application.notifyScriptLoaded();}



I need to explain this class a bit backwards.  The first thing worth noticing is the second-to-last line, which begins “Custom.UI.DropZoneBehavior.registerClass”.  This is where the DropZoneBehavior class defined above gets registered with Ajax Extensions.  The first parameter of the registerClass method takes the name of the class; the second parameter takes the base class; the remaining parameters take the interfaces that are implemented by the new class.  The line following this raises a custom event indicating that the script has completed loading (this is needed in order to support Safari, which does not do this natively).  Now back to the top: the “Type.registerNamespace” method allows you to register your custom namespace.  The next line declares the new class using an anonymous function syntax.  This is a way of writing JavaScript that I am not particularly familiar with, but it is very important for making JavaScript object oriented, and it is essential for designing Atlas behaviors.  Within the prototype, the methods initialize, dispose, and getDescriptor are simply standard methods used for all behavior classes, and in this implementation, all you need to do is call the base method (that is, the method of the base class that you specify in the second-to-last line of this code sample).  The only special thing you do is register the drop target with the Sys.Preview.UI.DragDropManager in the initialize method.  This is the act that makes much of the drag and drop magic happen.

Next, you implement the IDropTarget methods.  In this example, only two of them, “canDrop” and “drop”, do anything substantive.  For “canDrop”, you are just going to return true.  More interesting logic can be placed here to determine which floating div tags can actually be dropped on a given target, and even to determine what sorts of floating divs will do what when they are dropped, but in this case you only want a bare-bones implementation of IDropTarget that will allow any floating div to be dropped on it (a sketch of a more selective canDrop follows below).  Your implementation of the “drop” method is similarly bare bones.  When a floating element is dropped on one of your drop targets, an alert message will be displayed indicating that something has occurred.  And that’s about it.  You now have a drop behavior that works with the floating behavior we used in the previous examples.
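For instance, a more selective canDrop might look something like this (a minimal sketch; the ‘HTML’ data type string is an assumption about what the floating behavior reports, not something confirmed in this tutorial):

canDrop: function(dragMode, dataType) {
    // Only accept drags whose data type matches one this zone cares about.
    return dataType === 'HTML';
},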

You should now write up a page to show off your new custom dropzone behavior.  You can build on the previous samples to accomplish this.  In the Script Manager, besides registering the PreviewDragDrop script, you will also want to register your new DropZoneBehavior script:



<asp:ScriptManager ID="ScriptManager1" runat="server">
    <Scripts>
        <asp:ScriptReference Name="Microsoft.Web.Resources.ScriptLibrary.PreviewScript" />
        <asp:ScriptReference Name="Microsoft.Web.Resources.ScriptLibrary.PreviewDragDrop" />
        <asp:ScriptReference Path="scriptLibrary/DropZoneBehavior.js" />
    </Scripts>
</asp:ScriptManager>


Next, you will want to add a new div tag to the HTML body that can be used as a drop target:



<div style="background-color:Red;height:200px;width:200px;">
    <div id="draggableDiv" style="height:100px;width:100px;background-color:Blue;">
        <div id="handleBar" style="height:20px;width:auto;background-color:Green;">
        </div>
    </div>
</div>
<div id="dropZone" style="background-color:cornflowerblue;height:200px;width:200px;">
    Drop Zone
</div>


Finally, you need to add a declarative markup element to add your custom DropZone behavior to the div you plan to use as a dropzone element. The XML markup should look like this:



<script type="text/xml-script">
    <page xmlns:script="http://schemas.microsoft.com/xml-script/2005" xmlns:JavaScript="Custom.UI">
        <components>
            <control id="dropZone">
                <behaviors>
                    <JavaScript:DropZoneBehavior/>
                </behaviors>
            </control>
            <control id="draggableDiv">
                <behaviors>
                    <floatingBehavior handle="handleBar"/>
                </behaviors>
            </control>
        </components>
    </page>
</script>


The code you have just written adds a drop zone to the original declarative drag and drop example.  When you drop your drag element on the drop zone, an alert message should now appear.  You can expand on this code to make the drop method of your custom dropzone behavior do much more interesting things, such as firing off other javascript events in the current page, or even using ASP.NET Ajax to call a webservice that will in turn process server-side code for you.  Here is one small example of a more useful drop method.
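A minimal sketch, replacing the alert above with something less intrusive (the message text is arbitrary):

drop: function(dragMode, type, data) {
    // Record the drop in the zone itself instead of interrupting the user.
    this.get_element().innerHTML = 'Something was dropped here.';
},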

III. ASP.NET Ajax Dynamic Drag and Drop


 



Since the declarative model is much cleaner than the imperative model, why would you ever want to write your own javascript to handle Ajax Extensions behaviors?  You might want to roll your own javascript if you want to add behaviors dynamically.  One limitation of the declarative model is that you can only work with objects that are initially on the page.  If you start adding objects to the page dynamically, you cannot add the floating behavior to them using the declarative model.  With the imperative model, on the other hand, you can.

Building on the previous example, you will replace the “pageLoad()” function with a function that creates floating divs on demand.  The following javascript function will create a div tag with another div tag embedded to use as a handlebar, then insert the div tag into the current page, and finally add floating behavior to the div tag:


function createDraggableDiv() {
    var panel = document.createElement("div");
    panel.style.height = "100px";
    panel.style.width = "100px";
    panel.style.backgroundColor = "Blue";
    var panelHandle = document.createElement("div");
    panelHandle.style.height = "20px";
    panelHandle.style.width = "auto";
    panelHandle.style.backgroundColor = "Green";
    panel.appendChild(panelHandle);
    $get('containerDiv').appendChild(panel);
    addFloatingBehavior(panel, panelHandle);
}

You will then just need to add a button to the page that calls the “createDraggableDiv()” function. The new HTML body should look something like this:


<input type="button" value="Add Floating Div" onclick="createDraggableDiv();" />
<div id="containerDiv" style="background-color:Purple;height:800px;width:600px;"/>

This will allow you to add as many draggable elements to your page as you like, thus demonstrating the power and flexibility available to you once you understand the relationship between using Ajax Extensions declaratively and using it programmatically.  As a point of reference, here is the complete code for the dynamic drag and drop example:



<%@ Page Language="C#" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
<title>Imperative Drag and Drop II</title>
<script type="text/javascript">
    function createDraggableDiv() {
        var panel = document.createElement("div");
        panel.style.height = "100px";
        panel.style.width = "100px";
        panel.style.backgroundColor = "Blue";
        var panelHandle = document.createElement("div");
        panelHandle.style.height = "20px";
        panelHandle.style.width = "auto";
        panelHandle.style.backgroundColor = "Green";
        panel.appendChild(panelHandle);
        $get('containerDiv').appendChild(panel);
        addFloatingBehavior(panel, panelHandle);
    }
    function addFloatingBehavior(ctrl, ctrlHandle){
        $create(Sys.Preview.UI.FloatingBehavior, {'handle': ctrlHandle}, null, null, ctrl);
    }
</script>
</head>
<body>
<form id="form1" runat="server">
<asp:ScriptManager ID="ScriptManager1" runat="server">
    <Scripts>
        <asp:ScriptReference Name="Microsoft.Web.Resources.ScriptLibrary.PreviewScript.js" />
        <asp:ScriptReference Name="Microsoft.Web.Resources.ScriptLibrary.PreviewDragDrop.js" />
    </Scripts>
</asp:ScriptManager>
<h2>Imperative Drag and Drop Code with javascript: demonstrate dynamic loading of behaviors</h2>
<input type="button" value="Add Floating Div" onclick="createDraggableDiv();" />
<div id="containerDiv" style="background-color:Purple;height:800px;width:600px;"/>
</form>
</body>
</html>

II. ASP.NET Ajax Imperative Drag and Drop


 



To accomplish the same thing using a programmatic model requires a bit more code, but not much more.  It is important to understand that when you add an Ajax Extensions Script Manager component to your page, you are actually giving instructions to have the Ajax Extensions javascript library loaded into your page.  The Ajax Extensions library, among other things, provides client-side classes that extend the DOM and provide you with tools that allow you to code in a browser agnostic manner (though there currently are still issues with Safari compatibility).  These client-side classes also allow you to add behaviors to your html elements.

To switch to an imperative model, you will need to replace the XML markup with two javascript functions.  The first one is generic script to add floating behavior to an html element.  It leverages the Ajax Extensions client-side classes to accomplish this:



<script type="text/javascript">
    function addFloatingBehavior(ctrl, ctrlHandle){
        $create(Sys.Preview.UI.FloatingBehavior, {'handle': ctrlHandle}, null, null, ctrl);
    }
</script>



The function takes two parameters: the html element that you want to make draggable, and the html element that serves as the drag handle for the dragging behavior.  The new $create function encapsulates the instantiation and initialization routines for the behavior.  The addFloatingBehavior utility function will be used throughout the rest of this tutorial.

Now you need to call the “addFloatingBehavior” function when the page loads.  This, surprisingly, was the hardest part about coding this example.  The Script Manager doesn’t simply create a reference to the Ajax Extensions javascript libraries, and I have read speculation that it actually loads the library scripts into the DOM.  In any case, what this means is that the libraries get loaded only after everything else on the page is loaded.  The problem for us, then, is that there is no standard way to make our code that adds the floating behavior run after the libraries are loaded; and if we try to run it before the libraries are loaded, we simply generate javascript errors, since all of the Ajax Extensions methods we call can’t be found.

There are actually a few workarounds for this, but the easiest one is to use a custom Ajax Extensions event called “pageLoad()” that only gets called after the libraries are loaded.  To add the floating behavior to your div tag when the page is first loaded (but after the library scripts are loaded), you just need to write the following:


<script type="text/javascript">
    function pageLoad(){
        addFloatingBehavior(document.getElementById('draggableDiv'),
            document.getElementById('handleBar'));
    }
</script>

which, in turn, can be written this way, using an Ajax Extensions scripting shorthand that replaces “document.getElementById()” with “$get()“:


<script type="text/javascript">
    function pageLoad(){
        addFloatingBehavior($get('draggableDiv'), $get('handleBar'));
    }
</script>

And once again, you have a draggable div that behaves exactly the same as the draggable div you wrote using the declarative model.

I. ASP.NET Ajax Declarative Drag and Drop

 


The first task is to use XML markup to add drag and drop behavior to a div tag. By drag and drop, I just mean the ability to drag an object and have it stay wherever you place it.  The more complicated behavior of making an object actually do something when it is dropped on a specified drop target will be addressed later in this tutorial.  To configure your webpage to use ASP.NET Ajax, you will need to install Microsoft.Web.Extensions.dll into your Global Assembly Cache.  You will also need a reference to the library Microsoft.Web.Preview.dll.  Finally, you will need to configure your web.config file with the following entry:



<system.web>
    <pages>
        <controls>
            <add tagPrefix="asp" namespace="Microsoft.Web.UI" assembly="Microsoft.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
            <add tagPrefix="asp" namespace="Microsoft.Web.UI.Controls" assembly="Microsoft.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
            <add tagPrefix="asp" namespace="Microsoft.Web.Preview.UI" assembly="Microsoft.Web.Preview" />
        </controls>
    </pages>
</system.web>


You will need to add an Atlas Script Manager control to your .aspx page and configure it to use the PreviewDragDrop library file:



<asp:ScriptManager ID="ScriptManager1" runat="server">
    <Scripts>
        <asp:ScriptReference Name="Microsoft.Web.Resources.ScriptLibrary.PreviewScript.js" />
        <asp:ScriptReference Name="Microsoft.Web.Resources.ScriptLibrary.PreviewDragDrop.js" />
    </Scripts>
</asp:ScriptManager>


Add the div object you want to make draggable, and make sure it has a drag handle:



<div style="background-color:Red;height:800px;width:600px;">
    <div id="draggableDiv" style="height:100px;width:100px;background-color:Blue;">
        <div id="handleBar" style="height:20px;width:auto;background-color:Green;">
        </div>
    </div>
</div>


Finally, add the markup script that will make your div draggable:



<script type="text/xml-script">
    <page xmlns:script="http://schemas.microsoft.com/xml-script/2005">
        <components>
            <control id="draggableDiv">
                <behaviors>
                    <floatingBehavior handle="handleBar"/>
                </behaviors>
            </control>
        </components>
    </page>
</script>


And with that, you should have a draggable div tag.  The example demonstrates the simplicity and ease of using the declarative model with Ajax Extensions.  In the terminology being introduced with Ajax Futures, you have just used declarative markup to add the floating behavior to an html element.