The Open Internet and Its Enemies


Crazyfinger makes an interesting comment on Jeff Jarvis’s blog.




Deadwood. The blogosphere of today feels like that town, with its own version of Swearengens, E. B. Farnums…


There is a lot of background to this, worth unpacking; it can all be distilled, however, to the observation that people are sometimes mean on the internet.


The long version goes something like this.  Kathy Sierra, an admired web design guru, Web 2.0 advocate, and co-author of the immensely popular Head First series of technical books, has a blog.  Recently people started making obnoxious comments on her blog, obnoxious comments about her on other blogs, Photoshopped images involving her, and finally death threats.  She is now considering getting out of the blogosphere altogether, a dramatic instance of Gresham’s law at work.  In the meantime, however, it turns out she has some notable friends who are now trying to use their influence to do something about the netnasties.  Tim O’Reilly, who runs a successful technical press and also helped coin the term Web 2.0, proposes a blogger code of conduct to which bloggers can sign on as a mark of their bona fides.


In other milieus, this mild suggestion of self-regulation would seem perfectly reasonable, but the internet is not just any milieu.  It has mythic origins as an unregulated medium for the transmission of ideas and great hopes — democratic ideals, anarchic utopias, freedom of speech, freedom of expression — are tied to it.  The wildness of the internet contributes to its appeal.  Like the American frontier, it is a terrain where anyone can re-create themselves, and build a new culture in which they can happily dwell.


This conjoining of freedom, the Internet, the blogosphere, Web 2.0, and the Open Source movement was at one time promoted by the same people who are finding problems with it now.  In a 2006 commencement speech at UC Berkeley’s School of Information, Tim O’Reilly said:




The internet has enormous power to increase our freedom. It also has enormous power to limit our freedom, to track our every move and monitor our every conversation. We must make sure that we don’t trade off freedom for convenience or security.


In his own explication of what his neologism Web 2.0 meant, O’Reilly wrote:




If an essential part of Web 2.0 is harnessing collective intelligence, turning the web into a kind of global brain, the blogosphere is the equivalent of constant mental chatter in the forebrain, the voice we hear in all of our heads. It may not reflect the deep structure of the brain, which is often unconscious, but is instead the equivalent of conscious thought. And as a reflection of conscious thought and attention, the blogosphere has begun to have a powerful effect.


Kathy Sierra has been more consistent in her view of the openness of the Internet.  In 2005 she discussed the enforcement of “be-nice” rules on a forum she started.




Enforcing a “be nice” rule is a big commitment and a risk. People complain about the policy all the time, tossing out “censorship” and “no free speech” for starters. We see this as a metaphor mismatch. We view javaranch as a great big dinner party at the ranch, where everyone there is a guest. The ones who complain about censorship believe it is a public space, and that all opinions should be allowed. In fact, nearly all opinions are allowed on javaranch. It’s usually not about what you say there, it’s how you say it.


And this isn’t about being politically correct, either. It’s a judgement call by the moderators, of course. It’s fuzzy trying to decide exactly what constitutes “not nice”, and it’s determined subjectively by the culture of the ranch.


At the same time, it was also she who pointed out, quite accurately, this principle of the Internet:




If we want our users (members, guests, students, potential customers, kids, co-workers, etc.) to pay attention, we have to be provocative. We can moan all we want about how the responsible person should pay attention to what’s important rather than what’s compelling. But it’s not about responsibility or maturity. It’s not even about interest.



Provocation is in the eye of the provoked, obviously, so there’s no clear formula. But there’s plenty we can try, depending on the circumstances….


These notions of the Internet age as the herald of a new form of social interaction even permeate seemingly unrelated movements like the Agile Methodology for software development, which promotes:




Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan


Even the Open Source movement, which promotes a particular way of distributing software, includes these interesting stipulations in the definition of open source licenses it promulgates:




5. No Discrimination Against Persons or Groups

The license must not discriminate against any person or group of persons.


6. No Discrimination Against Fields of Endeavor

The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.


By open, they truly mean open.  Given this emphasis on ideals of individuality, freedom, and equality in Internet culture, it is easy to see why any suggestion that it might be in everyone’s interest to curtail any of these is treated as anathema.  It also explains the strange bind those working on O’Reilly’s proposed blogger’s code of conduct find themselves in.  Once it was agreed that some sort of action should be taken to deal with the netnasties, it was discovered that nothing could really be done without enforcement, and no one wants enforcement, since it is a form of coercion.  Consequently, the code of conduct has turned out to be a document that offends half of the Internet by suggesting mild coercion in the first place, and then draws the derision of the other half by having no teeth.  The draft code of conduct is currently in a state of flux, and may change radically over the next several weeks.  At one point, however, it included an article stating that bloggers don’t take themselves seriously.  The intent of this was somewhat lost on me, but the irony was not.  Bloggers don’t take themselves seriously, and yet they feel they need a code of conduct to explicate what they believe in, including the tenet that they don’t take themselves seriously.


As it is shaping up, though, the code of conduct resembles to a remarkable degree the bylaws of various community forums across the Internet.  What separates forums from blogs is, primarily, that forums are composed of people who consent to the oversight of moderators as a mechanism for regulating discussions.  Blogs, on the other hand, are visible and generally accessible to everyone.  Forums usually have mechanisms in place to eject members who repeatedly behave badly.  The Internet has no such mechanism.  Finally, forums usually provide an audience of only a hundred to a thousand people.  Blogs offer a potential audience of hundreds of millions.


It is likely for this last reason that many people have turned to blogs, rather than forums, as their main outlet for Internet discourse.  If gold was the main currency of the Old West, attention is the main currency of the new frontier — or, as advertisers like to call it, eyeballs.  The public life of many people on the Internet involves acquiring eyeballs, which can then be converted to real money if one chooses to advertise on one’s site, or else may simply be used as a mode of social promotion.  Returning to moderated forums is akin to returning to the towns back East where laws were more stringent, and safety more assured, but opportunities for advancement and transformation were limited.  The blogosphere holds out the promise that anyone can be famous if they turn the right phrase, capture the right attention, come up with the next big idea.

For these very reasons, however, the rules of a community cannot be enforced where no laws exist.  How then does justice get enforced on the digital frontier?


As Crazyfinger (who has an interesting blog of his own about Adam Smith’s Theory of Moral Sentiments) suggests, an analogy can be drawn between the TV drama Deadwood and the current state of the Internet.  Had Deadwood survived another season, we might even have received a definitive answer, but as it is, we only have suggestions.


Deadwood, in the show of the same name, is a gold town in South Dakota marching slowly toward incorporation and civilization.  Alma Garret (a proxy for Kathy Sierra) has accidentally struck it rich when her claim is discovered to contain one of the richest gold veins in the region.  As a consequence, she is a victim of unscrupulous persons anxious to take hold of her, ermm, eyeballs.  Having neither reputation to lose nor character to restrain them, they act provocatively in their efforts to raise their own social status.


Throughout the series, three main avenues are offered by which Alma Garret might receive the protection of civilization she requires, in a place without civilization.  The first is Wild Bill Hickok, who, through the authority granted him by his reputation, is able to coerce people into behaving appropriately.  This is analogous to the attempt by Kathy Sierra’s friend Tim O’Reilly, as well as others, to use their reputations to shame people into agreeing to some sort of blogging standards.  Sadly, the attempt is also analogous to the letter the town fathers publish in the local paper in Season Three to turn sentiment against George Hearst, who has designs on Alma’s gold.  Wild Bill, of course, is shot at the end of Season One.  Best character on TV ever.  Nuff said.


Alma Garret’s second line of defense is Sheriff Seth Bullock.  Bullock is of heroic proportions.  Through the exercise of precise and barely restrained violence, Bullock is able to herd and intimidate those who would upset the peace of Deadwood.  Bullock is the equivalent of the sort of hero we occasionally encounter in forums and message boards, who through wit, knowledge, and force of character is able both to inspire people to behave better and to punish with an acid tongue those who do not.  Alas, on the Internet there are too few of these, and the few there are tend to retreat into their own preoccupations over time.  A case again, perhaps, of Gresham’s Law: bad money drives good money out of circulation.


The last option Alma Garret has at her disposal is to accede to the wishes of the villainous and violent George Hearst on the best terms she is able.  This is what Alma Garret does at the end of Season Three, rather than force a confrontation that would likely see many of the main characters killed off.  This, as far as I can see, is the only way to bring civilization to the blogosphere, and it is an unhappy turn.  Civilization, once we accept that there will always be netnasties, is only possible when we turn a monopoly of coercive power over to a single entity.  If, as O’Reilly, Sierra, and others have argued, it is necessary to make the blogosphere, and the Internet by extension, obey the rules of a community forum, then something like this must occur.  The most likely way for this to happen is if one of the main social networking sites joins forces with one of the major blogging hosts, such as Typepad or LiveJournal, and compels everyone who wants to blog to sign on to their service, following the other principle of the Internet: that growth engenders growth.  Having acquired a majority share of the blogosphere, such a monopolistic regime could then enforce community rules like the ones Tim O’Reilly is attempting to formulate.  This entails the victory of Hearst and of civilization, and the closing of the frontier.


And, I think, it is also the moral of Deadwood, if there is a moral to be found.  The frontier gives us heroes, but it also engenders monsters.  The idealized vision of the frontier must at some point be confronted with the ugliness it foments: a place of greed, corruption, misogyny, pornography, and guys who say “cocksucker”.  If we can’t find it in ourselves to embrace the ugliness along with the heroism of the frontier, then we must make the great compromise and bear the yoke which assures civility, and which, Rousseau promises, is only a light yoke, after all.

Two Kinds of Jargon


I had taken it for granted that “Web 2.0” is simply a lot of hype until I came across this defense of the term by Kathy Sierra by way of Steve Marx’s blog.  Kathy Sierra argues that “Web 2.0” is not simply a buzzword because it is, in fact, jargon.  She goes on to explore the notion of jargon and to explain why jargon is actually a good thing, and shamefully maligned.  This, I thought, certainly goes against the conventional wisdom. 


In my various careers, I have become intimately familiar with two kinds of jargon: academic jargon and software jargon.  I will discuss academic jargon first, and see if it sheds any light on software jargon.  The English word jargon is derived from the Old French word meaning “a chattering,” for instance of birds.  It is generally used somewhat pejoratively, as in this sentence from an article by George Packer in the most recent New Yorker concerning the efforts of anthropologists to make the “war on terror” more subtle as well as more culturally savvy:



One night earlier this year, Kilcullen sat down with a bottle of single-malt Scotch and wrote out a series of tips for company commanders about to be deployed to Iraq and Afghanistan.  He is an energetic writer who avoids military and social-science jargon, and he addressed himself intimately to young captains who have had to become familiar with exotica such as “The Battle of Algiers,” the 1966 film documenting the insurgency against French colonists.




In this passage, jargon is understood as a possibly necessary mode of professional language that, while it facilitates communication within a professional community, makes the dissemination of ideas outside of that community of speakers difficult.


Even with this definition, however, one can see a sense in which the use of professional jargon is not a completely bad thing, but is in fact a trade-off.  It makes speaking between professional communities difficult, as well as initiation into such a community (for instance, the initiation of young undergraduates into philosophical discourse).  Once one is initiated into the argot of a professional community, however, the special language actually facilitates communication, serving as a shorthand for much larger concepts and increasing the precision of the terms used within the community, since non-technical language tends to be ambiguous in a way that technical jargon, ideally, is not.  Take, for instance, the following sentences:



The question about that structure aims at the analysis of what constitutes existence. The context of such structures we call “existentiality”. Its analytic has the character of an understanding which is not existentiell, but rather existential. The task of an existential analytic of Dasein has been delineated in advance, as regards both its possibility and its necessity, in Dasein’s ontical constitution.




This passage is from the beginning of Martin Heidegger’s Being and Time, as translated by John Macquarrie and Edward Robinson.  To those unfamiliar with the jargon that Heidegger develops for his existential-phenomenology, it probably looks like balderdash.  One can see how potentially, with time and through reading the rest of this work, one might eventually come to understand Heidegger’s philosophical terms.  Jargon, qua jargon, is not necessarily bad, and much of the bad rap that jargon gets is often due to the resistance to comprehension and the sense of intellectual insecurity it engenders when one first encounters it.  Here is another example of jargon I pulled from a recent technical post on www.beyond3d.com called Origin of Quake3’s Fast InvSqrt():



The magic of the code, even if you can’t follow it, stands out as the i = 0x5f3759df – (i>>1); line. Simplified, Newton-Raphson is an approximation that starts off with a guess and refines it with iteration. Taking advantage of the nature of 32-bit x86 processors, i, an integer, is initially set to the value of the floating point number you want to take the inverse square of, using an integer cast. i is then set to 0x5f3759df, minus itself shifted one bit to the right. The right shift drops the least significant bit of i, essentially halving it.




I don’t understand what the author of this passage is saying, but I do know that he is enthusiastic about it and assume that, as with the Heidegger passage, I can come to understand the gist of the argument given a week and a good reference work.  I also believe that the author is trying to say what he is saying in the most precise and concise way he is able, and this is why he resorts to one kind of  jargon to explain something that was originally written in an even more complicated technical language: a beautiful computer algorithm.
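For readers who want to see the trick whole, here is a reconstruction of the routine the passage describes, written as a small sketch in modern C rather than the verbatim Quake III source (the original folded everything into a function called InvSqrt and used pointer casts where this version uses memcpy; the function name and layout here are mine, though the magic constant is the one quoted above):

```c
#include <stdint.h>
#include <string.h>

/* Approximate 1/sqrt(x) via the bit-level trick described above:
 * reinterpret the float's bits as an integer, form the "magic"
 * initial guess, then refine it with one Newton-Raphson step. */
float inv_sqrt(float x)
{
    float half = 0.5f * x;
    uint32_t i;
    memcpy(&i, &x, sizeof i);        /* read the float's bit pattern */
    i = 0x5f3759df - (i >> 1);       /* magic constant minus the bits shifted right */
    float y;
    memcpy(&y, &i, sizeof y);        /* back to a float: a rough first guess */
    y = y * (1.5f - half * y * y);   /* one Newton-Raphson refinement */
    return y;
}
```

Calling inv_sqrt(4.0f) yields roughly 0.499, a fraction of a percent away from the true value 0.5; the point of the magic constant is that it supplies a guess good enough for a single refinement step to suffice.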


However there is another, less benign, definition for jargon that sees its primary function not in clarifying concepts, but in obfuscating them.  According to Theodor Adorno, in his devastating and unrelenting attack on Heidegger in The Jargon of Authenticity, jargon is “a sublanguage as superior language.”  For Adorno jargon, especially in Heidegger’s case, is an imposture and a con.  It is the chosen language of charlatans. Rudolf Carnap makes a similar, though less brutal, point in section 5 of his “Overcoming Metaphysics”, entitled “Metaphysical Pseudo-Sentences”, where he takes on Heidegger’s notorious sentence “Das Nichts selbst nichtet” (“The Nothing itself nothings”), from the lecture “What Is Metaphysics?”, for its meaninglessness.


We might be tempted to try to save jargon from itself, then, by distinguishing two kinds of jargon: good jargon and bad jargon.  Drawing such distinctions is a practice at least as old as the use of jargon to clarify ideas, and goes back as far as, if not farther than, Pausanias’s distinction between the heavenly and the common Aphrodites in Plato’s Symposium.  With Pausanias we can say that the higher and the baser jargon can be distinguished, as he distinguishes two kinds of love, by the intent of the person using it.  When jargon is used in order to clarify ideas and make them precise, we are dealing with proper jargon.  When jargon is used, contrarily, to obfuscate, or to make the speaker seem smarter than he really is, this is deficient or bad jargon.


There are various Virgin and the Whore problems with this distinction, however, not least of which is how to tell the two kinds of jargon apart.  It is in fact rather rare to find instances of bad jargon that everyone concedes is bad jargon, with the possible exception of hoaxes like the Sokal affair, in which the physicist Alan Sokal wrote a jargon-laden pseudo-paper about post-modernism and quantum gravity and got it published in a cultural studies journal.  Normally, however, when certain instances of jargon are identified as “bad” jargon, we also tend to find defenders who insist that it is not, and who claim that, to the contrary, those calling it bad jargon simply do not understand it.  This is a difficulty not unlike one a wit described when asked to define bad taste.  “Bad taste,” he said, “is the garden gnome standing in my neighbor’s front lawn.”  When asked to define good taste, the wit continued, “Good taste is that plastic pink flamingo standing in my lawn.”


There are more difficulties with trying to distinguish good jargon from bad jargon, such as cases where good jargon becomes bad over time, or even cases where bad jargon becomes good.  Cases of the latter include Schopenhauer’s reading of a popular and apparently largely incorrect account of Indian philosophy, which he then absorbed into his own very insightful and influential philosophical project.  Georges Bataille’s misreading of Hegel and Jacques Lacan’s misreading of Freud also bore impressive fruit.  Finally, there’s the (probably apocryphal) story of the student of Italian who approached T.S. Eliot and began asking him about his peculiar and sometimes incorrect use of Italian in his poetry, until Eliot finally broke off the conversation with the admission, “Okay, you caught me.”  Cases such as these undermine the common belief that it is intent, or origins, which make a given jargon good or bad.


The opposite can, of course, also happen.  Useful jargon may, over time, become bad and obfuscating.  We might then say that while the terms used in Phenomenology proper are difficult but informative, they were corrupted when Heidegger took them up in his Existential-Phenomenology, or we might say that Heidegger’s jargon is useful but later philosophers influenced by his philosophy such as Derrida and the post-structuralists corrupted it, or finally we might even say that Derrida got it right but his epigones in America were the ones who ultimately turned his philosophical insights into mere jargon.  This phenomenon is what I take Martin Fowler to be referring to in his short bliki defense of the terms Web 2.0 and Agile entitled Semantic Diffusion.  According to Fowler:



Semantic diffusion occurs when you have a word that is coined by a person or group, often with a pretty good definition, but then gets spread through the wider community in a way that weakens that definition. This weakening risks losing the definition entirely – and with it any usefulness to the term.


Thus Fowler takes up Kathy Sierra’s defense of Web 2.0 as jargon, recognizes some weaknesses in this explanation, and then fortifies the defense of the term with the further explanation that while the term may be problematic now, it was right in its origins, and pure in its intent.


Fowler here makes a remarkably Heideggerian observation.  Heidegger was somewhat obsessed with language and believed that language tends, over time, to hide and obfuscate meaning, when it should rather shed light on things.  Along this vein,  Being and Time begins with the claim that we today no longer understand the meaning of Being, and that this forgetting is so thorough that we are not even any longer aware of this absence of understanding, so that even the question “What Is Being?”, which should be the most important question for us, is for the most part ignored and overlooked.  To even begin understanding Being, then, we must first try to understand the meaning of the question of Being.  We must first come to the realization that there is even a problem there in the first place which needs to be resolved.  Heidegger’s chosen solution to this problem involves the claim that while language conceals meaning, it also, in its origins, is able to reveal it if we are able to come to understand language correctly.  He gives an example with the term aletheia, which in Greek means truth.  By getting to the origins of language and the experience of language, we can reveal aletheia. Aletheia, etymologically, means not-forgetting (thus the river Lethe is, in Greek mythology, the river of forgetting that the dead must cross before resting in Hades), and so the truth is implicitly an unconcealment that recovers the meanings implicit in language.  The authentic meaning of jargon, Fowler similarly claims, can be arrived at if we remove the accretions caused by “semantic diffusion” and get back to the original intent.


But is this true?  Do apologetics for terms such as “Web 2.0” and “Agile” insisting that they are “jargon” ultimately succeed?  Do such attempts reveal the original intent implicit in the coining of these terms or do they simply conceal the original meanings even further?


My personal opinion is that jargon, by its nature, never really reveals, but always, in one way or another, by condensing thought and providing a shorthand for ideas, conceals.  It can of course be useful, but it can never be instructive; rather, it gives us a sense that we understand things we do not understand simply because we know how to use a given jargon.  At best, jargon can be used as an indicator that points to a complex of ideas shared by a given community.  At worst, it is used as shorthand for bad or incoherent ideas that never themselves get critical treatment, because the jargon takes the place of ideas and becomes mistaken for ideas.  This seems particularly to be the case with the defense of “Web 2.0” and “Agile” as “jargon”, as if people have a problem with the terms themselves rather than what they stand for.  “Jargon”, as a technical term, is not particularly useful.  It is to some extent already corrupt from the get-go.


One way around this might be to simply stop using the term “jargon”, whether bad or good, when discussing things like Web 2.0 and Agile.  While it is common in English to use Latin derived terms for technical language and Anglo-Saxon words for common discourse, in this case we might be obliged to make the reverse movement as we look for an adequate replacement term for “jargon”.


In 2005, the Princeton philosopher Harry Frankfurt published a popular pamphlet called On Bullshit that attempts to give a philosophical explanation of the term. At first blush, this title may seem somewhat prejudicial, but I think that, as with jargon, if we get away from pre-conceived notions as to whether the term is good or bad, it will be useful as a way to get a fresh look at the term we are currently trying to evaluate, “Web 2.0”.  It can also be used most effectively if we do the opposite of what we did with “jargon”: jargon was first taken to appropriately describe the term “Web 2.0”, and only then was an attempt made to understand what jargon actually was.  In this case, I want first to try to understand what bullshit is, and then see whether it applies to “Web 2.0”.


Frankfurt begins his analysis with a brief survey of the literature on bullshit, which includes Max Black’s study of “humbug” and Augustine of Hippo’s analysis of lying.  From these, he concludes that bullshit and lying are different things, and as a preliminary conclusion, that bullshit falls just short of lying.  Moreover, he points out that it is all pervasive in a way that lying could never be.



The realms of advertising and of public relations, and the nowadays closely related realm of politics, are replete with instances of bullshit so unmitigated that they can serve among the most indisputable and classic paradigms of the concept.


Not satisfied with this preliminary explanation, however, Frankfurt identifies further elements that characterize bullshit, since there are many things that can fall short of a lie and yet, perhaps, not rise to the level of bullshit.  He then identifies inauthenticity as the hallmark that distinguishes bullshit from lies, on the one hand, and simple errors of fact, on the other.



For the essence of bullshit is not that it is false but that it is phony. In order to appreciate this distinction, one must recognize that a fake or a phony need not be in any respect (apart from authenticity itself) inferior to the real thing. What is not genuine need not also be defective in some other way. It may be, after all, an exact copy. What is wrong with a counterfeit is not what it is like, but how it was made.




It is not what a bullshitter says, then, that marks him as a bullshitter, but rather his state of mind when he says it.  For Frankfurt, bullshit doesn’t really even belong on the same continuum as truth and falsehood, but is rather opposed to both.  Like the Third Host in Dante’s Inferno, it is indifference to the struggle that ultimately identifies and marks out the class of bullshitters.


Again, there are echoes of Heidegger here.  According to Heidegger, we are all characterized by this “thrownness”, which is the essence of our “Being-In-The-World”.  In our thrownness, we do not recognize ourselves as ourselves, but rather as das Man, or as the they-self,



which we distinguish from the authentic Self – that is, from the Self which has been taken hold of in its own way [eigens ergriffenen]. As they-self, the particular Dasein has been dispersed into the ‘they’, and must first find itself.” And further “If Dasein discovers the world in its own way [eigens] and brings it close, if it discloses to itself its own authentic Being, then this discovery of the ‘world’ and this disclosure of Dasein are always accomplished as a clearing-away of concealments and obscurities, as a breaking up of the disguises with which Dasein bars its own way.


The main difference between Frankfurt’s and Heidegger’s analyses of authenticity, in this case, is that Frankfurt seems to take authenticity as normative, whereas Heidegger considers inauthenticity the zero-point state of man as we are first thrown into the world.


For now, however, the difference isn’t all that important.  What is important is Frankfurt’s conclusion about the sources of bullshit.  At the end of his essay, Frankfurt in effect writes that there are two kinds of bullshit, one of which is defensible and one of which is not.  The indefensible kind of bullshit is based on a subjectivist view of the world that denies truth and falsity altogether (and here I take Frankfurt to be making a not too veiled attack on the relativistic philosophical disciplines that are based on Heidegger’s work).  The defensible form of bullshit — I hesitate to call it good bullshit — is grounded in the character of our working lives, which force us to work with and represent information that is by its nature too complex for us to digest and promulgate accurately.  This, I take it, is the circumstance academic lecturers and others frequently find themselves in, as they stand behind the podium and are obliged to talk authoritatively about subjects they do not feel up to giving a thorough, much less an authentic, account of.



Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about. Thus the production of bullshit is stimulated whenever a person’s obligations or opportunities to speak about some topic are more excessive than his knowledge of the facts that are relevant to that topic.




This class of speech is the result of our inability to apply Wittgenstein’s dictum, “Whereof one cannot speak, thereof one must be silent.”  There are times when we are not in a position to remain silent, and so are obligated to bullshit.  Bullshit, in these cases, is a way of making the best of our situation.


Per the original arrangement, it is now time to put “bullshit” to the test and see whether either cynical bullshit or benign bullshit can be ascribed to the term “Web 2.0”.  For better or worse, I am going to use Jeffrey Zeldman’s blog post on Web 2.0 (titled, confusingly enough, “Web 3.0”) as the main text for this analysis.  Zeldman is generally sympathetic to the ideas and phenomena the term “Web 2.0” is meant to encompass, but he also points out the aspects of the term that grate.  The most salient is the degree to which it smells like a sales pitch.





It soon appeared that “Web 2.0” was not only bigger than the Apocalypse but also more profitable. Profitable, that is, for investors like the speaker. Yet the new gold rush must not be confused with the dot-com bubble of the 1990s:

“Web 1.0 was not disruptive. You understand? Web 2.0 is totally disruptive. You know what XML is? You’ve heard about well-formedness? Okay. So anyway—”

And on it ran, like a dentist’s drill in the Gulag.


Zeldman associates Web 2.0 with marketing, which Frankfurt in turn associates with bullshit.  Frankfurt even goes so far as to identify sales and its related disciplines as “the most indisputable and classic paradigms of the concept.”  Moreover, the defense that Web 2.0 describes a real phenomenon, as Fowler insists and Zeldman grants, doesn’t make it not bullshit, since Frankfurt concedes that bullshit can just as well be true as false.  What matters is the authenticity or inauthenticity of the original claim, and the sense that something is a sales pitch is already an indication that something inauthentic is going on.  So “Web 2.0” certainly meets Frankfurt’s criteria for bullshit.


The more important question is what kind of bullshit is it?  Is it benign, or cynical?  According to Frankfurt’s distinction, again, the difference is whether the bullshit is grounded in the nature of one’s work or rather in some sort of defect of epistemic character.


Here the answer is not so simple, I think, since software has two strands: one going back to the hobbyist roots of programming, the other to the monetizing potential of information technology.  Both strands struggle within the heart of the software engineering industry.  The open source movement (often cited as one key aspect of Web 2.0) is emblematic of the purist strain, while the advertising prospects (with Google, itself often cited as a key exemplar of the Web 2.0 phenomenon, in the vanguard) symbolize the notion that a good idea isn’t enough; one also has to be able to sell one’s ideas.


Software programming, in its origins, is a discipline practiced by nerds.  In other words, it is esoteric knowledge, extremely powerful, practiced by a few and generally misunderstood by the majority of people.  As long as there is no desire to explain the discipline to outsiders, there is no problem with treating software programming as a hobby.  At some point, however, every nerd wants to be appreciated by people who are not his peers, and to accomplish this, he is forced to explain himself and ultimately to sell himself.  The turning point for this event is well documented: it occurred on February 3rd, 1976, when Bill Gates wrote an open letter to the hobbyist community stating that software had economic value and that it was time for people to start paying for it.


This was a moment of triumph for nerds everywhere, though this was not at first understood, and it still generates resentment to this day, because it irrevocably transformed the nature of software programming.  Once software was recognized as something of economic value, it also became clear that software concepts had to be marketed.  The people who buy software are typically unable to distinguish good software from bad software, and so it becomes the responsibility of those who can to explain why their software is better in terms that are not, essentially, technical.  A hybrid, jargon-ridden vocabulary had to be created in order to bridge the gap between software and the business appetite for software.  Software engineers, in turn, learned to see selling themselves — to consumers, to their managers, and finally to their peers — as part of the job of software engineering, though this forced obligation to sell themselves continues to be regarded with suspicion and resentment.  The hope held out to such people is that through software they will eventually make enough money, as Bill Gates did, as Steve Jobs did, to finally give up the necessity of selling themselves and return to a pure hobbyist state of mind once again.  They in effect want to be both the virgin and the whore.  This is, of course, a pipe dream.


Consequently, trying to determine whether Web 2.0 is benign bullshit or cynical bullshit is difficult, since sales both is and is not an authentic aspect of the work of software engineering.  What seems to be the case is that Web 2.0 is a hybrid of benign and cynical bullshit.  This schizophrenic character is captured in the notion of Web 2.0 itself, which is at the same time a sales pitch as well as an umbrella term for a set of contemporary cultural phenomena.


Now that we know what bullshit is, and we know that Web 2.0 is bullshit, it is time to evaluate what Web 2.0 is.  In Tim O’Reilly’s original article introducing the notion, called, appropriately, What Is Web 2.0, O’Reilly suggests several key points that he sees as typical of what has been going on in recent years at companies such as Google, Flickr, YouTube and Wikipedia.  These observations include such slogans as “Harnessing Collective Intelligence”, “Data is the Next Intel Inside” and “End of the Software Release Cycle”.  But it is worth asking whether these really tell us what Web 2.0 is, or whether they are simply ad hoc attempts to give examples of what O’Reilly says is a common phenomenon.  When one asks for the meaning of a term such as Web 2.0, what one really wants is the original purpose behind coining it: what is implicit in the term, as Heidegger would put it, yet at the same time concealed by the language typically used to explain it.


As Zeldman points out, one key (and I think the main key) to understanding Web 2.0 is that it isn’t Web 1.0.  The rise of the web was marked by a rise in bluff and marketing that created what we now look back on as the Internet Bubble.  The Internet Bubble, in turn, was built on marketing hype and the most remarkable stream of jargon, used to prop up a technology that, in the end, could not sustain the expectations with which it was overloaded.  By 2005, the bad reputation that had accrued to the Web from these earlier mistakes had generated a cynicism about the new things coming along that really were worthwhile — the blogging phenomenon, Ajax, Wikipedia, Google, Flickr and YouTube.  In order to overcome that cynicism, O’Reilly coined a term that successfully distracted people from the earlier debacle and helped make the Internet a place to invest money once again.  Tim O’Reilly, even if his term is bullshit, as demonstrated above, ultimately has done us all a service by clearing out the previous bullshit.  In a very Heideggerian manner, he made a clearing [Lichtung] for the truth to appear.  He created, much as Heidegger attempted to do for Being, a conceptual space for new ideas about the Internet to make themselves apparent.


Or perhaps my analogy is overwrought. In any case, the question still remains as to what one does with terms that have outlived their usefulness.  In his introduction to Existentialism and Human Emotions, Jean-Paul Sartre describes the status that the term “existentialism” had achieved by 1957.



Someone recently told me of a lady who, when she let slip a vulgar word in a moment of irritation, excused herself by saying, “I guess I’m becoming an existentialist.”



Most people who use the word would be rather embarrassed if they had to explain it, since, now that the word is all the rage, even the work of a musician or painter is being called existentialist.  A gossip columnist in Clartes signs himself The Existentialist, so that by this time the word has been so stretched and has taken on so broad a meaning, that it no longer means anything at all.




Sartre spent most of the rest of his philosophical career refining and defending the term “existentialism,” until finally it was superseded by post-structuralism in France. The term enjoyed a second life in America, until post-structuralism finally made the Atlantic crossing and superseded it there as well, only to be in turn treated first with skepticism, then with hostility, and finally dismissed as mere jargon.  It is only over time that an intellectual clearing can be made in which to re-examine such concepts.  In the meantime, taking a cue from Wittgenstein, we are obliged to remain silent about them.