Two Kinds of Jargon


I had taken it for granted that “Web 2.0” was simply a lot of hype until I came across this defense of the term by Kathy Sierra, by way of Steve Marx’s blog.  Kathy Sierra argues that “Web 2.0” is not simply a buzzword because it is, in fact, jargon.  She goes on to explore the notion of jargon and to explain why jargon is actually a good thing, and one that has been shamefully maligned.  This, I thought, certainly goes against the conventional wisdom.


In my various careers, I have become intimately familiar with two kinds of jargon: academic jargon and software jargon.  I will discuss academic jargon first, and see if it sheds any light on software jargon.  The English word jargon is derived from the Old French word meaning “a chattering,” for instance of birds.  It is generally used somewhat pejoratively, as in this sentence from an article by George Packer in the most recent New Yorker concerning the efforts of anthropologists to make the “war on terror” more subtle as well as more culturally savvy:



One night earlier this year, Kilcullen sat down with a bottle of single-malt Scotch and wrote out a series of tips for company commanders about to be deployed to Iraq and Afghanistan.  He is an energetic writer who avoids military and social-science jargon, and he addressed himself intimately to young captains who have had to become familiar with exotica such as “The Battle of Algiers,” the 1966 film documenting the insurgency against French colonists.




In this passage, jargon is understood as a possibly necessary mode of professional language that, while it facilitates communication within a professional community, makes the dissemination of ideas outside of that community of speakers difficult.


Even with this definition, however, one can see a sense in which the use of professional jargon is not a completely bad thing but is in fact a trade-off.  It makes communication between professional communities difficult, and it makes initiation into such a community difficult (think of the initiation of young undergraduates into philosophical discourse).  Yet once one is initiated into the argot of a professional community, the special language actually facilitates communication: it serves as a shorthand for much larger concepts, and it increases the precision of the terms used within the community, since non-technical language tends to be ambiguous in a way that technical jargon, ideally, is not.  Take, for instance, the following sentences:



The question about that structure aims at the analysis of what constitutes existence. The context of such structures we call “existentiality”. Its analytic has the character of an understanding which is not existentiell, but rather existential. The task of an existential analytic of Dasein has been delineated in advance, as regards both its possibility and its necessity, in Dasein’s ontical constitution.




This passage is from the beginning of Martin Heidegger’s Being and Time, as translated by John Macquarrie and Edward Robinson.  To those unfamiliar with the jargon that Heidegger develops for his existential-phenomenology, it probably looks like balderdash.  One can see, however, how with time, and through reading the rest of the work, one might eventually come to understand Heidegger’s philosophical terms.  Jargon, qua jargon, is not necessarily bad, and much of the bad rap that jargon gets is due to the resistance to comprehension, and the sense of intellectual insecurity, it engenders when one first encounters it.  Here is another example of jargon I pulled from a recent technical post on www.beyond3d.com called Origin of Quake3’s Fast InvSqrt():



The magic of the code, even if you can’t follow it, stands out as the i = 0x5f3759df – (i>>1); line. Simplified, Newton-Raphson is an approximation that starts off with a guess and refines it with iteration. Taking advantage of the nature of 32-bit x86 processors, i, an integer, is initially set to the value of the floating point number you want to take the inverse square of, using an integer cast. i is then set to 0x5f3759df, minus itself shifted one bit to the right. The right shift drops the least significant bit of i, essentially halving it.




I don’t understand what the author of this passage is saying, but I do know that he is enthusiastic about it and assume that, as with the Heidegger passage, I can come to understand the gist of the argument given a week and a good reference work.  I also believe that the author is trying to say what he is saying in the most precise and concise way he is able, and this is why he resorts to one kind of jargon to explain something that was originally written in an even more complicated technical language: a beautiful computer algorithm.
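For readers who would like to stare at the artifact itself, here is a lightly commented sketch of the function the post is dissecting, as it circulates in copies of the Quake III Arena source under the name Q_rsqrt.  The comments are my own glosses rather than the original’s, the small test at the end is just a sanity check, and I have swapped in int32_t for the original’s long (which assumed a 32-bit platform); I make no claim to understand why the magic constant works as well as it does.

#include <stdint.h>
#include <stdio.h>

/* A sketch of the fast inverse square root popularized by the Quake III
   source.  The original declared i as long (fine on the 32-bit machines it
   targeted); int32_t is used here so the sketch behaves the same way on
   modern 64-bit platforms.  Approximates 1/sqrt(number). */
float Q_rsqrt(float number)
{
    int32_t i;
    float x2, y;
    const float threehalfs = 1.5F;

    x2 = number * 0.5F;
    y  = number;
    i  = * ( int32_t * ) &y;                   /* reinterpret the float's bits as an integer */
    i  = 0x5f3759df - ( i >> 1 );              /* the "magic" line: shift right, subtract from the constant */
    y  = * ( float * ) &i;                     /* reinterpret those bits back as a float: a rough first guess */
    y  = y * ( threehalfs - ( x2 * y * y ) );  /* one Newton-Raphson iteration sharpens the guess */
    return y;
}

int main(void)
{
    printf("%f\n", Q_rsqrt(4.0f));  /* prints something close to 0.5 */
    return 0;
}

Even without following the bit manipulation, the structure the post describes is visible: an initial guess conjured out of the integer representation of the float, followed by a single round of Newton-Raphson refinement.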


However, there is another, less benign, definition of jargon that sees its primary function not in clarifying concepts but in obfuscating them.  According to Theodor Adorno, in his devastating and unrelenting attack on Heidegger in The Jargon of Authenticity, jargon is “a sublanguage as superior language.”  For Adorno, jargon, especially in Heidegger’s case, is an imposture and a con.  It is the chosen language of charlatans. Rudolf Carnap makes a similar, though less brutal, point in section 5 of his “Overcoming Metaphysics”, entitled “Metaphysical Pseudo-Sentences”, where he takes Heidegger’s notorious sentence from the lecture “What Is Metaphysics?”, “Das Nichts selbst nichtet” (“Nothingness itself nothings”), to task for its meaninglessness.


We might be tempted to try to save jargon from itself, then, by distinguishing two kinds of jargon: good jargon and bad jargon.  Drawing distinctions in order to clarify ideas is at least as old as the use of jargon itself, and goes back as far as, if not farther than, Pausanias’s distinction between the heavenly and the common Aphrodites in Plato’s Symposium.  With Pausanias we can say that the higher and the baser jargon can be distinguished, as he distinguishes two kinds of love, by the intent of the person using it.  When jargon is used to clarify ideas and make them precise, we are dealing with proper jargon.  When it is used, on the contrary, to obfuscate, or to make the speaker seem smarter than he really is, we are dealing with deficient or bad jargon.


There are various Virgin and the Whore problems with this distinction, however, not least of which is how to tell the two kinds of jargon apart.  It is in fact rather rare to find instances of bad jargon that everyone concedes are bad jargon, with the possible exception of hoaxes like the Sokal affair, in which the physicist Alan Sokal wrote a jargon-laden pseudo-paper about post-modernism and quantum mechanics and got it published in a cultural studies journal.  Normally, when certain instances of jargon are identified as “bad” jargon, we also tend to find defenders who insist that it is nothing of the sort and who claim that, on the contrary, those calling it bad jargon simply do not understand it.  This is a difficulty not unlike the one a wit described when asked to define bad taste.  “Bad taste,” he said, “is the garden gnome standing in my neighbor’s front lawn.”  When asked to define good taste, the wit continued, “Good taste is that plastic pink flamingo standing in my lawn.”


There are more difficulties with trying to distinguish good jargon from bad jargon, such as cases where good jargon becomes bad over time, or even cases where bad jargon becomes good.  Cases of the latter include Schopenhauer, who read a popular and apparently largely incorrect account of Indian philosophy and then absorbed it into his own very insightful and influential philosophical project.  Georges Bataille’s misreading of Hegel and Jacques Lacan’s misreading of Freud also bore impressive fruit.  Finally, there is the (probably apocryphal) story of the student of Italian who approached T.S. Eliot and began asking him about his peculiar and sometimes incorrect use of Italian in his poetry, until Eliot finally broke off the conversation with the admission, “Okay, you caught me.”  Cases such as these undermine the common belief that it is intent, or origins, which make a given jargon good or bad.


The opposite can, of course, also happen.  Useful jargon may, over time, become bad and obfuscating.  We might then say that while the terms used in Phenomenology proper are difficult but informative, they were corrupted when Heidegger took them up in his Existential-Phenomenology, or we might say that Heidegger’s jargon is useful but later philosophers influenced by his philosophy such as Derrida and the post-structuralists corrupted it, or finally we might even say that Derrida got it right but his epigones in America were the ones who ultimately turned his philosophical insights into mere jargon.  This phenomenon is what I take Martin Fowler to be referring to in his short bliki defense of the terms Web 2.0 and Agile entitled Semantic Diffusion.  According to Fowler:



Semantic diffusion occurs when you have a word that is coined by a person or group, often with a pretty good definition, but then gets spread through the wider community in a way that weakens that definition. This weakening risks losing the definition entirely – and with it any usefulness to the term.


Thus Fowler takes up Kathy Sierra’s defense of Web 2.0 as jargon, recognizes some weaknesses in this explanation, and then fortifies the defense of the term with the further explanation that while the term may be problematic now, it was right in its origins, and pure in its intent.


Fowler here makes a remarkably Heideggerian observation.  Heidegger was somewhat obsessed with language and believed that language tends, over time, to hide and obfuscate meaning when it should instead shed light on things.  In this vein, Being and Time begins with the claim that we no longer understand the meaning of Being, and that this forgetting is so thorough that we are no longer even aware of the absence of understanding, so that the question “What is Being?”, which should be the most important question for us, is for the most part ignored and overlooked.  To even begin understanding Being, then, we must first try to understand the meaning of the question of Being.  We must first come to the realization that there is a problem there in the first place which needs to be resolved.  Heidegger’s chosen solution involves the claim that while language conceals meaning, it is also, in its origins, able to reveal it, if we can come to understand language correctly.  He gives as an example the term aletheia, the Greek word for truth.  By getting back to the origins of language, and to the original experience of language, we can recover aletheia. Etymologically, aletheia means not-forgetting (the river Lethe is, in Greek mythology, the river of forgetfulness from which the dead drink in Hades), and so truth is implicitly an unconcealment, one that recovers the meanings implicit in language.  The authentic meaning of a piece of jargon, Fowler similarly claims, can be arrived at if we remove the accretions caused by “semantic diffusion” and get back to the original intent.


But is this true?  Do apologetics for terms such as “Web 2.0” and “Agile” insisting that they are “jargon” ultimately succeed?  Do such attempts reveal the original intent implicit in the coining of these terms or do they simply conceal the original meanings even further?


My personal opinion is that jargon, by its nature, never really reveals but always, in one way or another, conceals, by condensing thought and providing a shorthand for ideas.  It can of course be useful, but it can never be instructive; rather, it gives us a sense that we understand things we do not understand, simply because we know how to use a given jargon.  At best, jargon can be used as an indicator that points to a complex of ideas shared by a given community.  At worst, it is used as shorthand for bad or incoherent ideas that never themselves get critical treatment, because the jargon takes the place of the ideas and becomes mistaken for them.  This particularly seems to be the case with the defense of “Web 2.0” and “Agile” as “jargon”, as if people have a problem with the terms themselves rather than with what they stand for.  “Jargon”, as a technical term, is not particularly useful.  It is to some extent already corrupt from the get-go.


One way around this might be simply to stop using the term “jargon”, whether bad or good, when discussing things like Web 2.0 and Agile.  While it is common in English to use Latin-derived terms for technical language and Anglo-Saxon words for common discourse, in this case we might be obliged to make the reverse movement as we look for an adequate replacement for “jargon”.


In 2005, the Princeton philosopher Harry Frankfurt published a popular pamphlet called On Bullshit that attempts to give a philosophical explanation of the term. At first blush, the title may seem somewhat prejudicial, but I think that, as with jargon, if we set aside preconceived notions about whether the word names something good or bad, it will give us a fresh way of looking at the term we are currently trying to evaluate, “Web 2.0”.  It will also be most effective if we proceed in the opposite order from the one we followed with “jargon”: there, the word was first taken to describe “Web 2.0” appropriately, and only then was an attempt made to understand what jargon actually is.  Here, I want first to try to understand what bullshit is, and then see whether it applies to “Web 2.0”.


Frankfurt begins his analysis with a brief survey of the literature on bullshit, which includes Max Black’s study of “humbug” and Augustine of Hippo’s analysis of lying.  From these he concludes that bullshit and lying are different things and, as a preliminary conclusion, that bullshit falls just short of lying.  Moreover, he points out that bullshit is all-pervasive in a way that lying could never be.



The realms of advertising and of public relations, and the nowadays closely related realm of politics, are replete with instances of bullshit so unmitigated that they can serve among the most indisputable and classic paradigms of the concept.


Not satisfied with this preliminary explanation, however, Frankfurt identifies further elements that characterize bullshit, since there are many things that can fall short of a lie and yet, perhaps, not rise to the level of bullshit.  He then identifies inauthenticity as the hallmark that distinguishes bullshit from lies, on the one hand, and simple errors of fact, on the other.



For the essence of bullshit is not that it is false but that it is phony. In order to appreciate this distinction, one must recognize that a fake or a phony need not be in any respect (apart from authenticity itself) inferior to the real thing. What is not genuine need not also be defective in some other way. It may be, after all, an exact copy. What is wrong with a counterfeit is not what it is like, but how it was made.




It is not what a bullshitter says, then, that marks him as a bullshitter, but rather his state of mind when he says it.  For Frankfurt, bullshit doesn’t really even belong on the same continuum with truth and falsehood, but is rather opposed to both.  As with the Third Host in Dante’s Inferno, it is indifference to the struggle that ultimately identifies and marks out the class of bullshitters.


Again, there are echoes of Heidegger here.  According to Heidegger, we are all characterized by “thrownness”, which is the essence of our “Being-in-the-world”.  In our thrownness, we do not recognize ourselves as ourselves, but rather as das Man, or the they-self,



which we distinguish from the authentic Self – that is, from the Self which has been taken hold of in its own way [eigens ergriffenen]. As they-self, the particular Dasein has been dispersed into the ‘they’, and must first find itself.” And further: “If Dasein discovers the world in its own way [eigens] and brings it close, if it discloses to itself its own authentic Being, then this discovery of the ‘world’ and this disclosure of Dasein are always accomplished as a clearing-away of concealments and obscurities, as a breaking up of the disguises with which Dasein bars its own way.”


The main difference between Frankfurt’s and Heidegger’s analysis of authenticity, in this case, is that Frankfurt seems to take authenticity as normative, whereas Heidegger considers authenticity as the zero-point state of man when we are first thrown into the world.


For now, however, the difference isn’t all that important.  What is important is Frankfurt’s conclusion about the sources of bullshit.  At the end of his essay, Frankfurt in effect writes that there are two kinds of bullshit, one of which is defensible and one of which is not.  The indefensible kind is based on a subjectivist view of the world that denies truth and falsity altogether (and here I take Frankfurt to be making a not too veiled attack on the relativistic philosophical disciplines that build on Heidegger’s work).  The defensible form of bullshit (I hesitate to call it good bullshit) is grounded in the character of our working lives, which force us to work with and represent information that is by its nature too complex for us to digest and promulgate accurately.  This, I take it, is the circumstance academic lecturers and others frequently find themselves in as they stand behind the podium, obliged to talk authoritatively about subjects they do not feel up to giving a thorough, much less an authentic, account of.



Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about. Thus the production of bullshit is stimulated whenever a person’s obligations or opportunities to speak about some topic are more excessive than his knowledge of the facts that are relevant to that topic.




This class of speech is the result of our inability to apply Wittgenstein’s dictum, “Whereof one cannot speak, thereof one must be silent.”  There are times when we are not in a position to remain silent, and so are obligated to bullshit.  Bullshit, in these cases, is a way of making the best of our situation.


Per the original arrangement, it is now time to put “bullshit” to the test and see whether either cynical bullshit or benign bullshit can be ascribed to the term “Web 2.0”.  For better or worse, I am going to use Jeffrey Zeldman’s blog post on Web 2.0 (titled, confusingly enough, “Web 3.0”) as the main text for this analysis.  Zeldman is generally sympathetic to the ideas and phenomena that “Web 2.0” is meant to encompass, but he also points out the aspects of the term that grate.  The most salient of these is the degree to which it smells like a sales pitch.





It soon appeared that “Web 2.0” was not only bigger than the Apocalypse but also more profitable. Profitable, that is, for investors like the speaker. Yet the new gold rush must not be confused with the dot-com bubble of the 1990s:

“Web 1.0 was not disruptive. You understand? Web 2.0 is totally disruptive. You know what XML is? You’ve heard about well-formedness? Okay. So anyway—”

And on it ran, like a dentist’s drill in the Gulag.


Zeldman associates Web 2.0 with marketing, which Frankfurt in turn associates with bullshit.  Frankfurt even goes so far as to identify sales and its related disciplines as “the most indisputable and classic paradigms of the concept.”  Moreover, the defense that Web 2.0 describes a real phenomenon, as Fowler insists and Zeldman grants, does not make it not bullshit, since Frankfurt concedes that bullshit can just as well be true as false.  What matters is the authenticity or inauthenticity of the original claim, and the sense that something is a sales pitch is already an indication that something inauthentic is going on.  So “Web 2.0” certainly meets Frankfurt’s criteria for bullshit.


The more important question is what kind of bullshit it is.  Is it benign, or cynical?  According to Frankfurt’s distinction, again, the difference is whether the bullshit is grounded in the nature of one’s work or rather in some sort of defect of epistemic character.


Here the answer is not so simple, I think, since software has two strands, one going back to the hobbyist roots of programming, and the other to the monetizing potential of information technology.  Moreover, both strands struggle within the heart of the software engineering industry: the open source movement on the one hand (often cited as one key aspect of Web 2.0) is emblematic of the purist strain, while the advertising prospects on the other (with Google, often cited as a key exemplar of the Web 2.0 phenomenon, in the vanguard) are symbolic of the notion that a good idea isn’t enough; one also has to be able to sell one’s ideas.


Software programming, in its origins, is a discipline practiced by nerds.  In other words, it is esoteric knowledge, extremely powerful, practiced by a few and generally misunderstood by the majority of people.  As long as there is no desire to explain the discipline to outsiders, there is no problem with treating software programming as a hobby.  At some point, however, every nerd wants to be appreciated by people who are not his peers, and to accomplish this he is forced to explain himself and ultimately to sell himself.  The turning point is well documented: it occurred on February 3rd, 1976, when Bill Gates wrote an open letter to the hobbyist community stating that software had economic value and that it was time for people to start paying for it.


This was a moment of triumph for nerds everywhere, though it was not at first understood as such, and it still generates resentment to this day, because it irrevocably transformed the nature of software programming.  Once software was recognized as something of economic value, it also became clear that software concepts now had to be marketed.  The people who buy software are typically unable to distinguish good software from bad software, and so it becomes the responsibility of those who can to try to explain why their software is better in terms that are not, essentially, technical.  Instead, a hybrid, jargon-ridden set of terms had to be created in order to bridge the gap between software and the business appetite for software.  Software engineers, in turn, learned to see selling themselves, to consumers, to their managers, and finally to their peers, as part of the job of software engineering, though at the same time this forced obligation to sell themselves continues to be regarded with suspicion and resentment.  The hope held out to such people is that through software they will eventually be able to make enough money, as Bill Gates did, as Steve Jobs did, to finally give up the necessity of selling themselves and return to a pure hobbyist state of mind once again.  They in effect want to be both the virgin and the whore.  This is, of course, a pipe dream.


Consequently, trying to determine whether Web 2.0 is benign bullshit or cynical bullshit is difficult, since sales both is and is not an authentic aspect of the work of software engineering.  What seems to be the case is that Web 2.0 is a hybrid of benign and cynical bullshit.  This schizophrenic character is captured in the notion of Web 2.0 itself, which is at the same time a sales pitch as well as an umbrella term for a set of contemporary cultural phenomena.


Now that we know what bullshit is, and we know that Web 2.0 is bullshit, it is time to evaluate what Web 2.0 is.  In Tim O’Reilly’s original article introducing the notion of Web 2.0, called, appropriately, What Is Web 2.0, O’Reilly suggests several key points that he sees as typical of the sorts of things going on over the past year at companies such as Google, Flickr, YouTube and Wikipedia.  These observations include such slogans as “Harnessing Collective Intelligence”, “Data is the Next Intel Inside” and “End of the Software Release Cycle”.  But it is worth asking whether these really tell us what Web 2.0 is, or whether they are simply ad hoc attempts to give examples of what O’Reilly says is a common phenomenon.  When one asks for the meaning of a term such as Web 2.0, what one really wants is the original purpose behind coining the term: what is implicit in the term Web 2.0, as Heidegger would put it, yet at the same time concealed by the language typically used to explain it.


As Zeldman points out, one key (and I think the main key) to understanding Web 2.0 is that it isn’t Web 1.0.  The rise of the web was marked by a rise in bluff and marketing that created what we now look back on as the Internet Bubble.  The Internet Bubble, in turn, was a lot of marketing hype and a remarkable stream of jargon used to build up a technology that, in the end, could not sustain the amount of expectation with which it was overloaded.  By 2005, the bad reputation that had accrued to the Web from those earlier mistakes had generated a cynicism about the new things coming along that really were worthwhile, such as the blogging phenomenon, Ajax, Wikipedia, Google, Flickr and YouTube.  In order to overcome the cynicism, O’Reilly coined a term that successfully distracted people from the earlier debacle and helped to make the Internet a place to invest money once again.  Tim O’Reilly, even if his term is bullshit, as we have already demonstrated above, has ultimately done us all a service by clearing out all the previous bullshit.  In a very Heideggerian manner, he made a clearing [Lichtung] for the truth to appear.  He created, much as Heidegger attempted to do for Being, a conceptual space for new ideas about the Internet to make themselves apparent.


Or perhaps my analogy is overwrought. In any case, the question still remains as to what one does with terms that have outlived their usefulness.  In his introduction to Existentialism and Human Emotions, Jean-Paul Sartre describes the status the term “existentialism” had achieved by 1957.



Someone recently told me of a lady who, when she let slip a vulgar word in a moment of irritation, excused herself by saying, “I guess I’m becoming an existentialist.”



Most people who use the word would be rather embarrassed if they had to explain it, since, now that the word is all the rage, even the work of a musician or painter is being called existentialist.  A gossip columnist in Clartes signs himself The Existentialist, so that by this time the word has been so stretched and has taken on so broad a meaning, that it no longer means anything at all.




Sartre spent most of the rest of his philosophical career refining and defending the term “existentialism,” until it was finally superseded by post-structuralism in France. The term enjoyed a second life in America, until post-structuralism finally made the Atlantic crossing and superseded it there as well, only in turn to be treated first with skepticism, then with hostility, and finally as mere jargon.  It is only over time that an intellectual clearing can be made in which to re-examine these concepts.  In the meantime, taking a cue from Wittgenstein, we are obliged to remain silent about them.

Long Dark Night of the Compiler


In his book on the development of the C++ language, The Design and Evolution of C++, Bjarne Stroustrup says that in creating C++ he was influenced by the writings of Søren Kierkegaard.  He goes into some detail about it in this recent interview:





A lot of thinking about software development is focused on the group, the team, the company. This is often done to the point where the individual is completely submerged in corporate “culture” with no outlet for unique talents and skills. Corporate practices can be directly hostile to individuals with exceptional skills and initiative in technical matters. I consider such management of technical people cruel and wasteful. Kierkegaard was a strong proponent for the individual against “the crowd” and has some serious discussion of the importance of aesthetics and ethical behavior. I couldn’t point to a specific language feature and say, “See, there’s the influence of the nineteenth-century philosopher,” but he is one of the roots of my reluctance to eliminate “expert level” features, to abolish “misuses,” and to limit features to support only uses that I know to be useful. I’m not particularly fond of Kierkegaard’s religious philosophy, though.




Stroustrup is likely referring to philosophical observations such as this:





Truth always rests with the minority, and the minority is always stronger than the majority, because the minority is generally formed by those who really have an opinion, while the strength of a majority is illusory, formed by the gangs who have no opinion–and who, therefore, in the next instant (when it is evident that the minority is the stronger) assume its opinion . . . while Truth again reverts to a new minority.

— Søren Kierkegaard



Coincidentally, Kierkegaard and Pascal are often cited as the fathers of modern existentialism, and where Kierkegaard appears to have influenced the development of C++, Pascal’s name lives on in the Pascal programming language as well as in “Pascal case,” the identifier-capitalization convention (InvSqrt rather than invSqrt) used as a stylistic device in most modern languages.  The Pascal language, in turn, was contemporary with the C language, which was the syntactic precursor to C++.


So just as the Catholic Church holds that guardian angels guide and watch over individuals, cities and nations, might it not also be the case that specific philosophers watch over different programming languages?  Perhaps a pragmatic philosopher like C. S. Peirce would watch over Visual Basic.  A philosopher fond of architectonics, like Kant, would watch over Eiffel.  John Dewey could watch over Java, while Hegel, naturally, would watch over Ruby.