Speaking at DevLink

Our far-flung correspondent from self-promotion land writes:

I received an invitation this past week to speak at DevLink.  I will be presenting on two topics:

  • C# 4 Dynamic Objects
  • Advanced Windows Phone 7

The C# 4 Dynamic Objects session will be a longer version of the talk I gave at the MVP Summit in February.  The Advanced Windows Phone 7 talk is one I find I am updating every few weeks as more tools and information about the platform become available.
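
For a flavor of the first topic, here is a minimal sketch of the kind of thing C# 4’s dynamic dispatch makes possible — a DynamicObject whose members are resolved at runtime rather than at compile time.  The Bag class and its members are illustrative names of my own, not material from the talk:

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;

// A minimal dynamic property bag: member lookups are routed
// through the DLR to these overrides at runtime.
class Bag : DynamicObject
{
    private readonly Dictionary<string, object> _values = new Dictionary<string, object>();

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return _values.TryGetValue(binder.Name, out result);
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        _values[binder.Name] = value;
        return true;
    }
}

class Program
{
    static void Main()
    {
        dynamic bag = new Bag();
        bag.Greeting = "Hello from C# 4";  // no compile-time member check
        Console.WriteLine(bag.Greeting);   // prints "Hello from C# 4"
    }
}
```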

DevLink is a three-day conference being held in Nashville, August 5-7.

My Dance Card is Full


I’m overcommitted.  I recently changed jobs, moving from Magenic Technologies, a software consulting firm, to Sagepath, a creative agency.  I knew the new job would be great when they paid my way to MIX10 my second week at work.

On top of that I submitted talks to several regional conferences and the voting has begun to select speakers.

For CodeStock I have submitted three presentations.

For the DevLink conference I also have three talks which are currently doing well in community voting.  This is a blind vote, though, so I won’t list the sessions I have submitted.

With Ambrose Little and Brady Bowman, I have started a reading group on Umberto Eco’s A Theory of Semiotics called ‘Semiotics and Technology’ and hosted on Google Groups.  We have just finished the first chapter and are beginning our discussion of the Introduction.

In addition, I have been working with several members of the Atlanta developer community to organize ReMIX Atlanta.  The goal of ReMIX is to have a different kind of conference – one that caters to both the developer community and the design-UX community.  These are traditionally two communities that do not necessarily get along – and yet they must if we are ever to get beyond the current proliferation of unusable applications and non-functional websites.  To quote Immanuel Kant:

“Thoughts without content are empty, intuitions without concepts are blind.”

I also need to stain the deck and build a website for my wife – and in the time left over I plan to build the next killer application for the Windows Phone 7 Series app store.  This time next year I will be blogging to you from a beach in Tahiti while reclining on a chair made from my Windows Phone app riches.

In the meantime, however, my dance card is full.

Open Spaces and the Public Sphere


While watching C-SPAN’s coverage of the public Congressional debate over healthcare (very entertaining if not instructive), I found that a friend has been writing about Habermas’s concept of the ‘public sphere’ on his blog, Slawkenbergius’s Tales.

To be more precise, he elucidates the mistranslation of ‘Öffentlichkeit’ in Habermas’s 1962 Habilitationsschrift, Strukturwandel der Öffentlichkeit, into English as ‘public sphere’.  The term, as used by Heidegger, was often translated into English as ‘publicity’ or ‘publicness’.

“The actually important text was Thomas Burger’s 1989 translation of the book, The Structural Transformation of the Public Sphere. (As my advisor pointed out to me, the highly misleading rendering of the abstract noun Öffentlichkeit as the slightly less abstract but more "spatialized" public sphere may have been the source of all the present trouble.) Because the translation arrived at a point in time when issues of space, popular culture, material culture, and print media were at the forefront of historiographical innovation, the spatialized rendering fit very nicely with just about everyone’s research project. That’s when the hegemony of the "public sphere" began.”

The mention of ‘space’ in Slawkenbergius’s discourse (he’ll wince at that word – read his blog entry to find out why) reminded me of the origins of the Open Space movement.

For those unfamiliar with it, an open space is a way of loosely organizing meetings that is currently popular at software conferences and user groups.  The tenets of an open space follow, via Wikipedia:

  • Whoever comes is the right people [sic]: this alerts the participants that attendees of a session class as "right" simply because they care to attend
  • Whatever happens is the only thing that could have: this tells the attendees to pay attention to events of the moment, instead of worrying about what could possibly happen
  • Whenever it starts is the right time: clarifies the lack of any given schedule or structure and emphasizes creativity and innovation
  • When it’s over, it’s over: encourages the participants not to waste time, but to move on to something else when the fruitful discussion ends
  • The Law of Two Feet: If at any time during our time together you find yourself in any situation where you are neither learning nor contributing, use your two feet. Go to some other place where you may learn and contribute.

The open spaces I’ve encountered at software conferences have tended to be one man on a soapbox or several men staring at their navels.  But I digress.

The notion of an Open Space was formulated by Harrison Owen in the early 1980s (about the same time Habermas was achieving recognition in America) as a way to recreate the water-cooler conversation.  It is intended to be a space where people come without agendas simply to talk.  The goal of an open space, in its barest form, is to create an atmosphere where ‘talk’, of whatever sort, is generated.

For me, this has an affinity with Jürgen Habermas’s notion of an ‘Ideal Speech Situation’, which is an idealized community where everyone comes together in a democratic manner and simply talks in order to come to agreement by consensus about the ‘Truth’ – with the postmodern correction that ‘Truth’ is not a metaphysical concept but merely this – a consensus.

This should come with a warning, however, since in Heidegger’s use of Öffentlichkeit, public sphere | open space | publicness is not a good thing.  Publicness is characteristic of a false way of being that turns each of us (Dasein) into a sort of ‘they’ (Das Man) – the ‘they’ we talk about when we say “they say x” or “they are doing it this way this year.”  According to Heidegger – and take this with a grain of salt since he was a Nazi for a time, after all, and was himself rather untrustworthy – in Being and Time:

The ‘they’ has its own ways in which to be …

Thus the particular Dasein in its everydayness is disburdened by the ‘they’.  Not only that; by thus disburdening it of its Being, the ‘they’ accommodates Dasein … if Dasein has any tendency to take things easily and make them easy.  And because the ‘they’ constantly accommodates the particular Dasein by disburdening it of its Being, the ‘they’ retains and enhances its stubborn dominion.

Everyone is the other, and no one is himself.

October 2009: The Month that Was


At the Atlanta Leading Edge Microsoft User Group (ALEMUG), we typically set aside some time at the beginning of each meeting to discuss the hot topics related to software development – with a particular slant toward the Microsoft world – that have come up in the previous month.

The web of cross-conversations on blogs, YouTube videos, and software announcements makes up and propels the culture of the software industry.  To be a software developer, to some degree, means being current on these ephemeral Internet happenings.  The purpose of the ten minutes we set aside at the ALEMUG meetings to discuss them is simply to make sure everyone is caught up on current events, so to speak, so that we have a common vocabulary when discussing technology and software methodologies.  After all, communication is the most difficult thing about developing software.  Many of us know how to get things done, but the hard part – explaining why we do things the way we do and sharing our technical knowledge with others – is elusive.  Programming knowledge is always fragmentary, at best, and trying to bring it all together through best practices and even some historical perspective is a constant struggle.

These monthly wrap-ups also serve as a time capsule, however.  A peculiarity of working on the cutting edge of technology is that there is very little awareness of the passing of time.  Software development usually occurs in a bubble of hyper-focus that inevitably destroys our sense of time.  For instance, how long has WPF been around?  How long has Twitter been around?  On a resume, what is the longest amount of time a developer can legitimately claim to have worked with .NET?

With the goal of restoring the sense of the flow of time – what Kant called inner sense – here is a list of matters momentous and trivial to the software industry in the middling period between September and October 2009:

A renewed debate between Morts and Architect Astronauts was started by Joel Spolsky:

http://www.joelonsoftware.com/items/2009/09/23.html

http://jeffreypalermo.com/blog/debunking-the-duct-tape-programmer/

http://martinfowler.com/bliki/TechnicalDebtQuadrant.html

This was mirrored by a similar sort of debate concerning software methodologies started by Ted Neward:

http://blogs.tedneward.com/2009/10/12/quotAgile+Is+Treating+The+Symptoms+Not+The+Diseasequot.aspx

http://haacked.com/archive/2009/10/13/software-externalities.aspx

Microsoft started a new series of ads for its operating systems, Windows 7 and Windows Mobile 6.5, that did not quite hit their mark:

http://www.bing.com/videos/search?q=microsoft+launch+party&docid=1316730503766&mid=893DA2AD882B75E7525B893DA2AD882B75E7525B&FORM=VIVR7#

http://www.youtube.com/watch?v=mUotyelWmFE&feature=player_embedded

Meanwhile, the Gartner Group weighed in on Windows 7:

http://blogs.zdnet.com/microsoft/?p=4227

In hardware, solid-state drives got the seal of approval from Jeff Atwood, while Barnes & Noble finally came out with their alternative to Amazon’s Kindle:

http://www.codinghorror.com/blog/archives/001304.html

http://gizmodo.com/5380942/barnes-and-nobles-e+reader-like-a-kindleiphone-chimera-first-photos-and-details

Interesting new software and services were released, including a tool for writing iPhone apps using C#, Google Wave (does anyone have an invitation they can send me?), and Yahoo’s alternative to Twitter:

http://monotouch.net/

http://wave.google.com/help/wave/closed.html

http://meme.yahoo.com/home/

An indication that the cold war between Microsoft and Google is beginning to heat up:

http://msmvps.com/blogs/jon_skeet/archive/2009/10/01/mvp-no-more.aspx

And some insights into the world of publishing software books:

http://beginningruby.org/what-ive-earned-and-learned/

All rational people will agree …


In the Critique of Practical Reason, Immanuel Kant argues that moral actions are the actions of an autonomous will, and that an autonomous will is a will that is determined by reason alone.  Moral actions, additionally, follow the form of being universalizable; hence Kant finds in rationality a confirmation of the Golden Rule: when you do unto others as you would have them do unto you, you are behaving in a manner that can be universalized for all mankind.

In the years following Kant’s death, the notion that all rational people, when they are being rational, will consistently agree with certain propositions has fallen somewhat into disfavor.  This has reached the point that Habermas, perhaps one of the last post-modern defenders of some form of Kantian morality, argues that while it was presumptuous for Kant to make claims about what rationality looks like, we should still work towards some sort of ideal speech community which removes all biases and personal ambitions from the process of communication; this would de facto be the closest thing we can get to our ideal of pure reason, at which point we let them tell us what all rational people ought to believe.

And so you will not often see the phrase ‘all rational people agree’ in print these days, though back in the day it was a commonplace.  Still, old habits die hard, and we often find the phrase ‘most people agree’ in its place, with the implication that the “most people” we are talking about are the “rational people.”

I was recently trying to find a good article on Dependency Injection for a colleague and came across this one on MSDN, which unfortunately begins:

“Few would disagree that striving for a loosely coupled design is a bad thing.”

I assume that the author originally intended to say “Most would agree that striving for a loosely coupled design is a good thing.”  He then attempted to negate the statement for emphasis, but managed to over-negate, and failed to see that “few would disagree” is the equivalent of “many would agree”, rather than its opposite.

In fact someone mentioned this to the author on his personal blog, but the author averred that this was just a Canadianism and not actually a mistake.
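
Grammar aside, the design the article advocates is easy to illustrate.  Here is a minimal, hypothetical C# sketch of constructor injection — the names are all mine, not the article’s — in which the consumer depends on an abstraction rather than on a concrete class:

```csharp
using System;

// The consumer depends on this abstraction, not on any concrete sender.
interface IMessageSender
{
    void Send(string message);
}

class EmailSender : IMessageSender
{
    public void Send(string message)
    {
        Console.WriteLine("Email: " + message);
    }
}

class Notifier
{
    private readonly IMessageSender _sender;

    // Constructor injection: the dependency is supplied from outside,
    // so Notifier can be wired to any IMessageSender, including a test double.
    public Notifier(IMessageSender sender)
    {
        _sender = sender;
    }

    public void NotifyUser()
    {
        _sender.Send("Your order has shipped.");
    }
}

class Program
{
    static void Main()
    {
        var notifier = new Notifier(new EmailSender());
        notifier.NotifyUser();
    }
}
```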

Be that as it may, it recalls a problem with logical quantifiers and natural language.  In the classic square of opposition (which sounds like a perverse political system but is simply a diagram that explains the complex relationship between universal propositions and existential ones) we are reminded that while universal affirmative propositions are contrary to universal negations, they are not contradictory.

If an A proposition, for instance ‘All rational people agree that X’, is true, then the contrary E proposition, ‘No rational people agree that X’, must necessarily be false, and vice-versa.  However, these are not contradictory propositions, since both can actually be false.

The contradictory of A is the existential proposition O, ‘Some rational people don’t agree that X’, just as the contradictory of E is I, ‘Some rational people agree that X.’  If A is false, then O is necessarily true; if O is false, then A is true; and either A or O must be true.
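
For those who like their quantifiers explicit, the four traditional forms come out roughly as follows in first-order notation, reading R(x) as ‘x is a rational person’ and G(x) as ‘x agrees that X’ (a standard modern rendering; note that A and E are genuine contraries only under the traditional assumption that at least one rational person exists):

```latex
\begin{aligned}
\textbf{A:}\quad & \forall x\,(R(x) \rightarrow G(x))      && \text{All rational people agree that X} \\
\textbf{E:}\quad & \forall x\,(R(x) \rightarrow \neg G(x)) && \text{No rational people agree that X} \\
\textbf{I:}\quad & \exists x\,(R(x) \wedge G(x))           && \text{Some rational people agree that X} \\
\textbf{O:}\quad & \exists x\,(R(x) \wedge \neg G(x))      && \text{Some rational people don't agree that X}
\end{aligned}
\qquad A \equiv \neg O, \quad E \equiv \neg I
```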

What cannot be easily resolved into logical formalism are the grammatical quantifiers ‘many’ and ‘few’.  Are ‘many’ and ‘few’ contradictory or merely contrary (or, God forbid, even subcontrary)?

What we can all agree on is the fact that English idioms are sometimes difficult to work with, and that this is particularly true when we attempt to formulate complex relationships between quantifiers.  Take, for instance, Abraham Lincoln’s famous epigram (which he possibly never uttered):

You can fool all of the people some of the time, and you can fool some of the people all of the time, but you can’t fool all of the people all the time.

I actually have trouble understanding the logic of this argument, and take exception to the first and second premises for the same reason that I take exception to statements about what all rational people agree on.  However, I find myself so in agreement with the conclusion, as most people do, that I tend to overlook the manner in which we arrive at it.
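
Part of the trouble is quantifier scope: each clause of the epigram admits more than one reading.  Writing F(p, t) for ‘person p is fooled at time t’, one plausible formalization runs:

```latex
\begin{aligned}
\text{fool all of the people some of the time:} \quad & \forall p\,\exists t\,F(p,t)
  \quad (\text{or the stronger } \exists t\,\forall p\,F(p,t)) \\
\text{fool some of the people all of the time:} \quad & \exists p\,\forall t\,F(p,t) \\
\text{can't fool all of the people all of the time:} \quad & \neg\,\forall p\,\forall t\,F(p,t)
\end{aligned}
```

On any of these readings the conclusion is an independent claim rather than a consequence of the first two premises, which may be why the epigram persuades without quite arguing.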

Or take this one, from Bilbo’s farewell speech in Tolkien’s The Fellowship of the Ring:

I don’t know half of you half as well as I should like; and I like less than half of you half as well as you deserve.

It simply cannot be formalized into first-order logic.  Yet, remarkably, we manage to understand it — or at least most of us do.

In the movie Shrek the Third, there is this exchange, which plays on the difficulties of negative propositions (dialogue brought to you by IMDb):

Prince Charming: You! You can’t lie! So tell me puppet… where… is… Shrek?
Pinocchio: Uh. Hmm, well, uh, I don’t know where he’s not.
Prince Charming: You’re telling me you don’t know where Shrek is?
Pinocchio: It wouldn’t be inaccurate to assume that I couldn’t exactly not say that it is or isn’t almost partially incorrect.
Prince Charming: So you do know where he is!
Pinocchio: On the contrary. I’m possibly more or less not definitely rejecting the idea that in no way with any amount of uncertainty that I undeniably…
Prince Charming: Stop it!
Pinocchio: …do or do not know where he shouldn’t probably be, if that indeed wasn’t where he isn’t. Even if he wasn’t at where I knew he was…
[Pigs and Gingerbread Man begin singing]
Pinocchio: That’d mean I’d really have to know where he wasn’t.

In California, for safety reasons, the following law is apparently on the books:

The law says that nothing “shall be so designed and installed that it cannot, even in cases of failure, impede or prevent emergency use of such exit.”

H. L. Mencken provides several canonical examples of double negation in his book The American Language in order to demonstrate that it was once more common and better accepted, especially when English was closer to its inflected roots.  Thus Chaucer writes in The Knight’s Tale:

He nevere yet no vileynye ne sayde
In al his lyf unto no maner wight.

Shakespeare was no slouch, neither, and Mencken cites several examples:

In “Richard III” one finds “I never was nor never will be”; in “Measure for Measure,” “harp not on that nor do not banish treason,” and in “Romeo and Juliet,” “thou expectedst not, nor I looked not for.”

At the same time, Shakespeare was also a genius at presenting universal negation in a manner fit to please Hegel when the Prince of Denmark soliloquizes:

To be, or not to be

while in As You Like It, in turn, the Bard’s use of universal affirmation is so fitting that any reasonable person would acknowledge it:

All the world’s a stage,
And all the men and women merely players:
They have their exits and their entrances;

The Quiet Mind


Like an Empiricist prescription or an occult warning, depending on how you take it, Wittgenstein wrote as a coda to the Tractatus Logico-Philosophicus, Wovon man nicht sprechen kann, darüber muß man schweigen.  C. K. Ogden translates this as “Whereof one cannot speak, thereof one must be silent.”

I have spent the past week trying to learn how to be silent, again.  I unplugged myself from the Internet and went to the beach with my family, where I spent several days trying to silence the various voices that constitute a perpetual background cacophony in my head.  The ocean swell helped me to accomplish this quiescence of the noetical madding crowd, until finally there was nothing left but stillness in my brain and the inbreath and outbreath of the sea as it filled up the new tide pools of my mind, and a gasp escaped my lips and traveled over the waters as I realized that this really was a vacation, at last, and it wasn’t what I had expected.

This sense of quietude is what I have always taken Heidegger to be referring to when he discusses Lichtung — the clearing — in his writings, for instance in Being and Time, where he writes:

“In our analysis of understanding and of the disclosedness of the “there” in general, we have alluded to the lumen naturale, and designated the disclosedness of Being-in as Dasein’s “clearing”, in which it first becomes possible to have something like sight.” — tr. Macquarrie & Robinson

Except that for me, sight is a placeholder for my inner monologue, and the clearing is a place to rediscover my inner voice.  We are all social animals, after all, and when we are with other people our inner voices become drowned out by the various social pressures that sweep us along, whether in politics, or at work, or on the Internet, the biggest stream of voices available.  My general strategy in life is to fill my mind with so many voices that they eventually begin to cancel each other out, so that my rather weak voice can have some influence within my own head.  But this doesn’t always work, and I eventually need to detox in a quiet place.

Heidegger uses a clearing in the woods as his metaphor for this place, but I think the ocean serves the purpose even better.  The ocean is a natural source of white noise, and the way white noise affects human beings is peculiar.  According to some studies, the appeal of white noise seems to be specific to primates, and to humans in particular.  According to the Aquatic Ape theory, human evolution is intimately tied to the coast, and this might explain, in a hand-waving sort of way, our affinity for the ocean and the sounds of the ocean.  It is the music that calms the inner beast.

The inner monologue is a peculiar, though pervasive, phenomenon.  An interesting observation concerning it occurs in Augustine’s Confessions.  Augustine expresses amazement at the fact that his mentor, Ambrose, is able to read without moving his lips.  This gives us a strange impression of the Roman world — apparently it has not always been the case that people read silently in an ALA-approved manner.  This in turn has led various philosophers to wonder if the inner monologue existed for these Romans, or if they simply articulated everything they thought.

For Derrida, this became a motif for his philosophical studies.  In an early work, Speech and Phenomena, Derrida tries to find the source of Husserl’s phenomenological insight in the Logical Investigations, and concludes that it is due to a basic confusion between observation and speech.  Because in speech we are capable of this inner monologue, Husserl, according to Derrida, made the analogous assumption that we can exist, in some peculiar way, without the world.

“For it is not in the sonorous substance or in the physical voice, in the body of speech in the world, that he [Husserl] will recognize an original affinity with the logos in general, but in the voice phenomenologically taken, speech in its transcendental flesh, in the breath, the intentional animation that transforms the body of the word into flesh, makes of the Körper a Leib, a geistige Leiblichkeit.” — tr. David B. Allison

Just as Derrida saw in the Logical Investigations the germ of the entire Husserlian project, the Husserlian David Carr used to tell us that the germ of Derrida’s project could be found in this brief passage.  Taking the problem of authorial intent to a philosophical level, Derrida wants to cast doubt on the meaning of the inner voice, and its privileged status as the arbiter of the meaning of its utterances.  It is a sort of Neo-Empirical game that resembles the attacks often made by material-reductionists on the folk-psychology of consciousness, which has at its core the notion that for the most part we all know what we are talking about when we talk about something.  Instead, the inner voice is a sort of illusion to be dispelled, like witchcraft and theology.

And yet I can’t help but feel that there is something to the inner voice.  For instance, what was George Bush thinking about when he was first notified about the 9/11 attacks?  What was Bill Clinton thinking and intending in that fateful pause between the phrases “woman” and “Monica Lewinsky” that changed the meaning of his statement (skip to the end for the good bit)?

Silence isn’t simply a turning off of the mind.  When the lights go out, we may stop seeing things, but when all noise is shut out, we continue to hear ourselves, and it is perhaps the best time to hear ourselves and in the act recollect ourselves.

I leave you with John Cage’s composition 4′33″, which is pregnant with the composer’s intent, as well as the performer’s in this unique rendition.

Authority as Anti-Pattern


There has been a recent spate of posts about authority in the world of software development, with some prominent software bloggers denying that they are authorities.  They prefer to be thought of as intense amateurs.

I worked backwards to this problematic of authority starting with Jesse Liberty.  Liberty writes reference books on C# and ASP.NET, so he must be an authority, right?  And if he’s not an authority, why should I read his books?  This led to Scott Hanselman, to Alastair Rankine and finally to Jeff Atwood at CodingHorror.com.

The story, so far, goes like this.  Alastair Rankine posts that Jeff Atwood has jumped the shark on his blog by setting himself up as some sort of authority.  Atwood denies that he is any sort of authority, and tries to cling to his amateur status like a Soviet-era Olympic pole vaulter.  Scott Hanselman chimes in to insist that he is also merely an amateur, and Jesse Liberty (who is currently repackaging himself from C# guru to Silverlight guru) does an h/t to Hanselman’s post.  Hanselman also channels Martin Fowler, saying that he is sure Fowler would also claim amateur status.

Why all this suspicion of authority?

The plot thickens: Jeff Atwood’s apologia, when Rankine accused him of acting like an authority, is that indeed he is merely "acting".

    "It troubles me greatly to hear that people see me as an expert or an authority…

    "I suppose it’s also an issue of personal style. To me, writing without a strong voice, writing filled with second guessing and disclaimers, is tedious and difficult to slog through. I go out of my way to write in a strong voice because it’s more effective. But whenever I post in a strong voice, it is also an implied invitation to a discussion, a discussion where I often change my opinion and invariably learn a great deal about the topic at hand. I believe in the principle of strong opinions, weakly held…"

    To sum up, Atwood isn’t a real authority, but he plays one on the Internet.

Here’s the flip side to all of this.  Liberty, Hanselman, Atwood, Fowler, et al. have made great contributions to software programming.  They write good stuff, not only in the sense of being entertaining, but also in the sense that they shape the software development "community" and how software developers — from architects down to lowly code monkeys — think about coding and think about the correct way to code.  In any other profession, this is the very definition of "authority".

In literary theory, this is known as authorial angst.  It occurs when an author doesn’t believe in his own project.  He does what he can, and throws it out to the world.  If his work achieves success, he is glad for it, but takes it as a chance windfall, rather than any sort of validation of his own talents.  Ultimately, success is a bit perplexing, since there are so many better authors who never achieved success in their own times, like Céline or Melville.

One of my favorite examples of this occurs early in Jean-François Lyotard’s The Postmodern Condition, in which he writes that he knows the book will be very successful, if only because of the title and his reputation, but …  The most famous declaration of authorial angst is found in Mark Twain’s notice inserted into The Adventures of Huckleberry Finn:

"Persons attempting to find a motive in this narrative will be prosecuted; persons attempting to find a moral in it will be banished; persons attempting to find a plot in it will be shot."

In Jeff Atwood’s case, the authorial angst seems to take the following form: Jeff may talk like an authority, and you may take him for an authority, but he does not consider himself one.  If treating him like an authority helps you, then that’s all well and good.  And if it raises money for him, then that’s all well and good, too.  But don’t use his perceived authority as a way to impugn his character or to discredit him.  He never claimed to be one.  Other people are doing that.

[The French existentialists are responsible for translating Heidegger’s term angst as ennui, by the way, which has a rather different connotation (N is for Neville who died of ennui).  In a French translation class I took in college, we were obliged to try to translate ennui, which I did rather imprecisely as "boredom".  A fellow student translated it as "angst", for which the seminar tutor accused her of tossing the task of translation over the Maginot line.  We finally determined that the term is untranslatable.  Good times.]

The problem these authorities have with authority may be due to the fact that authority is a role.  In Alasdair MacIntyre’s After Virtue, a powerful critique of what he considers to be the predominant ethical philosophy of modern times, Emotivism, MacIntyre argues that the main characters (in Shaftesbury’s sense) of modernity are the Aesthete, the Manager and the Therapist.  The aesthete replaces morals as an end with a love of patterns as an end.  The manager eschews morals for competence.  The therapist overcomes morals by validating our choices, whatever they may be.  These characters are made possible by the notion of expertise, which MacIntyre claims is a relatively modern invention.

"Private corporations similarly justify their activities by referring to their possession of similar resources of competence.  Expertise becomes a commodity for which rival state agencies and rival private corporations compete.  Civil servants and managers alike justify themselves and their claims to authority, power and money by invoking their own competence as scientific managers of social change.  Thus there emerges an ideology which finds its classical form of expression in a pre-existing sociological theory, Weber’s theory of bureaucracy."

To become an authority, one must begin behaving like an authority.  Some tech authors, such as Jeffrey Richter and Juval Lowy, actually do this very well.  But sacrifices have to be made in order to be an authority, and it may be that this is what the anti-authoritarians of the tech world are rebelling against.  When one becomes an authority, one must begin to behave differently.  One is expected to have a certain realm of competence, and when one acts authoritatively, one imparts this sense of confidence to others: to developers, as well as the managers who must oversee developers and justify their activities to upper management.

Upper management is always already a bit suspicious of the software craft.  They tolerate certain behaviors in their IT staff based on the assumption that they can get things done, and every time a software project fails, they justifiably feel like they are being hoodwinked.  How would they feel about this trust relationship if they found out that one of the figures their developers hold up as an authority is writing this:

"None of us (in software) really knows what we’re doing. Buildings have been built for thousands of years and software has been an art/science for um, significantly less (yes, math has been around longer, but you know.) We just know what’s worked for us in the past."

This resistance to putting on the role of authority is understandable.  Once one puts on the hoary robes required of an authority figure, one can no longer be oneself anymore, or at least not the self one was before.  Patrick O’Brian describes this emotion perfectly as he has Jack Aubrey take command of his first ship in Master and Commander.

"As he rowed back to the shore, pulled by his own boat’s crew in white duck and straw hats with Sophie embroidered on the ribbon, a solemn midshipman silent beside him in the sternsheets, he realized the nature of this feeling.  He was no longer one of ‘us’: he was ‘they’.  Indeed, he was the immediately-present incarnation of ‘them’.  In his tour of the brig he had been surrounded with deference — a respect different in kind from that accorded to a lieutenant, different in kind from that accorded to a fellow human being: it had surrounded him like a glass bell, quite shutting him off from the ship’s company; and on his leaving the Sophie had let out a quiet sigh of relief, the sigh he knew so well: ‘Jehovah is no longer with us.’

"‘It is the price that has to be paid,’ he reflected."

It is the price to be paid not only in the Royal Navy during the age of wood and canvas, but also in established modern professions such as architecture and medicine.  All doctors wince at recalling the first time they were called "doctor" while they interned.  They do not feel they have the right to wear the title, much less be consulted over a patient’s welfare.  They feel intensely that this is a bit of a sham, and the feeling never completely leaves them.  Throughout their careers, they are asked to make judgments that affect the health, and often even the lives, of their patients — all the time knowing that theirs is a human profession, and that mistakes get made.  Every doctor bears the burden of eventually killing a patient due to a bad diagnosis or a bad prescription or simply through lack of judgment.  Yet bear it they must, because gaining the confidence of the patient is also essential to the patient’s welfare, and the world would likely be a sorrier place if people didn’t trust doctors.

So here’s one possible analysis: the authorities of the software engineering profession need to man up and simply be authorities.  Of course there is bad faith involved in doing so.  Of course there will be criticism that they are frauds.  Of course they will be obliged to give up some of the ways they relate to fellow developers once they do so.  This is true in every profession.  At the same time, every profession needs its authorities.  Authority holds a profession together, and it is what distinguishes a profession from mere labor.  The gravitational center of any profession is the notion that there are ways things are done, and there are people who know what those ways are.  Without this perception, any profession will fall apart, and we will indeed be merely playaz taking advantage of middle management and making promises we cannot fulfill.  Expertise, ironically, explains and justifies our failures, because we are able to interpret failure as a lack of this expertise.  We then drive ourselves to be better.  Without the perception that there are authorities out there, muddling and mediocrity become the norm, and we begin to believe that not only can we not do better, but we aren’t even expected to.

This is a traditionalist analysis.  I have another possibility, however, which can only be confirmed through the passage of time.  Perhaps the anti-authoritarian impulse of these crypto-authorities is a revolutionary legacy of the soixante-huitards.  From Guy Sorman’s essay about May ’68, whose fortieth anniversary passed unnoticed:

"What did it mean to be 20 in May ’68? First and foremost, it meant rejecting all forms of authority—teachers, parents, bosses, those who governed, the older generation. Apart from a few personal targets—General Charles de Gaulle and the pope—we directed our recriminations against the abstract principle of authority and those who legitimized it. Political parties, the state (personified by the grandfatherly figure of de Gaulle), the army, the unions, the church, the university: all were put in the dock."

Just because things have been done one way in the past doesn’t mean this is the only way.  Just because authority and professionalism are intertwined in every other profession, and perhaps can no longer be unraveled at this point, doesn’t mean we can’t try to do things differently in a young profession like software engineering.  Is it possible to build a profession around a sense of community, rather than the restraint of authority?

I once read a book of anecdotes about the ’60s, one of which recounts a dispute between two groups of people in the inner city.  The argument is about to come to blows when someone suggests calling the police.  This sobers everyone up, and with cries of "No pigs, no pigs" the disputants resolve their differences amicably.  The spirit that inspired this scene, this spirit of authority as anti-pattern, is no longer so ubiquitous, and one cannot really imagine civil disputes being resolved in such a way anymore.  Still, the notion of a community without authority figures is a seductive one, and it may even be doable within a well-educated community such as the web-based world of software developers.  Perhaps it is worth trying.  The only thing that concerns me is how we are to maintain the confidence of management as we run our social experiment.