I Can Haz the Unconscious?

pet therapy

The scientific method is one of the great wonders of deliberative thought.  It isn’t just our miraculous modern world that is built upon it, but also our confidence in rationality in general.  It is for this reason that we are offended on a visceral level by climate change deniers, creationists, birthers, conspiracy theorists and the constant string of yahoos who pop up using the trappings of rationality to deny the results of the scientific method and basic common sense.

It is so much worse, however, when the challenge to the scientific method comes from within.  Dr. Yoshitaka Fujii has been unmasked as perhaps one of the greatest purveyors of made-up data in scientific experimentation, and while the peer review process seems to have finally caught him out, he still had a nearly 20-year run and some 200 journal articles credited to him.  Diederik Stapel is another prominent scientific fraudster whose activities put run-of-the-mill journalistic fraudsters like Jayson Blair to shame.

Need we even bring up the demotion of Pluto, the proposed removal of narcissistic personality disorder as a diagnosis in the DSM V (narcissists were sure this was an intentional slight against them in particular), or the little-known difficulty of predicting Italian earthquakes (seven members of the National Commission for the Forecast and Prevention of Major Risks in Italy were convicted of manslaughter for not forecasting and preventing a major seismological event)?

It’s the sort of thing that gives critics ammo when they want to discredit scientific findings like Michael Mann’s hockey stick graph in climatology.  And the great tragedy isn’t that we reach a stage where we no longer believe in the scientific method, but that we come to believe in any scientific method whatsoever.  Everyone can choose their own scientific facts to believe in, and the general opinion prevails that incompatible scientific positions need to be resolved not with experimentation but through politics.

Unconscious Thought Theory is now the object of similar reconsiderations.  A Malcolm Gladwell pet theory based on the experiments of Ap Dijksterhuis, Unconscious Thought Theory posits that we simply perform certain cognitive activities better when we are not actively cognizing.  As a software programmer, I am familiar with this phenomenon in terms of “sleep coding”.  If I am working all day on a difficult problem, I will sometimes dream about coding and wake up the next morning with a solution.  When I get back to work, it takes me only a few minutes to type into my IDE the routine I had spent a day or several days trying to crack.

I am a firm believer in this phenomenon and, as they say in late night infomercials, “it really works!”  I even build a certain amount of sleep coding into my programming estimates these days.  A project may take three days of conscious effort, one night of sleep, and then an additional five minutes to code up.  Sometimes the best thing to do when a problem seems insurmountable is simply to fire up the Internets, watch some cat videos and lolcatz the unconscious.
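
For the morbidly curious, here is roughly how that arithmetic shakes out, sketched in C#.  The numbers (and the hypothetical eight-hour workday) are mine and purely illustrative; this is a joke dressed up as an estimating model, not a real one.

    // A tongue-in-cheek sketch of the estimate described above; the figures are
    // illustrative only, not an endorsement of any estimating methodology.
    using System;

    class SleepCodingEstimate
    {
        static void Main()
        {
            double consciousEffortDays = 3.0;   // grinding away at the problem in the IDE
            double sleepNights = 1.0;           // the unconscious does the heavy lifting
            double typingItUpMinutes = 5.0;     // transcribing the answer the next morning

            // Treat the night of sleep as a free line item and convert the typing time
            // to workdays, assuming a hypothetical eight-hour workday.
            double totalDays = consciousEffortDays + typingItUpMinutes / (60 * 8);

            Console.WriteLine($"Conscious effort: {consciousEffortDays} days");
            Console.WriteLine($"Sleep coding:     {sleepNights} night (billed at $0)");
            Console.WriteLine($"Total estimate:   {totalDays:F2} working days");
        }
    }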

Imagine also how salvific the notion of a powerful unconscious is following the recent series of financial crises.  At the first level, the interpretation of the financial debacles blames excessive greed for our current problems (second Great Depression and all that jazz).  But that’s so 1980s Gordon Gekko.  A deeper interpretation holds that the problem comes down to falsely assuming that in economic matters we are rational actors – an observation that has given birth (or at least a second wind) to the field of behavioral economics.

I can haz Asimo

Lots of cool counter-intuitive papers and books about how remarkably irrational the consumer is have come out of this movement.  The coolest claim has got to be not only that we are much more irrational than we think, but that our irrational unconscious selves are much more capable than our conscious selves are.  It’s a bit like the end of Isaac Asimov’s I, Robot (spoilers ahead) where, after all the issues with the robots have been worked out, someone notices that things are going just a little too smoothly in the world and comes to the realization that humans are not smart enough to end wars and cure diseases like this.  After some investigation, the intrepid hero discovers that our benign computer systems have taken over the running of the world and haven’t told us because they don’t want to freak us out about it.  They want us to go on thinking that we are still in charge and to feel good about ourselves.  It’s a dis-dystopian ending of sorts.

As I mentioned, however, Unconscious Thought Theory is undergoing some discreditation.  One of the rules of the scientific method is that with experiments, they gots to be reproducible, and Dijksterhuis’s do not appear to be.  Multiple attempts have failed to replicate Dijksterhuis’s “priming effect” experiments, which used social priming techniques (for instance, having someone think about a professor or a football hooligan before an exam) and then checked whether the exam scores correlated with the type of priming.  There’s a related social priming experiment by someone else, also not reproducible, that seemed to show that exposing people to notions about aging and old people would make them walk more slowly.  The failure to replicate and verify the findings of Dijksterhuis’s social priming experiments leads one inevitably to conclude that his other experiments promoting Unconscious Thought Theory are likewise questionable.
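
To make the replication question concrete, here is a rough sketch in C# of the sort of comparison such an experiment boils down to.  The scores, group sizes and variable names below are invented for illustration; they are not Dijksterhuis’s data or his actual protocol.

    // A minimal sketch of the analysis behind a priming experiment: compare trivia
    // scores for a "professor-primed" group and a "hooligan-primed" group using a
    // Welch t statistic. All numbers below are made up for illustration.
    using System;
    using System.Linq;

    class PrimingSketch
    {
        static void Main()
        {
            double[] professorPrimed = { 12, 14, 13, 15, 11, 14, 13 };
            double[] hooliganPrimed  = { 11, 12, 13, 10, 12, 11, 13 };

            double meanP = professorPrimed.Average();
            double meanH = hooliganPrimed.Average();

            // Welch's t: the difference in means scaled by its estimated standard error.
            double t = (meanP - meanH) /
                       Math.Sqrt(Variance(professorPrimed) / professorPrimed.Length +
                                 Variance(hooliganPrimed) / hooliganPrimed.Length);

            Console.WriteLine($"Professor-primed mean: {meanP:F2}");
            Console.WriteLine($"Hooligan-primed mean:  {meanH:F2}");
            Console.WriteLine($"Welch t statistic:     {t:F2}");
            // A replication attempt repeats this with fresh participants; the effect
            // "fails to replicate" when the new t statistic keeps landing near zero.
        }

        // Sample variance with Bessel's correction (n - 1 in the denominator).
        static double Variance(double[] xs)
        {
            double mean = xs.Average();
            return xs.Sum(x => (x - mean) * (x - mean)) / (xs.Length - 1);
        }
    }

None of this settles whether the original effect was real, of course; it just shows how little statistical machinery the whole argument rests on.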

a big friggin' eye full of clouds

On the other hand, that’s exactly what a benevolent, intelligent, all-powerful, collective supra-unconscious would want us to think.  Consider that if Dijksterhuis is correct about the unconscious being, in many circumstances, basically smarter at complex thinking activities than our conscious minds are, then the last thing this unconscious would want is for us to suddenly start being conscious of it.  It works behind the scenes, after all.

When we find the world too difficult to understand, we are expected to give up, and miraculously, after a good night’s sleep, the unconscious provides us with solutions.  How many scientific eureka moments throughout history have come about this way?  How many of our greatest technological discoveries are driven by humanity’s collective unconscious working carefully and tirelessly behind the scenes while we sleep?  Who, after all, made all those cat videos to distract us from psychological experiments on the power of the unconscious while the busy work of running the world was being handled by others?  Who created YouTube to host all of those videos?  Who invented the Internet – and why?

My Dance Card is Full

dancecard

I’m overcommitted.  I recently changed jobs, moving from Magenic Technologies, a software consulting firm, to Sagepath, a creative agency.  I knew the new job would be great when they paid my way to MIX10 my second week at work.

On top of that, I have submitted talks to several regional conferences, and the voting to select speakers has begun.

For CodeStock I have submitted the following three presentations:

For the DEVLINK conference I also have three talks which are currently doing well in community voting.  This is a blind vote, though, so I won’t list the sessions I have submitted.

With Ambrose Little and Brady Bowman, I have started a reading group on Umberto Eco’s A Theory of Semiotics called ‘Semiotics and Technology’, hosted on Google Groups.  We have just finished the first chapter and are beginning our discussion of the Introduction.

In addition, I have been working with several members of the Atlanta developer community to organize ReMIX Atlanta.  The goal of ReMIX is to have a different kind of conference – one that caters to both the developer community and the design-UX community.  These are traditionally two communities that do not necessarily get along – and yet they must if we are ever to get beyond the current proliferation of unusable applications and non-functional websites.  To quote Immanuel Kant:

“Thoughts without content are empty, intuitions without concepts are blind.”

I also need to stain the deck and build a website for my wife – and in the time left over I plan to build the next killer application for the Windows Phone 7 Series app store.  This time next year I will be blogging to you from a beach in Tahiti while reclining on a chair made from my Windows Phone app riches.

In the meantime, however, my dance card is full.

Open Spaces and the Public Sphere

office

While watching C-SPAN’s coverage of the public Congressional debate over healthcare (very entertaining if not instructive), I found that a friend has been writing about Habermas’s concept of the ‘public sphere’: Slawkenbergius’s Tales.

To be more precise, he elucidates the mistranslation of ‘Öffentlichkeit’ in Habermas’s 1962 Habilitationsschrift, Strukturwandel der Öffentlichkeit, into English as ‘public sphere’.  The term, as used by Heidegger, was often translated into English as ‘publicity’ or ‘publicness’.

“The actually important text was Thomas Burger’s 1989 translation of the book, The Structural Transformation of the Public Sphere. (As my advisor pointed out to me, the highly misleading rendering of the abstract noun Öffentlichkeit as the slightly less abstract but more "spatialized" public sphere may have been the source of all the present trouble.) Because the translation arrived at a point in time when issues of space, popular culture, material culture, and print media were at the forefront of historiographical innovation, the spatialized rendering fit very nicely with just about everyone’s research project. That’s when the hegemony of the "public sphere" began. ”

The mention of ‘space’ in Slawkenbergius’s discourse (he’ll wince at that word – read his blog entry to find out why) reminded me of the origins of the Open Space movement.

For those unfamiliar with it, an open space is a way of loosely organizing meetings that is currently popular at software conferences and user groups.  The tenets of an open space follow, via Wikipedia:

  • Whoever comes is the right people [sic]: this alerts the participants that attendees of a session class as "right" simply because they care to attend
  • Whatever happens is the only thing that could have: this tells the attendees to pay attention to events of the moment, instead of worrying about what could possibly happen
  • Whenever it starts is the right time: clarifies the lack of any given schedule or structure and emphasizes creativity and innovation
  • When it’s over, it’s over: encourages the participants not to waste time, but to move on to something else when the fruitful discussion ends
  • The Law of Two Feet: If at any time during our time together you find yourself in any situation where you are neither learning nor contributing, use your two feet. Go to some other place where you may learn and contribute.

The open spaces I’ve encountered at software conferences have tended to be one man on a soapbox or several men staring at their navels.  But I digress.

    The notion of an Open Space was formulated by Harrison Owen in the early 1980’s (about the same time Habermas was achieving recognition in America) as a way to recreate the water-cooler conversation.  It is intended to be a space where people come without agendas simply to talk.  The goal of an open space, in its barest form, is to create an atmosphere where ‘talk’, of whatever sort, is generated.

    For me, this has an affinity with Jürgen Habermas’s notion of an ‘Ideal Speech Situation’, which is an idealized community where everyone comes together in a democratic manner and simply talks in order to come to agreement by consensus about the ‘Truth’ – with the postmodern correction that ‘Truth’ is not a metaphysical concept but merely this – a consensus.

    This should come with a warning, however, since in Heidegger’s use of Öffentlichkeit, public sphere | open space | publicness is not a good thing.  Publicness is characteristic of a false way of being that turns each of us (Dasein) into a sort of ‘they’ (Das Man) – the ‘they’ we talk about when we say “they say x” or “they are doing it this way this year.”  According to Heidegger – and take this with a grain of salt since he was a Nazi for a time, after all, and was himself rather untrustworthy – in Being and Time:

    The ‘they’ has its own ways in which to be …

    Thus the particular Dasein in its everydayness is disburdened by the ‘they’.  Not only that; by thus disburdening it of its Being, the ‘they’ accommodates Dasein … if Dasein has any tendency to take things easily and make them easy.  And because the ‘they’ constantly accommodates the particular Dasein by disburdening it of its Being, the ‘they’ retains and enhances its stubborn dominion.

    Everyone is the other, and no one is himself.

    sassy.net: Fall Fashions for .NET Programmers

    janerussell

    This fall programmers are going to be a little more sassy.  Whereas in the past, trendy branding has involved concepts such as paradigms, patterns and rails, principles such as object-oriented programming, data-driven programming, test-driven programming and model-driven architecture, or tags like web 2.0, web 3.0, e-, i-, xtreme and agile, the new fall line features “alternative” and the prefix of choice: alt-.  The point of this is that programmers who work with Microsoft technologies no longer have to do things the Microsoft way.  Instead, they can do things the “Alternative” way, rather than the “Mainstream” way.  In concrete terms, this seems to involve using a lot of open source frameworks like NHibernate that have been ported over from Java … but why quibble when we are on the cusp of a new age.

    Personally I think sassy.net is more descriptive, but the alt.net moniker has been cemented by the October 5th alt.net conference.  David Laribee is credited with coining the term earlier this year in this blog post, as well as explicating it in the following way:

    What does it mean to be ALT.NET? In short it signifies:

    1. You’re the type of developer who uses what works while keeping an eye out for a better way.
    2. You reach outside the mainstream to adopt the best of any community: Open Source, Agile, Java, Ruby, etc.
    3. You’re not content with the status quo. Things can always be better expressed, more elegant and simple, more mutable, higher quality, etc.
    4. You know tools are great, but they only take you so far. It’s the principles and knowledge that really matter. The best tools are those that embed the knowledge and encourage the principles (e.g. Resharper.)

    This is almost identical to my manifesto for sassy.net, except that I included a fifth item about carbon neutrality and a sixth one about loving puppies.  To Dave’s credit, his manifesto is a bit more succinct.

    There are several historical influences on this new fall line.  One is the suspicion that new Microsoft technologies have been driven by a desire to sell their programming frameworks rather than to create good tools.  An analogy can be drawn with the development of the QWERTY standard for the English-language keyboard.  Why are the keys laid out the way they are?  One likely possibility is that all the keys required to spell out “t-y-p-e-w-r-i-t-e-r” can be found on the top row, which is very convenient for your typical typewriter salesman.  Several of the RAD (Rapid Application Development — an older fall line that is treated with a level of contempt some people reserve for Capri pants) tools that have come out of Microsoft over the past few years have a similar quality.  They are good for sales presentations but are not particularly useful for real-world development.  Examples that come to mind are the call-back event model for .NET Remoting (the official Microsoft code samples didn’t actually work) and the MSDataSetGenerator, which is great for quickly building a data layer for an existing database but almost impossible to tweak or customize for even mildly complex business scenarios.
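
    If you want to check the typewriter-salesman story for yourself, a few lines of C# will do it.  This is a throwaway sketch of my own, not an argument about keyboard history; the QWERTY top row is simply hard-coded below.

        // A quick sanity check of the typewriter-salesman story: are all the letters
        // of "typewriter" really on the QWERTY top row?
        using System;
        using System.Linq;

        class QwertyCheck
        {
            static void Main()
            {
                const string topRow = "qwertyuiop";
                const string word = "typewriter";

                bool allOnTopRow = word.All(c => topRow.IndexOf(c) >= 0);
                Console.WriteLine($"All letters of '{word}' on the top row: {allOnTopRow}"); // True
            }
        }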

    A second influence is Java envy.  Whereas Java development tools have always emphasized complex architectures and a low-level knowledge of the language, Microsoft development tools have always emphasized fast results and abstracting away the low-level details of the language so the developer can get on with his job.  This has meant that while Java projects can take up to two years, after which you are lucky if you have a working code base, Microsoft-based projects are typically up and running in under six months.  You would think this would make the Microsoft solution the one people want to work with, but in fact, among developers, it has created Java envy.  The Java developers are doing a lot of denken work, making them a sort of aristocracy in the coding world, whereas the Microsoft programmers are more or less laborers for whom Microsoft has done much of the thinking.

    Within the Microsoft world itself, this class distinction has created a sort of mass migration from VB to C#; these are for the most part equivalent languages, yet VB still has the lingering scent of earth and toil about it.  There are in fact even developers who refuse to use C#, which they see as still a bit prole, and instead prefer to use managed C++.  Whatever, right?

    In 2005, this class distinction became codified with the coining of the term Mort, used by Java developers to describe Microsoft developers, by C# developers to describe VB developers, and by VB.NET developers to describe their atavistic VB6 cousins.  You can think of the Morts as Eloi, happily pumping out applications for their businesses, while the much more clever Morlocks plan out coding architectures and frameworks for the next hundred years.  The alt.net movement grows out of the Morlocks, rather than the Morts, and can in turn be subdivided into those who simply want to distinguish themselves from the mid-level developers and those who want to work on betterment projects, using coding standards and code reviews to bring the Morts up to their own level.  (To be fair, most of the alt.net crowd are of the latter variety, rather than the former.)  The alt.net movement sees following Microsoft standards as a sort of serfdom, and would prefer to come up with its own best practices, and in some cases tools, for building Microsoft-based software.

    The third influence on the formation of the alt.net movement is the trend in off-shoring software development.  Off-shoring is based on the philosophy that one piece of software development work is equivalent to another, and implicitly that for a given software requirement, one developer is equivalent to another, given that they know the same technology.  The only difference worth considering, then, is how much money one must spend in order to realize that software requirement.

    This has generated a certain amount of soul-searching among developers.  Previously, they had subscribed to the same philosophy, since their usefulness was based on the notion that a piece of software, and implicitly a developer, could do the same work as a roomful of filers (or any other white-collar employees), only more quickly, more efficiently and hence more cheaply.

    Off-shoring challenged this self-justification for software developers, and created in its place a new identity politics for developers.  A good developer, now, is not to be judged on what he knows at a given moment in time — that is, he should not be judged on his current productivity — but rather on his potential productivity: his ability to generate better architectures, more elegant solutions, and other improvements over the long run that cannot be easily measured.  In other words, third-world developers will always be Morts.  If you want high-end software, you need first-world solutions architects and senior developers.

    To solidify this distinction, however, it is necessary to have some sort of certifying mechanism that will clearly distinguish elite developers from mere Mort wannabes.  At this point, the distinction is only self-selecting, and depends on true alt.net developers being able to talk the talk (as well as determining what the talk is going to be).  Who knows, however, what the future may hold.

    Some mention should also be made concerning the new fall fashions.  Fifties skirts are back in, and the Grace Kelly look will be prevalent.  Whereas last year saw narrow bottom jeans displacing bell bottoms, for this fall anything goes.  This fall we can once again start mixing colors and patterns, rather than stick to a uniform color for an outfit.  This will make accessorizing much more interesting, though you may find yourself spending more time picking out clothes in the morning, since there are now so many more options.  Finally, V-necks are back.  Scoop-necks are out.

    In men’s fashion, making this the fifteenth year in a row, golf shirts and khakis are in.

    Methodology and Methodism


    This is simply a side by side pastiche of the history of Extreme Programming and the history of Methodism.  It is less a commentary or argument than simply an experiment to see if I can format this correctly in HTML.  The history of XP is drawn from Wikipedia.  The history of Methodism is drawn from John Wesley’s A Short History of Methodism.



    Software development in the 1990s was shaped by two major influences: internally, object-oriented programming replaced procedural programming as the programming paradigm favored by some in the industry; externally, the rise of the Internet and the dot-com boom emphasized speed-to-market and company-growth as competitive business factors. Rapidly-changing requirements demanded shorter product life-cycles, and were often incompatible with traditional methods of software development.


    The Chrysler Comprehensive Compensation project was started in order to determine the best way to use object technologies, using the payroll systems at Chrysler as the object of research, with Smalltalk as the language and GemStone as the persistence layer. They brought in Kent Beck, a prominent Smalltalk practitioner, to do performance tuning on the system, but his role expanded as he noted several issues they were having with their development process. He took this opportunity to propose and implement some changes in their practices based on his work with his frequent collaborator, Ward Cunningham.


    The first time I was asked to lead a team, I asked them to do a little bit of the things I thought were sensible, like testing and reviews. The second time there was a lot more on the line. I thought, “Damn the torpedoes, at least this will make a good article,” [and] asked the team to crank up all the knobs to 10 on the things I thought were essential and leave out everything else. —Kent Beck


    Beck invited Ron Jeffries to the project to help develop and refine these methods. Jeffries thereafter acted as a kind of coach to instill the practices as habits in the C3 team. Information about the principles and practices behind XP was disseminated to the wider world through discussions on the original Wiki, Cunningham’s WikiWikiWeb. Various contributors discussed and expanded upon the ideas, and some spin-off methodologies resulted (see agile software development). Also, XP concepts have been explained, for several years, using a hyper-text system map on the XP website at “www.extremeprogramming.org” circa 1999.


    Beck edited a series of books on XP, beginning with his own Extreme Programming Explained (1999, ISBN 0-201-61641-6), spreading his ideas to a much larger, yet very receptive, audience. Authors in the series went through various aspects attending XP and its practices, even a book critical of the practices. XP created quite a buzz in the late 1990s and early 2000s, seeing adoption in a number of environments radically different from its origins.


    Extreme Programming Explained describes Extreme Programming as being:



    • An attempt to reconcile humanity and productivity
    • A mechanism for social change
    • A path to improvement
    • A style of development
    • A software development discipline

    The advocates of XP argue that the only truly important product of the system development process is code (a concept to which they give a somewhat broader definition than might be given by others). Without code you have nothing.

    Coding can also help to communicate thoughts about programming problems. A programmer dealing with a complex programming problem and finding it hard to explain the solution to fellow programmers might code it and use the code to demonstrate what he or she means. Code, say the exponents of this position, is always clear and concise and cannot be interpreted in more than one way. Other programmers can give feedback on this code by also coding their thoughts.

     The high discipline required by the original practices often went by the wayside, causing certain practices to be deprecated or left undone on individual sites.


    Agile development practices have not stood still, and XP is still evolving, assimilating more lessons from experiences in the field. In the second edition of Extreme Programming Explained, Beck added more values and practices and differentiated between primary and corollary practices.


    In November, 1729, four young gentlemen of Oxford — Mr. John Wesley, Fellow of Lincoln College; Mr. Charles Wesley, Student of Christ Church; Mr. Morgan, Commoner of Christ Church; and Mr. Kirkham, of Merton College — began to spend some evenings in a week together, in reading, chiefly, the Greek Testament. The next year two or three of Mr. John Wesley’s pupils desired the liberty of meeting with them; and afterwards one of Mr. Charles Wesley’s pupils. It was in 1732, that Mr. Ingham, of Queen’s College, and Mr. Broughton, of Exeter, were added to their number. To these, in April, was joined Mr. Clayton, of Brazen-nose, with two or three of his pupils. About the same time Mr. James Hervey was permitted to meet with them; and in 1735, Mr. Whitefield.


    The exact regularity of their lives, as well as studies, occasioned a young gentleman of Christ Church to say, “Here is a new set of Methodists sprung up;” alluding to some ancient Physicians who were so called. The name was new and quaint; so it took immediately, and the Methodists were known all over the University.


    They were all zealous members of the Church of England; not only tenacious of all her doctrines, so far as they knew them, but of all her discipline, to the minutest circumstance. They were likewise zealous observers of all the University Statutes, and that for conscience’ sake. But they observed neither these nor anything else any further than they conceived it was bound upon them by their one book, the Bible; it being their one desire and design to be downright Bible-Christians; taking the Bible, as interpreted by the primitive Church and our own, for their whole and sole rule.


    The one charge then advanced against them was, that they were “righteous overmuch;” that they were abundantly too scrupulous, and too strict, carrying things to great extremes: In particular, that they laid too much stress upon the Rubrics and Canons of the Church; that they insisted too much on observing the Statutes of the University; and that they took the Scriptures in too strict and literal a sense; so that if they were right, few indeed would be saved.


    In October, 1735, Mr. John and Charles Wesley, and Mr. Ingham, left England, with a design to go and preach to the Indians in Georgia: But the rest of the gentlemen continued to meet, till one and another was ordained and left the University. By which means, in about two years’ time, scarce any of them were left.


    In February, 1738, Mr. Whitefield went over to Georgia with a design to assist Mr. John Wesley; but Mr. Wesley just then returned to England. Soon after he had a meeting with Messrs. Ingham, Stonehouse, Hall, Hutchings, Kinchin, and a few other Clergymen, who all appeared to be of one heart, as well as of one judgment, resolved to be Bible-Christians at all events; and, wherever they were, to preach with all their might plain, old, Bible Christianity.


    They were hitherto perfectly regular in all things, and zealously attached to the Church of England. Meantime, they began to be convinced, that “by grace we are saved through faith;” that justification by faith was the doctrine of the Church, as well as of the Bible. As soon as they believed, they spake; salvation by faith being now their standing topic. Indeed this implied three things: (1) That men are all, by nature, “dead in sin,” and, consequently, “children of wrath.” (2) That they are “justified by faith alone.” (3) That faith produces inward and outward holiness: And these points they insisted on day and night. In a short time they became popular Preachers. The congregations were large wherever they preached. The former name was then revived; and all these gentlemen, with their followers, were entitled Methodists.


    In March, 1741, Mr. Whitefield, being returned to England, entirely separated from Mr. Wesley and his friends, because he did not hold the decrees. Here was the first breach, which warm men persuaded Mr. Whitefield to make merely for a difference of opinion. Those, indeed, who believed universal redemption had no desire at all to separate; but those who held particular redemption would not hear of any accommodation, being determined to have no fellowship with men that “were in so dangerous errors.” So there were now two sorts of Methodists, so called; those for particular, and those for general, redemption.

    Methodology and its Discontents


    A difficult aspect of software programming is that it is always changing.  This is no doubt true in other fields, also, but there is no tenure system in software development.  Whereas in academe, it is quite plausible to reach a certain point in your discipline and then kick the ladder out behind you, as Richard Rorty was at one point accused of doing, this isn’t so in technology.  In technology, it is the young who set the pace, and the old who must keep up.


    The rapid changes in technology, with which the weary developer must constantly attempt to stay current, can be broken down into two types: advances in tools and advances in methodologies.  While there is always a certain amount of fashion determining which tools get used (for instance, which is the better language, VB.NET or C#?), for the most part development tools are judged on their effectiveness.  This is not necessarily true of the methodologies of software development, which is odd, since one would suppose that the whole point of a method is that it guarantees certain results: use method A, and B (fame, fortune, happiness) will follow.


    There is even something of a backlash against methodism (or, more specifically, certain kinds of methodism) being led, interestingly enough, by tool builders.  For instance, Rocky Lhotka, the creator of the CSLA framework, says:



    I am a strong believer in responsibility-driven, behavioral, CRC OO design – and that is very compatible with the concepts of Agile. So how can I believe so firmly in organic OO design, and yet find Agile/XP/SCRUM to be so…wrong…?


    I think it is because the idea of taking a set of practices developed by very high-functioning people, and cramming them into a Methodology (notice the capital M!), virtually guarantees mediocrity. That, and some of the Agile practices really are just plain silly in many contexts.


    Joel Spolsky, a former program manager on Microsoft Excel and a software entrepreneur, has also entered the fray a few times.  Most recently, a blog post by Steve Yegge (a developer at Google) set the technology community on fire:



    Up until maybe a year ago, I had a pretty one-dimensional view of so-called “Agile” programming, namely that it’s an idiotic fad-diet of a marketing scam making the rounds as yet another technological virus implanting itself in naive programmers who’ve never read “No Silver Bullet”, the kinds of programmers who buy extended warranties and self-help books and believe their bosses genuinely care about them as people, the kinds of programmers who attend conferences to make friends and who don’t know how to avoid eye contact with leaflet-waving fanatics in airports and who believe writing shit on index cards will suddenly make software development easier.
    You know. Chumps.


    The general tenor of these skeptical treatments is that Extreme Programming (and its umbrella methodology, Agile) is at best a put-on, and at worst a bit of a cult.  What is lacking in these analyses, however, is an explication of why programming methodologies like Extreme Programming are so appealing.  Software development is a complex endeavor, and its practitioners are always left with a sense that they are being left behind.  Each tool that comes along in order to make development easier always ends up making it actually more difficult and error-prone (though also more powerful).  The general trend of software development has consistently been not less complexity, but greater complexity.  This leads to an endemic sense of insecurity among developers, as well as a sense that standing still is tantamount to falling behind.  Developers are, of course, generally well compensated for this burden (unlike, for instance, untenured academics who suffer from the same constraints), so there is no sense in feeling sorry for them, but it should be clear, given this, why there is so much appeal in the notion that there is a secret method which, if they follow it correctly, will grant them admission into the kingdom and relieve them of their anxieties and doubts.


    One of the strangest aspects of these methodologies is the notion that methods need evangelists, yet this is what proponents of these concepts self-consciously call themselves.  There are Extreme Programming evangelists, SCRUM evangelists, Rational evangelists, and so on who write books and tour the country giving multimedia presentations, for a price, about why you should be using their methodology and how it will transform you and your software projects. 


    So are software methodologies the secular equivalent of evangelical revival meetings?  It would depend, I suppose, on whether the purpose of these methodologies is to make money or to save souls.  Let us hope it is the former, and that lots of people are simply making lots of money by advocating the use of these methodologies.