Post hoc, ergo melius hoc


I’m just doing my bit to propagate this viral Latin phrase.  Roughly translated, it means "Newer, therefore better."  It doesn’t appear to have ever been spoken by a citizen of the empire, but rather by Latin scholars in other periods.  There is an obvious and intentional irony in this, since those using this phrase are appealing to the authority of the archaic, the notion that what is older and more obscure is inherently wiser, as they make the claim.  Ergo, it would seem, the phrase is always used ironically.  (An interesting blog-spanning discussion of this phrase can be found here, here, and here.)  Personally, I like it even better in French: Après cela, donc meilleur que cela, due to a predisposition to believe that everything is always better in French, for instance Poe and Bukowski — not that my French is any better than my Latin, which is undoubtedly why I cleave to this peculiar prejudice.

In technology, however, this motto should perhaps be taken at face value.  The beta release of a product is always better than the alpha, the RTM is better than the beta, and the first service pack is generally the first stable release of the product.  Unlike in previous eras, our concept of technology has the notion of progress built into it.  This makes everyone in technology a bit of a trend follower, trying to keep on top of the newest technologies and trying to anticipate what will succeed (Entity Framework) and what will not (LINQ to SQL).  Once we begin speaking of trends in technology, however, we naturally undermine the notion of progress a bit, and instead are led back to Descartes’ observation about fashion:

…[J]usques aux modes de nos habits, la même chose qui nous a plu il y a dix ans, et qui nous plaira peut-être encore avant dix ans, nous semble maintenant extravagante et ridicule.  [Roughly: "…even to the fashions of our dress, the very thing that pleased us ten years ago, and that will perhaps please us again within ten years, now strikes us as extravagant and ridiculous."]

To some extent this is a valid point.  Isn’t SOA simply a return to the type of functional programming that used to be done for mainframes and dumb terminals?  Just because Silverlight is the hottest new thing in Microsoft development, is this a reason for everyone to jump onto the Silverlight bandwagon?  Must we always chase after the shiniest piece of tinsel?

Of course we must.  Those who have been in this profession much longer than I have, who have made the leap from mainframe programming to object-oriented programming to service-oriented programming, who have gone from client-server to n-tier to distributed programming with WCF, have learned that it is better to take a descriptivist view on the phenomenology of progress rather than a prescriptivist view.  Isn’t this the secret to understanding Darwinian evolution — that it is based on a tautology?  Survival of the fittest determines what exists and what does not; existence, in turn, determines what is fit.  Post hoc, ergo est (et non est hoc).

Perhaps this is the most unfortunate aspect of technological progress.  It robs us of our sense of irony.

Speaking of Legitimation Crises …


In a literary blog I like to follow called The Valve, a recent post asks why the world of public intellectuals is now dominated by scientists like Richard Dawkins and Steven Pinker rather than by literary critics.

"The culture wars so damaged literature as a source of cultural authority that literary intellectuals lost the public stage. They were replaced by scientific popularizers such as Stephen Jay Gould, Richard Dawkins, and Steven Pinker – cf. literary agent John Brockman on the third culture. In this climate of opinion, it is not enough to return to evaluative criticism.

"My own dog in this fight is a general academic rehabilitation of normativity (so-called), and not just in literature, as well as a return to generalism, by which I mean “writing for a well-informed non-specialist audience” (and by which I do not mean “writing for stupid, uneducated people who will never really understand the sophisticated stuff we do.”)

"This would involve the renunciation of the positivist dream of grounding everything on Science and Truth. It would be a less resolvable, more plural discourse."

Perhaps it is appropriate that the question of why we don’t pay more attention to literary theorists would only occur to other literary theorists.  At the same time, it raises the question of why other professionals don’t attempt to grab for this particular ring of public legitimization.  Pundits on TV, not surprisingly, are pulled from the pool of people who decide early in their careers that rather than actually making policy, they want to talk about it.  Moreover, they have decided that rather than taking the somewhat more "legitimate" tack of going into print journalism, they want to do it in the most mediocre medium available — television.  It actually pays off, since in this case, to paraphrase Marshall McLuhan, the medium is the messenger.

But my purpose here is not to shoot the messenger.  It is rather to wonder why other professionals don’t feel this entitlement to speak for others over matters concerning which they have no expertise.  Tech people certainly feel they have more insight into policy and long-term planning given their unique vantage point upon the ways technology transforms the workplace as well as our very sense of time.  Why don’t they chomp at the bit and demand that people pay more attention to them?  Doctors, more than any other profession, take for granted their God-like role in determining who lives and who dies based on their insurance coverage.  Should they not be afforded the opportunity to make oracular pronouncements about the health of the nation?  Lawyers recognize that the only truth is the truth they are able to argue before an appropriate audience.  Shall they be given the chance to argue before the citizenry?

Yet it is only the lit crit folk — those peculiar scholars who work in the butt cracks of philosophy — who feel an entitlement about making public declamations.  Moreover, they are in the unusual position of feeling that somehow this entitlement has been taken away from them.  How did this ever happen?

Impedance Mismatch


Impedance mismatch is a concept from electronics that is gaining some mindshare as an IT metaphor.  It occupies the same social space that cognitive dissonance once did, and works in pretty much the same way to describe any sort of discontinuity.  It is currently being used in IT to describe the difficulty inherent in mapping relational structures, such as relational databases, to object structures common in OOP.  It is shorthand for a circumstance in which two things don’t fit.
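
To make the IT usage concrete, here is a minimal C# sketch of the two shapes that refuse to fit.  The Customer/Order names are purely hypothetical; the mapping layer (NHibernate, LINQ to SQL, the Entity Framework, and so on) exists precisely to translate between shapes like these.

    // A hypothetical illustration of the object/relational impedance mismatch.
    // The database thinks in flat rows and foreign keys:
    //   CUSTOMERS(CustomerId, Name)
    //   ORDERS(OrderId, CustomerId, Total)
    // The object model thinks in references and collections:
    using System.Collections.Generic;

    public class Customer
    {
        public int CustomerId { get; set; }
        public string Name { get; set; }
        public List<Order> Orders = new List<Order>();   // a collection, not a join
    }

    public class Order
    {
        public int OrderId { get; set; }
        public decimal Total { get; set; }
        public Customer Customer { get; set; }           // a reference, not a foreign key
    }

In code it is natural to walk from a Customer to its Orders and back; the database has to be coaxed into the same traversal with joins, and that translation is where the mismatch lives.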

Broadening the metaphor a bit, here is my impedance mismatch.  I like reading philosophy.  Unfortunately, I also like reading comic books.  I’m not a full-blown collector or anything.  I pick up comics from the library, and occasionally just sit in the bookstore and catch up on certain franchises I like.  I guess that in the comic book world, I’m the equivalent of someone who only drinks after sun-down or only smokes when someone hands him a cigarette, but never actually buys a pack himself.  A parasite, yes, but not an addict.

The impedance mismatch comes from the sense that I shouldn’t waste time reading comics.  They do not inhabit the same mental world that the other things I like to read do.  I often sit thinking that I ought to be reading Schopenhauer, with whom I am remarkably unfamiliar for a thirty-something, or at least reading through Justin Smith’s new book on WCF Programming, but instead find myself reading an Astro City graphic novel because Rocky Lhotka recommended it to me.  The problem is not that I feel any sort of bad faith about reading comic books when I ought to be reading something more mature.  Rather, I fear that I am actually being true to myself.

A passage from an article by Jonathan Rosen in the most recent New Yorker nicely illustrates this sort of impedance mismatch:

Sometime in 1638, John Milton visited Galileo Galilei in Florence. The great astronomer was old and blind and under house arrest, confined by order of the Inquisition, which had forced him to recant his belief that the earth revolves around the sun, as formulated in his “Dialogue Concerning the Two Chief World Systems.” Milton was thirty years old—his own blindness, his own arrest, and his own cosmological epic, “Paradise Lost,” all lay before him. But the encounter left a deep imprint on him. It crept into “Paradise Lost,” where Satan’s shield looks like the moon seen through Galileo’s telescope, and in Milton’s great defense of free speech, “Areopagitica,” Milton recalls his visit to Galileo and warns that England will buckle under inquisitorial forces if it bows to censorship, “an undeserved thraldom upon learning.”

Beyond the sheer pleasure of picturing the encounter—it’s like those comic-book specials in which Superman meets Batman—there’s something strange about imagining these two figures inhabiting the same age.

Authority as Anti-Pattern


There has been a recent spate of posts about authority in the world of software development, with some prominent software bloggers denying that they are authorities.  They prefer to be thought of as intense amateurs.

I worked backwards to this problematic of authority starting with Jesse Liberty.  Liberty writes reference books on C# and ASP.NET, so he must be an authority, right?  And if he’s not an authority, why should I read his books?  This led to  Scott Hanselman, to Alastair Rankine and finally to Jeff Atwood at CodingHorror.com.

The story, so far, goes like this.  Alastair Rankine posts that Jeff Atwood has jumped the shark on his blog by setting himself up as some sort of authority.  Atwood denies that he is any sort of authority, and tries to cling to his amateur status like a Soviet-era Olympic pole vaulter.  Scott Hanselman chimes in to insist that he is also merely an amateur, and Jesse Liberty (who is currently repackaging himself from C# guru to Silverlight guru) does an h/t to Hanselman’s post.  Hanselman also channels Martin Fowler, saying that he is sure Fowler would also claim amateur status.

Why all this suspicion of authority?

The plot thickens: when Rankine accuses him of acting like an authority, Atwood’s apologia is that indeed he is merely "acting".

"It troubles me greatly to hear that people see me as an expert or an authority…

"I suppose it’s also an issue of personal style. To me, writing without a strong voice, writing filled with second guessing and disclaimers, is tedious and difficult to slog through. I go out of my way to write in a strong voice because it’s more effective. But whenever I post in a strong voice, it is also an implied invitation to a discussion, a discussion where I often change my opinion and invariably learn a great deal about the topic at hand. I believe in the principle of strong opinions, weakly held…"

To sum up, Atwood isn’t a real authority, but he plays one on the Internet.

Here’s the flip side to all of this.  Liberty, Hanselman, Atwood, Fowler, et al. have made great contributions to software programming.  They write good stuff, not only in the sense of being entertaining, but also in the sense that they shape the software development "community" and how software developers — from architects down to lowly code monkeys — think about coding and think about the correct way to code.  In any other profession, this is the very definition of "authority".

In literary theory, this is known as authorial angst.  It occurs when an author doesn’t believe in his own project.  He does what he can, and throws it out to the world.  If his work achieves success, he is glad for it, but takes it as a chance windfall, rather than any sort of validation of his own talents.  Ultimately, success is a bit perplexing, since there are so many better authors who never achieved success in their own times, like Celine or Melville.

One of my favorite examples of this occurs early in Jean-Francois Lyotard’s The Postmodern Condition in which he writes that he knows the book will be very successful, if only because of the title and his reputation, but …  The most famous declaration of authorial angst is found in Mark Twain’s notice inserted into The Adventures of Huckleberry Finn:

"Persons attempting to find a motive in this narrative will be prosecuted; persons attempting to find a moral in it will be banished; persons attempting to find a plot in it will be shot."

In Jeff Atwood’s case, the authority angst seems to take the following form: Jeff may talk like an authority, and you may take him for an authority, but he does not consider himself one.  If treating him like an authority helps you, then that’s all well and good.  And if it raises money for him, then that’s all well and good, too.  But don’t use his perceived authority as a way to impugn his character or to discredit him.  He never claimed to be one.  Other people are doing that.

[The French existentialists are responsible for translating Heidegger’s term angst as ennui, by the way, which has a rather different connotation (N is for Neville who died of ennui).  In a French translation class I took in college, we were obliged to try to translate ennui, which I did rather imprecisely as "boredom".  A fellow student translated it as "angst", for which the seminar tutor accused her of tossing the task of translation over the Maginot line.  We finally determined that the term is untranslatable.  Good times.]

The problem these authorities have with authority may be due to the fact that authority is a role.  In Alasdair MacIntyre’s After Virtue, a powerful critique of what he considers to be the predominant ethical philosophy of modern times, Emotivism, MacIntyre argues that the main characteristics (in Shaftesbury’s sense) of modernity are the Aesthete, the Manager and the Therapist.  The aesthete replaces morals as an end with a love of patterns as an end.  The manager eschews morals for competence.  The therapist overcomes morals by validating our choices, whatever they may be.  These characters are made possible by the notion of expertise, which MacIntyre claims is a relatively modern invention.

"Private corporations similarly justify their activities by referring to their possession of similar resources of competence.  Expertise becomes a commodity for which rival state agencies and rival private corporations compete.  Civil servants and managers alike justify themselves and their claims to authority, power and money by invoking their own competence as scientific managers of social change.  Thus there emerges an ideology which finds its classical form of expression in a pre-existing sociological theory, Weber’s theory of bureaucracy."

To become an authority, one must begin behaving like an authority.  Some tech authors such as Jeffrey Richter and Juval Löwy actually do this very well.  But sacrifices have to be made in order to be an authority, and it may be that this is what the anti-authoritarians of the tech world are rebelling against.  When one becomes an authority, one must begin to behave differently.  One is expected to have a certain realm of competence, and when one acts authoritatively, one imparts this sense of confidence to others: to developers, as well as the managers who must oversee developers and justify their activities to upper management.

Upper management is already always a bit suspicious of the software craft.  They tolerate certain behaviors in their IT staff based on the assumption that they can get things done, and every time a software project fails, they justifiably feel like they are being hoodwinked.  How would they feel about this trust relationship if they found out that one of the figures their developers are holding up as an authority figure is writing this:

"None of us (in software) really knows what we’re doing. Buildings have been built for thousands of years and software has been an art/science for um, significantly less (yes, math has been around longer, but you know.) We just know what’s worked for us in the past."

This resistance to putting on the role of authority is understandable.  Once one puts on the hoary robes required of an authority figure, one can no longer be oneself anymore, or at least not the self one was before.  Patrick O’Brian describes this emotion perfectly as he has Jack Aubrey take command of his first ship in Master and Commander.

"As he rowed back to the shore, pulled by his own boat’s crew in white duck and straw hats with Sophie embroidered on the ribbon, a solemn midshipman silent beside him in the sternsheets, he realized the nature of this feeling.  He was no longer one of ‘us’: he was ‘they’.  Indeed, he was the immediately-present incarnation of ‘them’.  In his tour of the brig he had been surrounded with deference — a respect different in kind from that accorded to a lieutenant, different in kind from that accorded to a fellow human being: it had surrounded him like a glass bell, quite shutting him off from the ship’s company; and on his leaving the Sophie had let out a quiet sigh of relief, the sigh he knew so well: ‘Jehovah is no longer with us.’

"It is the price that has to be paid,’ he reflected."

It is the price to be paid not only in the Royal Navy during the age of wood and canvas, but also in established modern professions such as architecture and medicine.  All doctors wince at recalling the first time they were called "doctor" while they interned.  They do not feel they have the right to wear the title, much less be consulted over a patient’s welfare.  They feel intensely that this is a bit of a sham, and the feeling never completely leaves them.  Throughout their careers, they are asked to make judgments that affect the health, and often even the lives, of their patients — all the time knowing that theirs is a human profession, and that mistakes get made.  Every doctor bears the burden of eventually killing a patient due to a bad diagnosis or a bad prescription or simply through lack of judgment.  Yet bear it they must, because gaining the confidence of the patient is also essential to the patient’s welfare, and the world would likely be a sorrier place if people didn’t trust doctors.

So here’s one possible analysis: the authorities of the software engineering profession need to man up and simply be authorities.  Of course there is bad faith involved in doing so.  Of course there will be criticism that they are frauds.  Of course they will be obliged to give up some of the ways they relate to fellow developers once they do so.  This is true in every profession.  At the same time every profession needs its authorities.  Authority holds a profession together, and it is what distinguishes a profession from mere labor.  The gravitational center of any profession is the notion that there are ways things are done, and there are people who know what those ways are.  Without this perception, any profession will fall apart, and we will indeed be merely playaz taking advantage of middle management and making promises we cannot fulfill.  Expertise, ironically, explains and justifies our failures, because we are able to interpret failure as a lack of this expertise.  We then drive ourselves to be better.  Without the perception that there are authorities out there, muddling and mediocrity become the norm, and we begin to believe that not only can we not do better, but we aren’t even expected to.

This is a traditionalist analysis.  I have another possibility, however, which can only be confirmed through the passage of time.  Perhaps the anti-authoritarian impulse of these crypto-authorities is a revolutionary legacy of the soixante-huitards.  From Guy Sorman’s essay about May ’68, whose fortieth anniversary passed unnoticed:

"What did it mean to be 20 in May ’68? First and foremost, it meant rejecting all forms of authority—teachers, parents, bosses, those who governed, the older generation. Apart from a few personal targets—General Charles de Gaulle and the pope—we directed our recriminations against the abstract principle of authority and those who legitimized it. Political parties, the state (personified by the grandfatherly figure of de Gaulle), the army, the unions, the church, the university: all were put in the dock."

Just because things have been done one way in the past doesn’t mean this is the only way.  Just because authority and professionalism are intertwined in every other profession, and perhaps can no longer be unraveled at this point, doesn’t mean we can’t try to do things differently in a young profession like software engineering.  Is it possible to build a profession around a sense of community, rather than the restraint of authority?

I once read a book of anecdotes about the 60’s, one of which recounts a dispute between two groups of people in the inner city.  The argument is about to come to blows when someone suggests calling the police.  This sobers everyone up, and with cries of "No pigs, no pigs" the disputants resolve their differences amicably.  The spirit that inspired this scene, this spirit of authority as anti-pattern, is no longer so ubiquitous, and one cannot really imagine civil disputes being resolved in such a way anymore.  Still, the notion of a community without authority figures is a seductive one, and it may even be doable within a well-educated community such as the web-based world of software developers.  Perhaps it is worth trying.  The only thing that concerns me is how we are to maintain the confidence of management as we run our social experiment.

What is Service Oriented Architecture?


What is SOA?  It is currently the hottest thing going on in corporate technology, and promises to simultaneously integrate disparate applications on multiple platforms as well as provide code reuse to all of those platforms.  According to Juval Löwy, it is the culmination of a 20-year project to enable true component-based design — in other words, the fulfillment of COM, rather than merely its replacement.  Others see it as a threat to object oriented programming. According to yet others, it is simply the wave of the future.  Rocky Lhotka recently remarked at a users-group meeting that it reminds him of mainframe programming.  In Windows Communication Foundation Unleashed, the authors write somewhat uncharitably:

"Thomas Erl, for instance, published two vast books ostensibly on the subject, but never managed to provide a noncircuitous definition of the approach in either of them."

This diversity of opinion, I believe, gives me an opening to offer my own definition of SOA.  SOA is, put simply, the triumph of the Facade pattern.

In the 90’s, Erich Gamma, Ralph Johnson, John Vlissides and Richard Helm popularized the notion of the 23 fundamental design patterns of object oriented programming.  I’ve often wondered why they came up with 23 patterns.  Some, such as the Flyweight pattern, are simply never used.  At the same time, one of the most popular patterns, MVP, doesn’t even make the canonical list.  How did they come up with 23?

Here’s an article on the significance of the number 23 which may or may not shed light on the Gang of Four’s motivation.  In Peter Greenaway’s A Zed and Two Noughts, the characters become obsessed with the number 23, and claim that there are 23 letters in the Greek alphabet and that Vermeer created 23 paintings (both false, by the way).  Perhaps the Gang of Four are Discordians — Discordians are fascinated by what they call the 23 Enigma.

In any case, they came up with 23 canonical (or "fundamental" or "classic") design patterns, and in the past decade, knowing these patterns has become the unofficial dividing line between the common run of code monkeys (I use the term affectionately) and so-called "true" developers — the initiation rite that turns boy programmers into men.  Anyone in development who wants to be anybody makes the attempt to learn them, but for whatever reason, the 23 patterns resist the attempt — sometimes because it is difficult to see how you would ever actually use them.  It helps, however, to remember that the StringBuilder type in C# is based on the Builder pattern, and that the Clone method on most types implements the Prototype pattern.  Delegates are built around the Observer pattern and collections are built around the Iterator pattern — but since these are both basically part of the C# language, among others, you don’t really need to learn them anymore.  In my opinion, the most useful patterns are the Template Method and the Factory Method.  The Singleton pattern, on the other hand, starts off seeming like a useful pattern but turns out not to be — a bit like a bad joke one eventually tires of.  It is, however, easy to remember, if somewhat tricky to implement.
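
For what it’s worth, here is a minimal sketch of that last point (illustrative only, not canonical GoF code): the obvious Singleton has a race condition under multiple threads, and the usual C# idiom sidesteps it by leaning on the runtime’s guarantee that static initializers run exactly once.

    // The naive Singleton: easy to remember, but not thread-safe.
    public sealed class NaiveSingleton
    {
        private static NaiveSingleton _instance;
        private NaiveSingleton() { }

        public static NaiveSingleton Instance
        {
            get
            {
                if (_instance == null)                // two threads can both see null here...
                    _instance = new NaiveSingleton(); // ...and each builds its own "singleton"
                return _instance;
            }
        }
    }

    // The usual C# fix: let static initialization do the synchronization for you.
    public sealed class Singleton
    {
        private static readonly Singleton _instance = new Singleton();
        private Singleton() { }

        public static Singleton Instance
        {
            get { return _instance; }
        }
    }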

The one pattern no one ever fails to remember is the Facade pattern.  It doesn’t do anything clever with abstract base classes or interfaces.  It doesn’t have tricky implementation details.  It simply takes the principle of encapsulation and goes crazy with it. Whatever complicated code you have, you place it behind a wall of code, called the Facade, which provides methods to manipulate your "real" code.  It’s the sort of pattern which, like Monsieur Jourdain, once you find out about it you realize you’ve been doing it all your life.  The simplicity and ubiquity of the Facade makes it an unattractive pattern — it takes no programming acumen to learn it; it requires great effort to avoid it. It is the dumbest of the 23 canonical design patterns.
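
A minimal sketch of that "wall of code" (the order-processing names here are hypothetical, not drawn from any real system):

    // The messy subsystems stay behind the wall...
    public class InventoryService { public void Reserve(string sku) { /* ... */ } }
    public class PaymentGateway  { public void Charge(decimal amount) { /* ... */ } }
    public class ShippingService { public void Schedule(string sku) { /* ... */ } }

    // ...and callers only ever see this.
    public class OrderFacade
    {
        private readonly InventoryService _inventory = new InventoryService();
        private readonly PaymentGateway _payments = new PaymentGateway();
        private readonly ShippingService _shipping = new ShippingService();

        // One coarse-grained method hides the whole choreography.
        public void PlaceOrder(string sku, decimal amount)
        {
            _inventory.Reserve(sku);
            _payments.Charge(amount);
            _shipping.Schedule(sku);
        }
    }

Nothing clever is going on: a handful of coarse-grained methods standing in front of whatever actually does the work.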

And Service Oriented Architecture is all built around it.  In some sense, SOA marks the democratization of architecture.  There are still tricks to planning a good SOA, and securing it may require some sophistication — but with SOA, anyone can be an architect.  Well … anyone who can build a Facade.

Agile Methodology and Promiscuity

[photo: Jacques Lacan]

The company I am currently consulting with uses Scrum, a kind of Agile methodology.  I like it.  Its main features are index cards taped to a wall and quick "sprints", or development cycles.  Scrum’s most peculiar feature is the notion of a "Scrum Master", which makes me feel dirty whenever I think of it.  It’s so much a part of the methodology, however, that you can even become certified as a "Scrum Master", and people will put it on their business cards.  Besides Scrum, other Agile methodologies include Extreme Programming (XP) and the Rational Unified Process (RUP) which is actually more of a marketing campaign than an actual methodology — but of course you should never ever say that to a RUP practitioner.

The main thing that seems to unify these Agile methodologies is the fact that they are not Waterfall.  And because Waterfall is notoriously unsuccessful, except when it is successful, Agile projects are generally considered to be successful, except when they aren’t.  And when they aren’t, there are generally two explanations that can be given for the lack of success.  First, the flavor of Agile being practiced wasn’t practiced correctly.  Second, the agile methodology was followed too slavishly, when at the heart of agile is the notion that it must be adapted to the particular qualities of a particular project.

In a recent morning stand up (yet another Scrum feature) the question was raised about whether we were following Scrum properly, since it appeared to some that we were introducing XP elements into our project management.  Even before I had a chance to think about it, I found myself appealing to the second explanation of Agile and arguing that it was a danger to apply Scrum slavishly.  Instead, we needed to mix and match to find the right methodology for us.

A sense of shame washed over me even as I said it, as if I were committing some fundamental category mistake.  However, my remarks were accepted as sensible and we moved on.

For days afterward, I obsessed about the cause of my sense of shame.  I finally worked it up into a fairly thorough theory.  I decided that it was rooted in my undergraduate education and the study of Descartes, who claimed that just as a city designed by one man is eminently more rational than one built through aggregation over ages, so the following of a single method, whether right or wrong, will lead to more valid results than philosophizing willy-nilly ever will.  I also thought of how Kant always filled me with a sense of contentment, whereas Hegel, who famously said against Kant that whenever we attempt to draw lines we always find ourselves crossing over them, always left me feeling uneasy and disoriented.  Along with this was the inappropriate (philosophically speaking) recollection that Kant died a virgin, whereas Hegel’s personal life was marked by drunkenness and carousing.  Finally I thought of Nietzsche, whom Habermas characterized as one of the "dark" philosophers for, among other things, insisting that one set of values was as good as another and, even worse, arguing in The Genealogy of Morals that what we consider to be noble in ourselves is in fact base, and what we consider moral weakness is in fact spiritual strength — a transvaluation of all values.  Nietzsche not only crossed the lines, but so thoroughly blurred them that we are still trying to recover them after almost a century and a half.

But lines are important to software developers — we who obsess about interfaces and abhor namespace collisions the way Aristotle claimed nature abhors a vacuum — as if there were nothing worse than the same word meaning two different things.  We are also obsessed with avoiding duplication of code — as if the only thing worse than the same word meaning two different things is the same thing being represented by two different words.  What a reactionary, prescriptivist, neurotic bunch we all are.

This seemed to explain it for me.  I’ve been trained to revere the definition, and to form fine demarcations in my mind.  What could be more horrible, then, than to casually introduce the notion that not only can one methodology be exchanged for another, but that they can be mixed and matched as one sees fit.  Like wearing a brown belt with black shoes, this fundamentally goes against everything I’ve been taught to believe not only about software, but also about the world.  If we allow this one thing, it’s a slippery slope to Armageddon and the complete dissolution of civil society.

Then I recalled Slavoj Zizek’s introduction to one of his books about Jacques Lacan (pictured above), and a slightly different sense of discomfort overcame me.  I quote it in part:

I have always found extremely repulsive the common practice of sharing the main dishes in a Chinese restaurant.  So when, recently, I gave expression to this repulsion and insisted on finishing my plate alone, I became the victim of an ironic "wild psychoanalysis" on the part of my table neighbor: is not this repulsion of mine, this resistance to sharing a meal, a symbolic form of the fear of sharing a partner, i.e., of sexual promiscuity?  The first answer that came to my mind, of course, was a variation on de Quincey’s caution against the "art of murder" — the true horror is not sexual promiscuity but sharing a Chinese dish: "How many people have entered the way of perdition with some innocent gangbang, which at the time was of no great importance to them, and ended by sharing the main dishes in a Chinese restaurant!"

The Aesthetics and Kinaesthetics of Drumming


Kant’s Critique of Judgment, also known as the Third Critique since it follows the first on Reason and the second on Morals, is a masterpiece in the philosophy of aesthetics.  With careful reasoning, Kant examines the experience of aesthetic wonder, The Sublime, and attempts to relate it to the careful delineations he has made in his previous works between the phenomenal and noumenal realms.  He appears to allow in the Third Critique what he denies us in the First: a way to go beyond mere experience in order to perceive a purpose in the world.  Along the way, he passes judgment on things like beauty and genius that left an indelible mark on the Romanticism of the 19th century.

Taste, like the power of judgment in general, consists in disciplining (or training) genius.  It severely clips its wings, and makes it civilized, or polished; but at the same time it gives it guidance as to how far and over what it may spread while still remaining purposive.  It introduces clarity and order into a wealth of thought, and hence makes the ideas durable, fit for approval that is both lasting and universal, and hence fit for being followed by others…

Kant goes on to say that where taste and genius conflict, a sacrifice needs to be made on the side of genius.

In his First Critique, Kant discusses the "scandal of philosophy" — that after thousands of years philosophers still cannot prove what every simple person knows — that the external world is real.  There are other scandals, too, of course.  There are many questions which, after thousands of years, philosophers continue to argue over and, ergo, for which they have no definitive answers.  There are also the small scandals which give an aspiring philosophy student pause, and make him wonder if the philosophizing discipline isn’t a fraud and a sham after all, such as Martin Heidegger’s Nazi affiliation.  Here the question isn’t why he didn’t realize what every simple German should have known, since even the simple Germans were quite taken up with the movement.  What leaves a bad taste, however, is the sense that a great philosopher should have known better.

A minor scandal concerns Immanuel Kant’s infamous lack of taste.  When it came to music, he seems to have had a particular fondness for martial music, das heißt, marching bands with lots of drumming and brass.  He discouraged his students from learning to actually play music because he felt it was too time-consuming.  We might say that in his personal life, when his taste and his genius came into conflict, Kant chose to sacrifice his taste.

I think I will, also.  In Rock Band, the drums are notoriously the most difficult instrument to play well.  It is also the faux instrument that most resembles the real thing, and it is claimed by some that if you become a good virtual drummer, you will also in the process become a good real drummer.  I’ve tried it but I can’t get beyond the Intermediate level.  I can sing and play guitar on hard, but the drums have a sublime complexity that exceeds my ability to cope.  With uncanny timing, Wired magazine has come out with a walkthrough for the drums in Rock Band (h/t to lifehacker.com).  It mostly concerns working with the kick pedal and two alternative techniques, heel-up and heel-down (wax-on/wax-off?), for working with it.  It involves a bit of geometry and a lot of implicit physics.  I would have liked a little more help with figuring out the various rhythm techniques, but according to Wired, I would get the best results by simply learning real drum techniques, either with an instructor or through YouTube.

I wonder what Kant would say about that.

A Sequel to Wagner’s "Effective C#" in the works


Can a sequel be better than the original?  With movies this is usually not the case, though we are all holding our breath for the new installment in the Indiana Jones franchise.  Technical books, however, are a different matter.  They have to be updated on a regular basis because the technology changes so rapidly.  My bookshelf is full of titles like Learning JAVA 1.3 and Professional Active Server Pages 2.0 which, to be frank, are currently useless.  Worse, they are heavy and take up a lot of room.  I’ve tried to throw them away, but the trash service refuses to take them due to environmental concerns, and there isn’t a technical books collection center in my area.  In Indiana Jones and the Last Crusade (made before the word "Crusade" got a bad rap) there is a comic scene of a book burning in Berlin, and though I am not in favor of book burnings in general — you’d think we would have learned our lesson after the Library of Alexandria burned down — still, occasionally, I dream of building a bonfire around COM Programming for Dummies and its ilk.

Scott Hanselman recently posted asking about the great technical books of the past ten years, and one of the titles that came up repeatedly is Bill Wagner’s Effective C#: 50 Specific Ways to Improve Your C#.  The book is great for .NET programmers because it goes beyond simply explaining how to write Hello, world! programs and instead tries to show how one can become a better developer.  The conceit of the book is simple.  For each of his 50 topics, he explains that there are at least two ways to accomplish a given task, and then explains why you should prefer one way to the other.  In the process of going through five or six of these topics, the reader comes to realize that what Bill Wagner is actually doing is explaining what makes for good code, and when both paths are equally good, what makes for elegant code.  This helps the reader to form a certain habit of thinking concerning his own code.  The novice programmer is constantly worried about finding the right way to write code.  The experienced programmer already knows the various right ways to do a given task, and becomes preoccupied with finding the better way.

The way I formulated that last thought is a bit awkward.  I think I could have written it better.  A semicolon is probably in order, and the sentences should be shorter.  Perhaps

The novice programmer is preoccupied with finding the right way to perform a task; the experienced programmer knows that there are various right ways, and is more concerned with finding the most elegant way.

or maybe

The novice is preoccupied with finding the right way to get something done; the expert is aware that in programming there are always many paths, and his objective is to find the most elegant one.

Alas, I am no La Rochefoucauld, but you get the idea.  This is something that prose writers have always considered a part of their craft.  Raymond Queneau once wrote an amazing book that simply takes the same scene on a bus and reformulates it ninety-nine times.  Perhaps Amazon can pair up Bill Wagner’s Effective C# with Queneau’s Exercises in Style in one of their "…or buy both for only…" deals, since they effectively reinforce the same point in two different genres, to wit: there is no best way to write, but there is always a better way.
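
In C#, the same exercise might look something like this; it is not an example from Wagner’s book, just the familiar string-building case in which both versions are right and one is better:

    using System.Text;

    public static class Formatting
    {
        // Works, but allocates a brand new string on every pass through the loop.
        public static string JoinWithCommas(string[] items)
        {
            string result = "";
            foreach (string item in items)
                result += item + ",";
            return result.TrimEnd(',');
        }

        // Also works, and is the better way once the list gets long:
        // StringBuilder appends into a single growing buffer.
        public static string JoinWithCommasBetter(string[] items)
        {
            StringBuilder sb = new StringBuilder();
            foreach (string item in items)
            {
                if (sb.Length > 0) sb.Append(',');
                sb.Append(item);
            }
            return sb.ToString();
        }
    }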

If you do get on a Queneau kick, moreover, then I highly recommend this book, a pulp novel about Irish terrorists, which has a remarkably un-PC title, and for which reason I am not printing it here.  I assure you, the contents are better than the title.

The only shortcoming of Bill Wagner’s book is that it was written for C# 1.0, while we are currently at iteration 3.0.  It is still a remarkably useful book that has aged well — but alas, it has aged.  It was with great excitement, then, that I read on Bill’s blog that he is currently working on a title called More Effective C#, available for pre-order on Amazon and as a Rough Cut on SafariBooksOnline.

The current coy subtitle is (#TBD) Specific Ways to Improve Your C#. To fulfill the promise implicit in the book’s title, More Effective C#, doesn’t the final #TBD number of Specific Ways have to be at least 51?

Using statements with unknown types

I recently came across some interesting code in Juval Löwy’s Programming WCF Services and wanted to share.  It’s simply something I had never run across before:

 

    IMyContract proxy = new MyContractClient();
    using (proxy as IDisposable)
    {
        proxy.MyMethod();
    }

 

The first thing to notice is that the proxy object is instantiated outside of the using block.  I don’t think I’ve ever actually tried this, but it is perfectly permissible (if not recommended).  I used a disassembler to look at the IL this generates, and it is pretty much the same as instantiating the proxy object inside of the using brackets.  The main difference is that in this case, the scope of the proxy object extends beyond the using block.
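
To make the scope point concrete, here is a small sketch reusing the same hypothetical proxy types from the snippet above (and assuming, as with WCF’s ClientBase-derived proxies, that MyContractClient itself implements IDisposable):

    IMyContract proxy = new MyContractClient();
    using (proxy as IDisposable)
    {
        proxy.MyMethod();
    }
    // proxy is still in scope here -- already disposed, but visible to the compiler.

    using (MyContractClient scoped = new MyContractClient())
    {
        scoped.MyMethod();
    }
    // scoped is out of scope here; referencing it would be a compile error.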

Within the using brackets, this code casts the proxy object to the IDisposable interface so the Dispose method will be available.  Since a using block is basically syntactic sugar for a try/finally structure that calls an object’s Dispose method in the finally block, the equivalent try/finally block would look like this:

 

    IMyContract proxy = new MyContractClient();
    try
    {
        proxy.MyMethod();
    }
    finally
    {
        ((IDisposable)proxy).Dispose();
    }

 

However, Juval’s using statement does one additional thing.  It also checks to see if the proxy object even implements the IDisposable interface.  If it does, then the Dispose method is called on it.  If it does not, then nothing happens in the finally block.  The equivalent full-blown code, then, would actually look something like this:

 

    IMyContract proxy = new MyContractClient();
    try
    {
        proxy.MyMethod();
    }
    finally
    {
        IDisposable disposable = proxy as IDisposable;
        if (disposable != null)
        {
            disposable.Dispose();
        }
    }

 

… and we’ve condensed it to this …

 

    IMyContract proxy = new MyContractClient();
    using (proxy as IDisposable)
    {
        proxy.MyMethod();
    }

 

It’s probably not something that will come up too often, but if you have a situation in which you do not know whether an object implements IDisposable, and you still want a using block for readability and good coding practice, this is how you would go about doing it.

Besides Juval’s proxy example, I can imagine it coming in handy when dealing with collections in which you don’t necessarily know whether all of the members of the collection implement IDisposable, for instance:

 

    foreach (IDog dog in myDogsCollection)
    {
        using (dog as IDisposable)
        {
            dog.Bark();
        }
    }

 

It also just looks really cool.  h/t to Bill Ryan for pointing this out to me.

Learning Silverlight: Day Seven


The seventh day of any project should always be devoted to rest and reflection.

There is a passage from James Watson’s account about discovering the structure of DNA, The Double Helix, in which Watson decides that he wants to study X-ray crystallography (the technique that eventually leads to the discovery of the double helix structure).  He is told by an authority that because the field is so new, there are only five papers on the subject worth reading — five papers to read in order to master an entire field of science!

This is also the current state of Silverlight development.  The final framework is not yet out, the books are still being written as authors hedge their bets on what will be included in the RTM of Silverlight 2.0, and there are no best practices.  Moreover, Silverlight development is so different from what has gone before that no one has a particular leg up on anyone else.  The seasoned ASP.NET developer and the kid fresh out of college are in the same position: either one can become a master of Silverlight, or simply let it slide by and work on other things instead.

So is Silverlight worth learning?  It basically fills in two pre-existing development domains.  One is Flash, and the other is Ajax web sites.  It can improve on Ajax web sites by offering a simpler programming model and by not being dependent on JavaScript (as of the 2.0 beta release), which tends to be brittle.  The UpdatePanel in ASP.NET AJAX and the Ajax Control Toolkit have made component-based programming of Ajax web sites easier, but if you ever read the Microsoft Ajax web forums, you’ll quickly see that it still isn’t easy enough, and people supporting sites that have been up for a year are starting to come forward with their maintenance nightmares.  The introduction of the Microsoft MVC framework raises further questions about whether the webform techniques that so many Microsoft-centric web developers have been working with will continue to be useful in the future.

Silverlight, in some sense, competes with Flash, since it is a web-based vector graphics rendering framework.  It is more convenient than Flash, for many developers, since it can be programmed in .NET languages like C# and VB, rather than requiring a proprietary language like ActionScript.  Even better, it does something that Flash does not do easily.  Silverlight talks to data, and it does so without requiring an expensive server to make this possible.

When you are thinking about Silverlight, then, it is appropriate to think of a business application with a Flash-like front-end.  This is what it promises, and the technology’s success will rise or fall on its ability to make this happen.

So if you believe in this promise with, say, 65% to 75% conviction, then you will want to learn Silverlight.  There are currently about 5 articles worth reading about it, and they can all be found here.  Most other tutorials you will find on the Internet simply deal with bits and pieces of this information, or else try to pull those bits and pieces together to write cool applications.

But after that, what?  The best thing to do is to start writing applications of your own.  No company is likely to give you a mandate to do this, so you will need to come up with your own project and start chipping away at it.  The easiest path is to try to copy things that have gone before, but with Silverlight, to see if it can be done.  Many people are currently trying to write games that have already been written better in Flash.  This is a great exercise, and an excellent way to get to know the Storyboard element in XAML.  It doesn’t really demonstrate any particular Silverlight capabilities, however.  It’s pretty much just an "I can do it, too" sort of exercise.

A different route can be taken by rewriting a data-aware application that is currently done in WinForms or ASP.NET AJAX, and seeing what happens when you do it in Silverlight instead.  Not as cool as writing games, of course, but it has a bigger wallop in the long run.  This will involve getting to know the various controls that are available for Silverlight and figuring out how to get data-binding working.  (Personally, I’m going to start playing with various interactive fiction frameworks and see how far I can get with that.  It’s a nice project for me in that it brings together both games programming (without fancy graphics) and data-aware applications.)

Finally, after getting through the various Microsoft materials and reading the various books from APress and Wrox and others that will come out shortly, where does one go to keep up with Silverlight techniques and best practices?