Authority as Anti-Pattern

[image: authority]

There has been a recent spate of posts about authority in the world of software development, with some prominent software bloggers denying that they are authorities.  They prefer to be thought of as intense amateurs.

I worked backwards to this problematic of authority starting with Jesse Liberty.  Liberty writes reference books on C# and ASP.NET, so he must be an authority, right?  And if he’s not an authority, why should I read his books?  This led to Scott Hanselman, to Alastair Rankine, and finally to Jeff Atwood at CodingHorror.com.

The story, so far, goes like this.  Alastair Rankine posts that Jeff Atwood has jumped the shark on his blog by setting himself up as some sort of authority.  Atwood denies that he is any sort of authority, and tries to cling to his amateur status like a Soviet-era Olympic pole vaulter.  Scott Hanselman chimes in to insist that he is also merely an amateur, and Jesse Liberty (who is currently repackaging himself from a C# guru into a Silverlight guru) does an h/t to Hanselman’s post.  Hanselman also channels Martin Fowler, saying that he is sure Fowler would also claim amateur status.

Why all this suspicion of authority?

The plot thickens: accused by Rankine of acting like an authority, Atwood offers the apologia that he is indeed merely "acting".

"It troubles me greatly to hear that people see me as an expert or an authority…

"I suppose it’s also an issue of personal style. To me, writing without a strong voice, writing filled with second guessing and disclaimers, is tedious and difficult to slog through. I go out of my way to write in a strong voice because it’s more effective. But whenever I post in a strong voice, it is also an implied invitation to a discussion, a discussion where I often change my opinion and invariably learn a great deal about the topic at hand. I believe in the principle of strong opinions, weakly held…"

To sum up, Atwood isn’t a real authority, but he plays one on the Internet.

Here’s the flip side to all of this.  Liberty, Hanselman, Atwood, Fowler, et al. have made great contributions to software development.  They write good stuff, not only in the sense of being entertaining, but also in the sense that they shape the software development "community" and how software developers — from architects down to lowly code monkeys — think about coding and think about the correct way to code.  In any other profession, this is the very definition of "authority".

In literary theory, this is known as authorial angst.  It occurs when an author doesn’t believe in his own project.  He does what he can, and throws it out to the world.  If his work achieves success, he is glad for it, but takes it as a chance windfall, rather than any sort of validation of his own talents.  Ultimately, success is a bit perplexing, since there are so many better authors who never achieved success in their own times, like Celine or Melville.

One of my favorite examples of this occurs early in Jean-Francois Lyotard’s The Postmodern Condition in which he writes that he knows the book will be very successful, if only because of the title and his reputation, but …  The most famous declaration of authorial angst is found in Mark Twain’s notice inserted into The Adventures of Huckleberry Finn:

"Persons attempting to find a motive in this narrative will be prosecuted; persons attempting to find a moral in it will be banished; persons attempting to find a plot in it will be shot."

In Jeff Atwood’s case, the authorial angst seems to take the following form: Jeff may talk like an authority, and you may take him for an authority, but he does not consider himself one.  If treating him like an authority helps you, then that’s all well and good.  And if it raises money for him, then that’s all well and good, too.  But don’t use his perceived authority as a way to impugn his character or to discredit him.  He never claimed to be one.  Other people are doing that.

[The French existentialists are responsible for translating Heidegger’s term angst as ennui, by the way, which has a rather different connotation (N is for Neville who died of ennui).  In a French translation class I took in college, we were obliged to try to translate ennui, which I did rather imprecisely as "boredom".  A fellow student translated it as "angst", for which the seminar tutor accused her of tossing the task of translation over the Maginot line.  We finally determined that the term is untranslatable.  Good times.]

The problem these authorities have with authority may be due to the fact that authority is a role.  In Alasdair MacIntyre’s After Virtue, a powerful critique of what he considers to be the predominant ethical philosophy of modern times, Emotivism, MacIntyre argues that the main characters (in Shaftesbury’s sense) of modernity are the Aesthete, the Manager and the Therapist.  The aesthete replaces morals as an end with a love of patterns as an end.  The manager eschews morals for competence.  The therapist overcomes morals by validating our choices, whatever they may be.  These characters are made possible by the notion of expertise, which MacIntyre claims is a relatively modern invention.

"Private corporations similarly justify their activities by referring to their possession of similar resources of competence.  Expertise becomes a commodity for which rival state agencies and rival private corporations compete.  Civil servants and managers alike justify themselves and their claims to authority, power and money by invoking their own competence as scientific managers of social change.  Thus there emerges an ideology which finds its classical form of expression in a pre-existing sociological theory, Weber’s theory of bureaucracy."

To become an authority, one must begin behaving like an authority.  Some tech authors such as Jeffrey Richter and Juval Lowy actually do this very well.  But sacrifices have to be made in order to be an authority, and it may be that this is what the anti-authoritarians of the tech world are rebelling against.  When one becomes an authority, one must begin to behave differently.  One is expected to have a certain realm of competence, and when one acts authoritatively, one imparts this sense of confidence to others: to developers, as well as the managers who must oversee developers and justify their activities to upper management.

Upper management is always already a bit suspicious of the software craft.  They tolerate certain behaviors in their IT staff on the assumption that those staff can get things done, and every time a software project fails, they justifiably feel that they are being hoodwinked.  How would they feel about this trust relationship if they found out that one of the figures their developers hold up as an authority is writing this:

"None of us (in software) really knows what we’re doing. Buildings have been built for thousands of years and software has been an art/science for um, significantly less (yes, math has been around longer, but you know.) We just know what’s worked for us in the past."

This resistance to putting on the role of authority is understandable.  Once one puts on the hoary robes required of an authority figure, one can no longer be oneself anymore, or at least not the self one was before.  Patrick O’Brian describes this emotion perfectly as he has Jack Aubrey take command of his first ship in Master and Commander.

"As he rowed back to the shore, pulled by his own boat’s crew in white duck and straw hats with Sophie embroidered on the ribbon, a solemn midshipman silent beside him in the sternsheets, he realized the nature of this feeling.  He was no longer one of ‘us’: he was ‘they’.  Indeed, he was the immediately-present incarnation of ‘them’.  In his tour of the brig he had been surrounded with deference — a respect different in kind from that accorded to a lieutenant, different in kind from that accorded to a fellow human being: it had surrounded him like a glass bell, quite shutting him off from the ship’s company; and on his leaving the Sophie had let out a quiet sigh of relief, the sigh he knew so well: ‘Jehovah is no longer with us.’

"‘It is the price that has to be paid,’ he reflected."

It is the price to be paid not only in the Royal Navy during the age of wood and canvas, but also in established modern professions such as architecture and medicine.  All doctors wince at recalling the first time they were called "doctor" while they interned.  They do not feel they have the right to wear the title, much less be consulted over a patient’s welfare.  They feel intensely that this is a bit of a sham, and the feeling never completely leaves them.  Throughout their careers, they are asked to make judgments that affect the health, and often even the lives, of their patients — all the time knowing that theirs is a human profession, and that mistakes get made.  Every doctor bears the burden of eventually killing a patient due to a bad diagnosis or a bad prescription or simply through lack of judgment.  Yet bear it they must, because gaining the confidence of the patient is also essential to the patient’s welfare, and the world would likely be a sorrier place if people didn’t trust doctors.

So here’s one possible analysis: the authorities of the software engineering profession need to man up and simply be authorities.  Of course there is bad faith involved in doing so.  Of course there will be criticism that they are frauds.  Of course they will be obliged to give up some of the ways they relate to fellow developers once they do so.  This is true in every profession.  At the same time, every profession needs its authorities.  Authority holds a profession together, and it is what distinguishes a profession from mere labor.  The gravitational center of any profession is the notion that there are ways things are done, and there are people who know what those ways are.  Without this perception, any profession will fall apart, and we will indeed be merely playaz taking advantage of middle management and making promises we cannot fulfill.  Expertise, ironically, explains and justifies our failures, because we are able to interpret failure as a lack of this expertise.  We then drive ourselves to be better.  Without the perception that there are authorities out there, muddling and mediocrity become the norm, and we begin to believe that not only can we not do better, but we aren’t even expected to.

This is a traditionalist analysis.  I have another possibility, however, which can only be confirmed through the passage of time.  Perhaps the anti-authoritarian impulse of these crypto-authorities is a revolutionary legacy of the soixante-huitards.  From Guy Sorman’s essay about May ’68, whose fortieth anniversary passed unnoticed:

"What did it mean to be 20 in May ’68? First and foremost, it meant rejecting all forms of authority—teachers, parents, bosses, those who governed, the older generation. Apart from a few personal targets—General Charles de Gaulle and the pope—we directed our recriminations against the abstract principle of authority and those who legitimized it. Political parties, the state (personified by the grandfatherly figure of de Gaulle), the army, the unions, the church, the university: all were put in the dock."

Just because things have been done one way in the past doesn’t mean this is the only way.  Just because authority and professionalism are intertwined in every other profession, and perhaps can no longer be unraveled at this point, doesn’t mean we can’t try to do things differently in a young profession like software engineering.  Is it possible to build a profession around a sense of community, rather than the restraint of authority?

I once read a book of anecdotes about the 60’s, one of which recounts a dispute between two groups of people in the inner city.  The argument is about to come to blows when someone suggests calling the police.  This sobers everyone up, and with cries of "No pigs, no pigs" the disputants resolve their differences amicably.  The spirit that inspired this scene, this spirit of authority as anti-pattern, is no longer so ubiquitous, and one cannot really imagine civil disputes being resolved in such a way anymore.  Still, the notion of a community without authority figures is a seductive one, and it may even be doable within a well-educated community such as the web-based world of software developers.  Perhaps it is worth trying.  The only thing that concerns me is how we are to maintain the confidence of management as we run our social experiment.

What is Service Oriented Architecture?

[image: La Condition Humaine]

What is SOA?  It is currently the hottest thing going on in corporate technology, and promises to simultaneously integrate disparate applications on multiple platforms as well as provide code reuse to all of those platforms.  According to Juval Lowy, it is the culmination of a 20 year project to enable true component-based design — in other words, the fulfillment of COM, rather than merely its replacement.  Others see it as a threat to object oriented programming. According to yet others, it is simply the wave of the future.  Rocky Lhotka recently remarked at a users-group meeting that it reminds him of mainframe programming.  In Windows Communication Foundation Unleashed, the authors write somewhat uncharitably:

"Thomas Erl, for instance, published two vast books ostensibly on the subject, but never managed to provide a noncircuitous definition of the approach in either of them."

This diversity of opinion, I believe, gives me an opening to offer my own definition of SOA.  SOA is, put simply, the triumph of the Facade pattern.

In the 90’s, Erich Gamma, Ralph Johnson, John Vlissides and Richard Helm popularized the notion of the 23 fundamental design patterns of object oriented programming.  I’ve often wondered why they came up with 23 patterns.  Some, such as the Flyweight pattern, are simply never used.  At the same time, one of the most popular patterns, MVP, doesn’t even make the canonical list.  How did they come up with 23?

Here’s an article on the significance of the number 23 which may or may not shed light on the Gang of Four’s motivation.  In Peter Greenaway’s A Zed and Two Noughts, the characters become obsessed with the number 23, and claim that there are 23 letters in the Greek alphabet and that Vermeer created 23 paintings (both false, by the way).  Perhaps the Gang of Four are Discordians — Discordians are fascinated by what they call the 23 Enigma.

In any case, they came up with 23 canonical (or "fundamental" or "classic") design patterns, and in the past decade, knowing these patterns has become the unofficial dividing line between the common run of code monkeys (I use the term affectionately) and so-called "true" developers — the initiation rite that turns boy programmers into men.  Anyone in development who wants to be anybody makes the attempt to learn them, but for whatever reason, the 23 patterns resist the attempt — sometimes because it is difficult to see how you would ever actually use them.  It helps, however, to remember that the StringBuilder type in C# is based on the Builder pattern, and that the Clone method on most types implements the Prototype pattern.  Delegates are built around the Observer pattern and collections are built around the Iterator pattern — but since these are both basically part of the C# language, among others, you don’t really need to learn them anymore.  In my opinion, the most useful patterns are the Template Method and the Factory Method.  The Singleton pattern, on the other hand, starts off seeming like a useful pattern but turns out not to be — a bit like a bad joke one eventually tires of.  It is, however, easy to remember, if somewhat tricky to implement.
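Since I’ve singled out the Factory Method as one of the most useful of the canon, a minimal sketch may help; the Document and creator classes here are my own invention, purely for illustration:

```csharp
// A minimal Factory Method sketch: the abstract creator defers the
// choice of concrete product to its subclasses.
public abstract class Document
{
    public abstract string Render();
}

public class Invoice : Document
{
    public override string Render() { return "Invoice"; }
}

public abstract class DocumentCreator
{
    // The factory method proper: subclasses decide which Document to build.
    public abstract Document CreateDocument();

    // Shared logic in the base class works against the abstraction.
    public string Print()
    {
        Document doc = CreateDocument();
        return doc.Render();
    }
}

public class InvoiceCreator : DocumentCreator
{
    public override Document CreateDocument() { return new Invoice(); }
}
```

Calling new InvoiceCreator().Print() runs the base class’s logic while letting the subclass decide which concrete type gets built — which is all the Factory Method really is.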

The one pattern no one ever fails to remember is the Facade pattern.  It doesn’t do anything clever with abstract base classes or interfaces.  It doesn’t have tricky implementation details.  It simply takes the principle of encapsulation and goes crazy with it.  Whatever complicated code you have, you place it behind a wall of code, called the Facade, which provides methods to manipulate your "real" code.  It’s the sort of pattern which, like Monsieur Jourdain, once you find out about it you realize you’ve been doing it all your life.  The simplicity and ubiquity of the Facade makes it an unattractive pattern — it takes no programming acumen to learn it; it requires great effort to avoid it.  It is the dumbest of the 23 canonical design patterns.
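To make the dumbness concrete, here is a minimal sketch; the subsystem classes (inventory, payments, shipping) are invented for illustration:

```csharp
// Hypothetical subsystem classes with granular, fiddly APIs.
public class InventoryService
{
    public bool Reserve(string sku) { return true; }
}

public class PaymentGateway
{
    public bool Charge(decimal amount) { return true; }
}

public class ShippingService
{
    public void Schedule(string sku) { }
}

// The Facade: one flat method that hides the orchestration.
public class OrderFacade
{
    private readonly InventoryService inventory = new InventoryService();
    private readonly PaymentGateway payments = new PaymentGateway();
    private readonly ShippingService shipping = new ShippingService();

    public bool PlaceOrder(string sku, decimal amount)
    {
        if (!inventory.Reserve(sku)) return false;
        if (!payments.Charge(amount)) return false;
        shipping.Schedule(sku);
        return true;
    }
}
```

Nothing clever happens here: the facade simply flattens three calls into one coarse-grained method, which is exactly the shape a service interface wants to be.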

And Service Oriented Architecture is all built around it.  In some sense, SOA marks the democratization of architecture.  There are still tricks to planning a good SOA, and securing it may require some sophistication — but with SOA, anyone can be an architect.  Well … anyone who can build a Facade.

Agile Methodology and Promiscuity

[image: Jacques Lacan]

The company I am currently consulting with uses Scrum, a kind of Agile methodology.  I like it.  Its main features are index cards taped to a wall and quick "sprints", or development cycles.  Scrum’s most peculiar feature is the notion of a "Scrum Master", which makes me feel dirty whenever I think of it.  It’s so much a part of the methodology, however, that you can even become certified as a "Scrum Master", and people will put it on their business cards.  Besides Scrum, other Agile methodologies include Extreme Programming (XP) and the Rational Unified Process (RUP), which is actually more of a marketing campaign than an actual methodology — but of course you should never ever say that to a RUP practitioner.

The main thing that seems to unify these Agile methodologies is the fact that they are not Waterfall.  And because Waterfall is notoriously unsuccessful, except when it is successful, Agile projects are generally considered to be successful, except when they aren’t.  And when they aren’t, there are generally two explanations that can be given for the lack of success.  First, the flavor of Agile being practiced wasn’t practiced correctly.  Second, the agile methodology was followed too slavishly, when at the heart of agile is the notion that it must be adapted to the particular qualities of a particular project.

In a recent morning stand up (yet another Scrum feature) the question was raised about whether we were following Scrum properly, since it appeared to some that we were introducing XP elements into our project management.  Even before I had a chance to think about it, I found myself appealing to the second explanation of Agile and arguing that it was a danger to apply Scrum slavishly.  Instead, we needed to mix and match to find the right methodology for us.

A sense of shame washed over me even as I said it, as if I were committing some fundamental category mistake.  However, my remarks were accepted as sensible and we moved on.

For days afterward, I obsessed about the cause of my sense of shame.  I finally worked it up to a fairly thorough theory.  I decided that it was rooted in my undergraduate education and the study of Descartes, who claimed that just as a city designed by one man is eminently more rational than one built through aggregation over ages, so the following of a single method, whether right or wrong, will lead to more valid results than philosophizing willy-nilly ever will.  I also thought of how Kant always filled me with a sense of contentment, whereas Hegel, who famously said against Kant that whenever we attempt to draw lines we always find ourselves crossing over them, always left me feeling uneasy and disoriented.  Along with this was the inappropriate (philosophically speaking) recollection that Kant died a virgin, whereas Hegel’s personal life was marked by drunkenness and carousing.  Finally I thought of Nietzsche, whom Habermas characterized as one of the "dark" philosophers for, among other things, insisting that one set of values is as good as another and, even worse, arguing in The Genealogy of Morals that what we consider to be noble in ourselves is in fact base, and what we consider moral weakness is in fact spiritual strength — a transvaluation of all values.  Nietzsche not only crossed the lines, but so thoroughly blurred them that we are still trying to recover them after almost a century and a half.

But lines are important to software developers — we who obsess about interfaces and abhor namespace collisions the way Aristotle claimed nature abhors a vacuum — as if there were nothing worse than the same word meaning two different things.  We are also obsessed with avoiding duplication of code — as if the only thing worse than the same word meaning two different things is the same thing being represented by two different words.  What a reactionary, prescriptivist, neurotic bunch we all are.

This seemed to explain it for me.  I’ve been trained to revere the definition, and to form fine demarcations in my mind.  What could be more horrible, then, than to casually introduce the notion that not only can one methodology be exchanged for another, but that they can be mixed and matched as one sees fit?  Like wearing a brown belt with black shoes, this fundamentally goes against everything I’ve been taught to believe not only about software, but also about the world.  If we allow this one thing, it’s a slippery slope to Armageddon and the complete dissolution of civil society.

Then I recalled Slavoj Zizek’s introduction to one of his books about Jacques Lacan (pictured above), and a slightly different sense of discomfort overcame me.  I quote it in part:

I have always found extremely repulsive the common practice of sharing the main dishes in a Chinese restaurant.  So when, recently, I gave expression to this repulsion and insisted on finishing my plate alone, I became the victim of an ironic "wild psychoanalysis" on the part of my table neighbor: is not this repulsion of mine, this resistance to sharing a meal, a symbolic form of the fear of sharing a partner, i.e., of sexual promiscuity?  The first answer that came to my mind, of course, was a variation on de Quincey’s caution against the "art of murder" — the true horror is not sexual promiscuity but sharing a Chinese dish: "How many people have entered the way of perdition with some innocent gangbang, which at the time was of no great importance to them, and ended by sharing the main dishes in a Chinese restaurant!"

The Aesthetics and Kinaesthetics of Drumming

[image: sheet music]

Kant’s Critique of Judgment, also known as the Third Critique since it follows the first on Reason and the second on Morals, is a masterpiece in the philosophy of aesthetics.  With careful reasoning, Kant examines the experience of aesthetic wonder, The Sublime, and attempts to relate it to the careful delineations he has made in his previous works between the phenomenal and noumenal realms.  He appears to allow in the Third Critique what he denies us in the First: a way to go beyond mere experience in order to perceive a purpose in the world.  Along the way, he passes judgment on things like beauty and genius that left an indelible mark on the Romanticism of the 19th century.

"Taste, like the power of judgment in general, consists in disciplining (or training) genius.  It severely clips its wings, and makes it civilized, or polished; but at the same time it gives it guidance as to how far and over what it may spread while still remaining purposive.  It introduces clarity and order into a wealth of thought, and hence makes the ideas durable, fit for approval that is both lasting and universal, and hence fit for being followed by others…"

Kant goes on to say that where taste and genius conflict, a sacrifice needs to be made on the side of genius.

In his First Critique, Kant discusses the "scandal of philosophy" — that after thousands of years philosophers still cannot prove what every simple person knows — that the external world is real.  There are other scandals, too, of course.  There are many questions which, after thousands of years, philosophers continue to argue over and, ergo, for which they have no definitive answers.  There are also the small scandals which give an aspiring philosophy student pause, and make him wonder if the philosophizing discipline isn’t a fraud and a sham after all, such as Martin Heidegger’s Nazi affiliation.  Here the question isn’t why he didn’t realize what every simple German should have known, since even the simple Germans were quite taken up with the movement.  What leaves a bad taste, however, is the sense that a great philosopher should have known better.

A minor scandal concerns Immanuel Kant’s infamous lack of taste.  When it came to music, he seems to have had a particular fondness for martial music, das heißt, marching bands with lots of drumming and brass.  He discouraged his students from learning to actually play music because he felt it was too time consuming.  We might say that in his personal life, when his taste and his genius came into conflict, Kant chose to sacrifice his taste.

I think I will, also.  In Rock Band, the drums are notoriously the most difficult instrument to play well.  It is also the faux instrument that most resembles the real thing, and it is claimed by some that if you become a good virtual drummer, you will also in the process become a good real drummer.  I’ve tried it but I can’t get beyond the Intermediate level.  I can sing and play guitar on hard, but the drums have a sublime complexity that exceeds my ability to cope.  With uncanny timing, Wired magazine has come out with a walkthrough for the drums in Rock Band (h/t to lifehacker.com).  It mostly concerns working with the kick pedal and two alternative techniques, heel-up and heel-down (wax-on/wax-off?), for working with it.  It involves a bit of geometry and a lot of implicit physics.  I would have liked a little more help with figuring out the various rhythm techniques, but according to Wired, I would get the best results by simply learning real drum techniques, either with an instructor or through YouTube.

I wonder what Kant would say about that.

A Sequel to Wagner’s "Effective C#" in the works

[image: Indiana Jones fedora]

Can a sequel be better than the original?  With movies this is usually not the case, though we are all holding our breath for the new installment in the Indiana Jones franchise.  Technical books, however, are a different matter.  They have to be updated on a regular basis because the technology changes so rapidly.  My bookshelf is full of titles like Learning JAVA 1.3 and Professional Active Server Pages 2.0 which, to be frank, are currently useless.  Worse, they are heavy and take up a lot of room.  I’ve tried to throw them away, but the trash service refuses to take them due to environmental concerns, and there isn’t a technical books collection center in my area.  In Indiana Jones and the Last Crusade (made before the word "Crusade" got a bad rap) there is a comic scene of a book burning in Berlin, and though I am not in favor of book burnings in general — you’d think we would have learned our lesson after the Library of Alexandria burned down — still, occasionally, I dream of building a bonfire around COM Programming for Dummies and its ilk.

Scott Hanselman recently posted asking about the great technical books of the past ten years, and one of the titles that came up repeatedly is Bill Wagner’s Effective C#: 50 Specific Ways to Improve Your C#.  The book is great for .NET programmers because it goes beyond simply explaining how to write Hello, world! programs, and instead tries to show how one can become a better developer.  The conceit of the book is simple.  For each of his 50 topics, he explains that there are at least two ways to accomplish a given task, and then explains why you should prefer one way to the other.  In the process of going through five or six of these topics, the reader comes to realize that what Bill Wagner is actually doing is explaining what makes for good code, and when both paths are equally good, what makes for elegant code.  This helps the reader to form a certain habit of thinking concerning his own code.  The novice programmer is constantly worried about finding the right way to write code.  The experienced programmer already knows the various right ways to do a given task, and becomes preoccupied with finding the better way.

The way I formulated that last thought is a bit awkward.  I think I could have written it better.  A semicolon is probably in order, and the sentences should be shorter.  Perhaps

The novice programmer is preoccupied with finding the right way to perform a task; the experienced programmer knows that there are various right ways, and is more concerned with finding the most elegant way.

or maybe

The novice is preoccupied with finding the right way to get something done; the expert is aware that in programming there are always many paths, and his objective is to find the most elegant one.

Alas, I am no La Rochefoucauld, but you get the idea.  This is something that prose writers have always considered a part of their craft.  Raymond Queneau once wrote an amazing book that simply takes the same scene on a bus and reformulates it ninety-nine times.  Perhaps Amazon can pair up Bill Wagner’s Effective C# with Queneau’s Exercises in Style in one of their "…or buy both for only…" deals, since they effectively reinforce the same point in two different genres, to wit: there is no best way to write, but there is always a better way.

If you do get on a Queneau kick, moreover, then I highly recommend this book, a pulp novel about Irish terrorists, which has a remarkably un-PC title, and for which reason I am not printing it here.  I assure you, the contents are better than the title.

The only shortcoming of Bill Wagner’s book is that it was written for C# 1.0, while we are currently at iteration 3.0.  It is still a remarkably useful book that has aged well — but alas, it has aged.  It was with great excitement, then, that I read on Bill’s blog that he is currently working on a title called More Effective C#, available for pre-order on Amazon and as a Rough Cut on SafariBooksOnline.

The current coy subtitle is (#TBD) Specific Ways to Improve Your C#. To fulfill the promise implicit in the book’s title, More Effective C#, doesn’t the final #TBD number of Specific Ways have to be at least 51?
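In the spirit of Wagner’s conceit — two ways to accomplish the same task, one of them preferable — here is a small illustration of my own (the CsvWriter class is my invention, not an example from the book):

```csharp
using System.Text;

public static class CsvWriter
{
    // Works, but allocates a brand-new string on every iteration.
    public static string JoinNaive(string[] items)
    {
        string result = "";
        foreach (string item in items)
        {
            result += item + ",";
        }
        return result.TrimEnd(',');
    }

    // The preferable way: StringBuilder mutates one buffer in place
    // (and is itself the Builder pattern at work).
    public static string JoinBetter(string[] items)
    {
        StringBuilder sb = new StringBuilder();
        foreach (string item in items)
        {
            if (sb.Length > 0) sb.Append(',');
            sb.Append(item);
        }
        return sb.ToString();
    }
}
```

Both methods return the same string; the second simply gets there more elegantly, which is precisely the habit of mind the book tries to instill.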

Using statements with unknown types

I recently came across some interesting code in Juval Löwy’s Programming WCF Services and wanted to share.  It’s simply something I had never run across before:

 

    IMyContract proxy = new MyContractClient();
    using (proxy as IDisposable)
    {
        proxy.MyMethod();
    }

 

The first thing to notice is that the proxy object is instantiated outside of the using block.  I don’t think I’ve ever actually tried this, but it is perfectly permissible (if not recommended).  I used a disassembler to look at the IL this generates, and it is pretty much the same as instantiating the proxy object inside of the using brackets.  The main difference is that in this case, the scope of the proxy object extends beyond the using block.

Within the using brackets, this code casts the proxy object to the IDisposable interface so the Dispose method will be available.  Since a using block is basically syntactic sugar for a try/finally structure that calls an object’s Dispose method in the finally block, the equivalent try/finally block would look like this:

 

    IMyContract proxy = new MyContractClient();
    try
    {
        proxy.MyMethod();
    }
    finally
    {
        ((IDisposable)proxy).Dispose();
    }

 

However, Juval’s using statement does one additional thing.  It also checks whether the proxy object even implements the IDisposable interface.  If it does, the Dispose method is called on it.  If it does not, nothing happens in the finally block.  The equivalent full-blown code, then, would actually look something like this:

 

    IMyContract proxy = new MyContractClient();
    try
    {
        proxy.MyMethod();
    }
    finally
    {
        IDisposable disposable = proxy as IDisposable;
        if (disposable != null)
        {
            disposable.Dispose();
        }
    }

 

… and we’ve condensed it to this …

 

    IMyContract proxy = new MyContractClient();
    using (proxy as IDisposable)
    {
        proxy.MyMethod();
    }

 

It’s probably not something that will come up too often, but if you find yourself in a situation where you do not know whether an object implements IDisposable, yet still want to use a using block for readability and good coding practice, this is how you would go about doing it.
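As a quick self-contained sanity check of the pattern (the worker types here are invented for illustration), you can verify that `using (obj as IDisposable)` calls Dispose only when the object actually implements it, and quietly does nothing otherwise:

```csharp
using System;

// Invented types for illustration: one disposable, one not.
interface IWorker { void DoWork(); }

class DisposableWorker : IWorker, IDisposable
{
    public bool Disposed;
    public void DoWork() { Console.WriteLine("working"); }
    public void Dispose() { Disposed = true; }
}

class PlainWorker : IWorker
{
    public void DoWork() { Console.WriteLine("working"); }
}

class Program
{
    static void Run(IWorker worker)
    {
        // If worker implements IDisposable, Dispose is called on exit;
        // if not, the using block receives null and does nothing.
        using (worker as IDisposable)
        {
            worker.DoWork();
        }
    }

    static void Main()
    {
        DisposableWorker d = new DisposableWorker();
        Run(d);
        Console.WriteLine(d.Disposed); // True

        Run(new PlainWorker()); // no exception, nothing to dispose
    }
}
```

Note that `using (null)` is perfectly legal in C#, which is what makes the cast-in-the-using trick safe.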

Besides Juval’s proxy example, I can imagine it coming in handy when dealing with collections in which you don’t necessarily know whether all of the members of the collection implement IDisposable, for instance:

 

    foreach (IDog dog in myDogsCollection)
    {
        using (dog as IDisposable)
        {
            dog.Bark();
        }
    }

 

It also just looks really cool.  h/t to Bill Ryan for pointing this out to me.

Learning Silverlight: Day Seven

watson-crick-dna

The seventh day of any project should always be devoted to rest and reflection.

There is a passage in James Watson’s account of discovering the structure of DNA, The Double Helix, in which Watson decides that he wants to study X-ray crystallography (the technique that eventually leads to the discovery of the double helix structure).  He is told by an authority that because the field is so new, there are only five papers on the subject worth reading — five papers to read in order to master an entire field of science!

This is also the current state of Silverlight development.  The final framework is not yet out, the books are still being written as authors hedge their bets on what will be included in the RTM of Silverlight 2.0, and there are no best practices.  Moreover, Silverlight development is so different from what has gone before that no one has a particular leg up on anyone else.  The seasoned ASP.NET developer and the kid fresh out of college are in the same position: either one can become a master of Silverlight, or simply let it slide by and work on other things instead.

So is Silverlight worth learning?  It basically fills in two pre-existing development domains.  One is Flash, and the other is Ajax web sites.  It can improve on Ajax web sites by offering a simpler programming model, and by not being dependent on JavaScript (as of the 2.0 beta release), which tends to be brittle.  The UpdatePanel in ASP.NET AJAX and the Ajax Control Toolkit have made component-based programming of Ajax web sites easier, but if you ever read the Microsoft Ajax web forums, you’ll quickly see that it still isn’t easy enough, and people supporting sites that have been up for a year are starting to come forward with their maintenance nightmares.  The introduction of the Microsoft MVC framework raises further questions about whether the webform techniques that so many Microsoft-centric web developers have been working with will continue to be useful in the future. 

Silverlight, in some sense, competes with Flash, since it is a web-based vector graphics rendering framework.  It is more convenient than Flash for many developers, since it can be programmed in .NET languages like C# and VB rather than requiring a proprietary language like ActionScript.  Even better, it does something that Flash does not do easily: Silverlight talks to data, and it does so without requiring an expensive server to make this possible. 

When you are thinking about Silverlight, then, it is appropriate to think of a business application with a Flash-like front-end.  This is what it promises, and the technology’s success will rise or fall on its ability to make this happen.

So if you believe in this promise with, say, 65% to 75% conviction, then you will want to learn Silverlight.  There are currently about five articles worth reading about it, and they can all be found here.  Most other tutorials you will find on the Internet simply deal with bits and pieces of this information, or else try to pull those bits and pieces together to write cool applications.

But after that, what?  The best thing to do is to start writing applications of your own.  No company is likely to give you a mandate to do this, so you will need to come up with your own project and start chipping away at it.  The easiest path is to try to copy things that have gone before, but in Silverlight, to see if it can be done.  Many people are currently trying to write games that have already been written better in Flash.  This is a great exercise, and an excellent way to get to know the storyboard element in XAML.  It doesn’t really demonstrate any particular Silverlight capabilities, however.  It’s pretty much just an "I can do it, too" sort of exercise. 

A different route can be taken by rewriting a data-aware application that is currently done in WinForms or ASP.NET AJAX, and seeing what happens when you do it in Silverlight instead.  Not as cool as writing games, of course, but it packs a bigger wallop in the long run.  This will involve getting to know the various controls that are available for Silverlight and figuring out how to get data-binding working.  (Personally, I’m going to start playing with various interactive fiction frameworks to see how far I can get.  It’s a nice project for me in that it brings together both games programming, without fancy graphics, and data-aware applications.)

Finally, after getting through the various Microsoft materials and reading the various books from APress and Wrox and others that will come out shortly, where does one go to keep up with Silverlight techniques and best practices?

Adjectives for the Good and the Great

ernst

A well-placed adjective can thoroughly change the meaning of a word.  Some adjectives are so powerful that the combined phrase becomes more significant than the noun the adjective modifies, leaving the unmodified noun seeming naked and weak without it.  For instance, a cop is an important member of society, but a rogue cop is a thing of legend.  An identity is good to maintain, but a secret identity is essential to maintain.  A thief is a lowly member of society, but an identity thief is lower still.  A secret identity thief is the lowest of all.

Then there are adjectives so overwhelming that they obliterate the word they modify, leaving none of the original meaning behind.  A fallen angel, after all, is a devil, and a fair-weather friend is no friend at all.

I used to work for adjectives.  You might have seen me on the side of the road holding a cardboard sign with words to that effect.  I began my career as a junior developer, then worked up to being just a developer — which, although unmodified, was significant enough that it underscored the fact that "junior" was just a kind of slur.  After developer came advanced developer, then expert developer, and finally senior developer.  There are currently lots of senior developers around and very few junior developers, unlike the way it was back in the day.  They all tend to wonder what comes after the adjective "senior".  One can become an architect, of course, but the change of theme, and the fact that it is unmodified, merely serves to impress upon everyone that architects don’t actually do any coding.  As a sort of gesture to make up for this damning with faint praise, an architect will occasionally receive a hyphenated title of developer-architect, which to my ear just makes things worse.  After senior developer, one can also become a manager of course, much the same way a Jedi padawan can become a Sith lord, but this is a path of last resort.

Our Sith overlords could meliorate the situation by simply coming up with a new adjective, of course.  I always thought awesome developer had a nice ring to it.  Recent politics, besides revealing how our democracy really works, also inspired me with a different notion.  The term super delegate left me wondering if super wouldn’t make a good modifier for the great developer.  With repetition, we may be able to gentrify that somewhat wild modifier, super, and re-appropriate it from the comic connotations that have tended to diminish it.  What better public identity is there for an über geek than super developer?

This morning, however, I was surprised to discover that there is something even more powerful in Democratic electoral politics than the super delegate.  It is the undecided super delegate.  Amazing, isn’t it, that not doing something can make a person more powerful than actually doing something?  Rather than waste their potency by declaring for one candidate or the other, these undecided are able to curry special favor by simply not deciding, not declaring, not having an opinion one way or the other.

There is a tradition in the West that the undecided are in some sense the most contemptible beings, scorned by all sides.  Before the gates of hell, Dante and Virgil encounter the third host of angels who neither sided with God nor with Satan, as well as those "who lived without or praise or blame," and perpetually lament their state.  Virgil states harshly:

These of death
No hope may entertain: and their blind life
So meanly passes, that all other lots
They envy.  Fame of them the world hath none,
Nor suffers; mercy and justice scorn them both.
Speak not of them, but look, and pass them by.

In the late Platonic school of Athens (alternatively known as the old school of Skepticism, or Pyrrhonic Skepticism), on the other hand, this suspension of affirmation was considered a moral virtue, and was called the epoche (a term later appropriated by Husserl for Phenomenology).  They found this suspension of belief so difficult, however, that they used ten argumentative tropes which they learned by heart to remind themselves that nothing should ever be asserted, lest they commit themselves to falsehood.  The true philosopher, for the Pyrrhonic skeptic, is not one who speaks the truth, but rather one who does not speak falsehoods.  Well into the modern era, one finds an echo of the Pyrrhonic tropes in Kant’s four antinomies.

Whether undecided super delegates are Pyrrhonists or Kantians I cannot say.  I choose to withhold judgment on the matter since, after all, the real intent of this post is simply to congratulate two of my colleagues in the Magenic Atlanta office on their promotions.  Through hard work and natural talent, Todd LeLoup and Douglas Marsh are now both Senior Consultants.  With such adjectives we give public praise to the good and the great among us.

Learning Silverlight: Day Six

battleships

I’ve spent today going through all of the Hands-On Labs provided on the silverlight.net site.  The five labs are basically Word documents, with some accompanying resources, covering various aspects of Silverlight 2 development.  More importantly, they are extremely well written, and serve as the missing book for learning Silverlight.  Should anyone ask you for a tech book recommendation for learning Silverlight 2 beta 1, you should definitely, emphatically, point them to these labs.  They are both comprehensive and lucid.  The labs are supposed to take an hour to an hour and a half each, so all told, they constitute approximately seven hours of work.

 

1. Silverlight Fundamentals is a great overview of the features of Silverlight and how everything fits together.  Even though it covers a lot of territory you may have come across in other material, it does it in a streamlined manner.  In reading it, I was able to make a lot of connections that hadn’t occurred to me before.  It is your basic introductory chapter.

2. Silverlight Networking and Data demonstrates various ways to get a Silverlight application to communicate with resources outside of itself using the WebClient and WebRequest classes.  I couldn’t get the WebRequest project to work, but this may very well be my fault rather than the fault of the lab author.  The lab also includes samples of connecting to RSS feeds, working with WCF and, interestingly, one exercise involving ADO.NET Data Services, a feature of the ASP.NET Extensions Preview.

3. Building Reusable Controls in Silverlight provides the best walkthrough I’ve seen not only of working with Silverlight User Controls but also with working between a Silverlight project and Microsoft Blend.  This is also the only place I’ve found that gives the very helpful tidbit of information concerning adding a namespace declaration for the current namespace in your XAML page.  I’m not sure why we have to do this, since in C# and VB a class is always aware of its namespace, and the XAML page is really just a partial class after compilation, after all — but there you are.

4. Silverlight and the Web Browser surprised me.  In principle, I want to do everything in a Silverlight app using only compiled code, but the designers of Silverlight left plenty of openings for using HTML and JavaScript to get around any possible Silverlight limitations.  This lab made me start thinking that all the time I have spent over the past two years on ASP.NET AJAX may not have been a complete waste after all.  A word of warning, though: the last three parts of this lab instruct the user to open projects included with the lab as if the user will have the chance to complete the lab using them.  It turns out that these projects include the completed versions of the lab exercises rather than the starting versions, so you don’t actually get a chance to work through these particular "hands-on" labs.  On the other hand, this is the first substantial mistake I’ve found in the labs.  Not bad.

5. Silverlight and Dynamic Animations begins with "This is a simple lawn mowing simulation…."  The sentence brims with dramatic potential.  Unlike the previous lab, the resources in Silverlight and Dynamic Animations include both a "before" and an "after" project, and it basically walks the user through using a "storyboard" to create an animation — and potentially a game.  It’s Silverlight chic.
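For reference, the namespace declaration mentioned in lab 3 looks something like this (MyApp.Controls and OtherControl are placeholder names, not from the labs):

```
<UserControl x:Class="MyApp.Controls.MyControl"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="clr-namespace:MyApp.Controls">
    <!-- the local: prefix now lets this page reference other controls
         declared in the same namespace -->
    <Grid>
        <local:OtherControl />
    </Grid>
</UserControl>
```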

In retrospect, if I had to choose only one resource from which to learn Silverlight 2, it would be these labs.  They’re clear, they’re complete, and best of all they’re free.