What is Service Oriented Architecture?

[Image: La Condition humaine]

What is SOA?  It is currently the hottest thing going in corporate technology, promising simultaneously to integrate disparate applications on multiple platforms and to provide code reuse to all of those platforms.  According to Juval Löwy, it is the culmination of a 20-year project to enable true component-based design — in other words, the fulfillment of COM, rather than merely its replacement.  Others see it as a threat to object oriented programming.  According to yet others, it is simply the wave of the future.  Rocky Lhotka recently remarked at a users-group meeting that it reminds him of mainframe programming.  In Windows Communication Foundation Unleashed, the authors write somewhat uncharitably:

"Thomas Erl, for instance, published two vast books ostensibly on the subject, but never managed to provide a noncircuitous definition of the approach in either of them."

This diversity of opinion, I believe, gives me an opening to offer my own definition of SOA.  SOA is, put simply, the triumph of the Facade pattern.

In the ’90s, Erich Gamma, Ralph Johnson, John Vlissides, and Richard Helm popularized the notion of the 23 fundamental design patterns of object oriented programming.  I’ve often wondered why they came up with 23 patterns.  Some, such as the Flyweight pattern, are simply never used.  At the same time, one of the most popular patterns, MVP, doesn’t even make the canonical list.  How did they come up with 23?

Here’s an article on the significance of the number 23 which may or may not shed light on the Gang of Four’s motivation.  In Peter Greenaway’s A Zed and Two Noughts, the characters become obsessed with the number 23, and claim that there are 23 letters in the Greek alphabet and that Vermeer created 23 paintings (both false, by the way).  Perhaps the Gang of Four are Discordians — Discordians are fascinated by what they call the 23 Enigma.

In any case, they came up with 23 canonical (or "fundamental" or "classic") design patterns, and in the past decade, knowing these patterns has become the unofficial dividing line between the common run of code monkeys (I use the term affectionately) and so-called "true" developers — the initiation rite that turns boy programmers into men.  Anyone in development who wants to be anybody makes the attempt to learn them, but for whatever reason, the 23 patterns resist the attempt — sometimes because it is difficult to see how you would ever actually use them.  It helps, however, to remember that the StringBuilder type in C# is based on the Builder pattern, and that the Clone method on most types implements the Prototype pattern.  Delegates are built around the Observer pattern and collections are built around the Iterator pattern — but since these are both basically part of the C# language, among others, you don’t really need to learn them anymore.  In my opinion, the most useful patterns are the Template Method and the Factory Method.  The Singleton pattern, on the other hand, starts off seeming like a useful pattern but turns out not to be — a bit like a bad joke one eventually tires of.  It is, however, easy to remember, if somewhat tricky to implement.
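
Since I have just maligned it, here is the Singleton in its simplest robust form: a minimal sketch, with a class name invented for the occasion.

    public sealed class ConfigurationCache
    {
        // The CLR guarantees thread-safe initialization of static fields,
        // which is what makes this the "easy to remember" version.
        private static readonly ConfigurationCache instance = new ConfigurationCache();

        // A private constructor prevents anyone else from newing one up.
        private ConfigurationCache() { }

        public static ConfigurationCache Instance
        {
            get { return instance; }
        }
    }

The trickiness the pattern is famous for only shows up later, once lazy construction, serialization, or testability enter the picture, at which point the variations multiply.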

The one pattern no one ever fails to remember is the Facade pattern.  It doesn’t do anything clever with abstract base classes or interfaces.  It doesn’t have tricky implementation details.  It simply takes the principle of encapsulation and goes crazy with it.  Whatever complicated code you have, you place it behind a wall of code, called the Facade, which provides methods to manipulate your "real" code.  It’s the sort of pattern which, like Monsieur Jourdain, once you find out about it you realize you’ve been doing it all your life.  The simplicity and ubiquity of the Facade make it an unattractive pattern — it takes no programming acumen to learn it; it requires great effort to avoid it.  It is the dumbest of the 23 canonical design patterns.
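
A minimal sketch, with invented class names, shows just how little there is to it:

    // The complicated "real" code.
    public class InventoryService { public void Reserve(int itemId) { } }
    public class PaymentGateway { public void Charge(decimal amount) { } }
    public class ShippingScheduler { public void Schedule(int itemId) { } }

    // The wall of code in front of it.
    public class OrderFacade
    {
        private InventoryService inventory = new InventoryService();
        private PaymentGateway payments = new PaymentGateway();
        private ShippingScheduler shipping = new ShippingScheduler();

        // One friendly method manipulating the "real" code behind the wall.
        public void PlaceOrder(int itemId, decimal amount)
        {
            inventory.Reserve(itemId);
            payments.Charge(amount);
            shipping.Schedule(itemId);
        }
    }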

And Service Oriented Architecture is all built around it.  In some sense, SOA marks the democratization of architecture.  There are still tricks to planning a good SOA, and securing it may require some sophistication — but with SOA, anyone can be an architect.  Well … anyone who can build a Facade.
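
To put the thesis in code: in WCF terms (Juval Löwy’s own technology, discussed later in this post), a service boundary is really just a Facade with attributes on it.  A hedged sketch, with an invented contract:

    using System.ServiceModel;

    // An invented contract: one coarse-grained interface standing in front
    // of whatever tangle of code actually fulfills the order.
    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        void PlaceOrder(int itemId, decimal amount);
    }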

Agile Methodology and Promiscuity

[Image: Jacques Lacan]

The company I am currently consulting with uses Scrum, a kind of Agile methodology.  I like it.  Its main features are index cards taped to a wall and quick "sprints", or development cycles.  Scrum’s most peculiar feature is the notion of a "Scrum Master", which makes me feel dirty whenever I think of it.  It’s so much a part of the methodology, however, that you can even become certified as a "Scrum Master", and people will put it on their business cards.  Besides Scrum, other Agile methodologies include Extreme Programming (XP) and the Rational Unified Process (RUP), which is actually more of a marketing campaign than an actual methodology — but of course you should never ever say that to a RUP practitioner.

The main thing that seems to unify these Agile methodologies is the fact that they are not Waterfall.  And because Waterfall is notoriously unsuccessful, except when it is successful, Agile projects are generally considered to be successful, except when they aren’t.  And when they aren’t, there are generally two explanations for the lack of success.  First, the flavor of Agile in question wasn’t practiced correctly.  Second, the Agile methodology was followed too slavishly, when at the heart of Agile is the notion that it must be adapted to the particular qualities of a particular project.

In a recent morning stand-up (yet another Scrum feature) the question was raised whether we were following Scrum properly, since it appeared to some that we were introducing XP elements into our project management.  Even before I had a chance to think about it, I found myself appealing to the second explanation of Agile and arguing that it was a danger to apply Scrum slavishly.  Instead, we needed to mix and match to find the right methodology for us.

A sense of shame washed over me even as I said it, as if I were committing some fundamental category mistake.  However, my remarks were accepted as sensible and we moved on.

For days afterward, I obsessed about the cause of my sense of shame.  I finally worked it up into a fairly thorough theory.  I decided that it was rooted in my undergraduate education and the study of Descartes, who claimed that just as a city designed by one man is eminently more rational than one built through aggregation over ages, so the following of a single method, whether right or wrong, will lead to more valid results than philosophizing willy-nilly ever will.  I also thought of how Kant always filled me with a sense of contentment, whereas Hegel, who famously said against Kant that whenever we attempt to draw lines we always find ourselves crossing over them, always left me feeling uneasy and disoriented.  Along with this was the inappropriate (philosophically speaking) recollection that Kant died a virgin, whereas Hegel’s personal life was marked by drunkenness and carousing.  Finally I thought of Nietzsche, whom Habermas characterized as one of the "dark" philosophers for, among other things, insisting that one set of values was as good as another and, even worse, arguing in The Genealogy of Morals that what we consider to be noble in ourselves is in fact base, and what we consider moral weakness is in fact spiritual strength — a transvaluation of all values.  Nietzsche not only crossed the lines, but so thoroughly blurred them that we are still trying to recover them after almost a century and a half.

But lines are important to software developers — we who obsess about interfaces and abhor namespace collisions the way Aristotle claimed nature abhors a vacuum — as if there were nothing worse than the same word meaning two different things.  We are also obsessed with avoiding duplication of code — as if the only thing worse than the same word meaning two different things is the same thing being represented by two different words.  What a reactionary, prescriptivist, neurotic bunch we all are.

This seemed to explain it for me.  I’ve been trained to revere the definition, and to form fine demarcations in my mind.  What could be more horrible, then, than to casually introduce the notion that not only can one methodology be exchanged for another, but that they can be mixed and matched as one sees fit?  Like wearing a brown belt with black shoes, this fundamentally goes against everything I’ve been taught to believe not only about software, but also about the world.  If we allow this one thing, it’s a slippery slope to Armageddon and the complete dissolution of civil society.

Then I recalled Slavoj Zizek’s introduction to one of his books about Jacques Lacan (pictured above), and a slightly different sense of discomfort overcame me.  I quote it in part:

I have always found extremely repulsive the common practice of sharing the main dishes in a Chinese restaurant.  So when, recently, I gave expression to this repulsion and insisted on finishing my plate alone, I became the victim of an ironic "wild psychoanalysis" on the part of my table neighbor: is not this repulsion of mine, this resistance to sharing a meal, a symbolic form of the fear of sharing a partner, i.e., of sexual promiscuity?  The first answer that came to my mind, of course, was a variation on de Quincey’s caution against the "art of murder" — the true horror is not sexual promiscuity but sharing a Chinese dish: "How many people have entered the way of perdition with some innocent gangbang, which at the time was of no great importance to them, and ended by sharing the main dishes in a Chinese restaurant!"

The Aesthetics and Kinaesthetics of Drumming

[Image: sheet music]

Kant’s Critique of Judgment, also known as the Third Critique since it follows the first on Reason and the second on Morals, is a masterpiece in the philosophy of aesthetics.  With careful reasoning, Kant examines the experience of aesthetic wonder, The Sublime, and attempts to relate it to the careful delineations he has made in his previous works between the phenomenal and noumenal realms.  He appears to allow in the Third Critique what he denies us in the First: a way to go beyond mere experience in order to perceive a purpose in the world.  Along the way, he passes judgment on things like beauty and genius that left an indelible mark on the Romanticism of the 19th century.

Taste, like the power of judgment in general, consists in disciplining (or training) genius.  It severely clips its wings, and makes it civilized, or polished; but at the same time it gives it guidance as to how far and over what it may spread while still remaining purposive.  It introduces clarity and order into a wealth of thought, and hence makes the ideas durable, fit for approval that is both lasting and universal, and hence fit for being followed by others…

Kant goes on to say that where taste and genius conflict, a sacrifice must be made on the side of genius.

In his First Critique, Kant discusses the "scandal of philosophy" — that after thousands of years philosophers still cannot prove what every simple person knows — that the external world is real.  There are other scandals, too, of course.  There are many questions which, after thousands of years, philosophers continue to argue over and, ergo, for which they have no definitive answers.  There are also the small scandals which give an aspiring philosophy student pause, and make him wonder if the philosophizing discipline isn’t a fraud and a sham after all, such as Martin Heidegger’s Nazi affiliation.  Here the question isn’t why he didn’t realize what every simple German should have known, since even the simple Germans were quite taken up with the movement.  What leaves a bad taste, however, is the sense that a great philosopher should have known better.

A minor scandal concerns Immanuel Kant’s infamous lack of taste.  When it came to music, he seems to have had a particular fondness for martial music, das heißt, marching bands with lots of drumming and brass.  He discouraged his students from learning to actually play music because he felt it was too time-consuming.  We might say that in his personal life, when his taste and his genius came into conflict, Kant chose to sacrifice his taste.

I think I will, also.  In Rock Band, the drums are notoriously the most difficult instrument to play well.  It is also the faux instrument that most resembles the real thing, and it is claimed by some that if you become a good virtual drummer, you will also in the process become a good real drummer.  I’ve tried it but I can’t get beyond the Intermediate level.  I can sing and play guitar on hard, but the drums have a sublime complexity that exceeds my ability to cope.  With uncanny timing, Wired magazine has come out with a walkthrough for the drums in Rock Band (h/t to lifehacker.com).  It mostly concerns the kick pedal and two alternative techniques for working with it, heel-up and heel-down (wax-on/wax-off?).  It involves a bit of geometry and a lot of implicit physics.  I would have liked a little more help with figuring out the various rhythm techniques, but according to Wired, I would get the best results by simply learning real drum techniques, either with an instructor or through YouTube.

I wonder what Kant would say about that.

A Sequel to Wagner’s "Effective C#" in the works

[Image: Indiana Jones’s fedora]

Can a sequel be better than the original?  With movies this is usually not the case, though we are all holding our breath for the new installment in the Indiana Jones franchise.  Technical books, however, are a different matter.  They have to be updated on a regular basis because the technology changes so rapidly.  My bookshelf is full of titles like Learning JAVA 1.3 and Professional Active Server Pages 2.0 which, to be frank, are currently useless.  Worse, they are heavy and take up a lot of room.  I’ve tried to throw them away, but the trash service refuses to take them due to environmental concerns, and there isn’t a technical books collection center in my area.  In Indiana Jones and the Last Crusade (made before the word "Crusade" got a bad rap), there is a comic scene of a book burning in Berlin, and though I am not in favor of book burnings in general — you’d think we would have learned our lesson after the Library of Alexandria burned down — still, occasionally, I dream of building a bonfire around COM Programming for Dummies and its ilk.

Scott Hanselman recently posted asking about the great technical books of the past ten years, and one of the titles that came up repeatedly is Bill Wagner’s Effective C#: 50 Specific Ways to Improve Your C#.  The book is great for .NET programmers because it goes beyond simply explaining how to write Hello, world! programs and instead tries to show how one can become a better developer.  The conceit of the book is simple.  For each of his 50 topics, he shows that there are at least two ways to accomplish a given task, and then explains why you should prefer one way to the other.  In the process of going through five or six of these topics, the reader comes to realize that what Bill Wagner is actually doing is explaining what makes for good code and, when both paths are equally good, what makes for elegant code.  This helps the reader to form a certain habit of thinking concerning his own code.  The novice programmer is constantly worried about finding the right way to write code.  The experienced programmer already knows the various right ways to do a given task, and becomes preoccupied with finding the better way.

The way I formulated that last thought is a bit awkward.  I think I could have written it better.  A semicolon is probably in order, and the sentences should be shorter.  Perhaps

The novice programmer is preoccupied with finding the right way to perform a task; the experienced programmer knows that there are various right ways, and is more concerned with finding the most elegant way.

or maybe

The novice is preoccupied with finding the right way to get something done; the expert is aware that in programming there are always many paths, and his objective is to find the most elegant one.

Alas, I am no La Rochefoucauld, but you get the idea.  This is something that prose writers have always considered a part of their craft.  Raymond Queneau once wrote an amazing book that simply takes the same scene on a bus and reformulates it ninety-nine times.  Perhaps Amazon can pair up Bill Wagner’s Effective C# with Queneau’s Exercises in Style in one of their "…or buy both for only…" deals, since they effectively reinforce the same point in two different genres, to wit: there is no best way to write, but there is always a better way.

If you do get on a Queneau kick, moreover, then I highly recommend this book, a pulp novel about Irish terrorists, which has a remarkably un-PC title, and for which reason I am not printing it here.  I assure you, the contents are better than the title.
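
To bring the exercise back to C#, here are two right ways to do the same task, one more elegant than the other.  This is offered in the spirit of Wagner’s conceit rather than as one of his fifty items verbatim, and the class and method names are mine:

    using System.Text;

    public static class Exercises
    {
        // One right way: repeated concatenation, which allocates
        // a brand new string on every pass through the loop.
        public static string JoinNaively(string[] lines)
        {
            string result = string.Empty;
            foreach (string line in lines)
                result += line + "\n";
            return result;
        }

        // Another right way: StringBuilder (the Builder pattern again),
        // which grows a single buffer -- the more elegant path here.
        public static string JoinWithBuilder(string[] lines)
        {
            StringBuilder sb = new StringBuilder();
            foreach (string line in lines)
                sb.Append(line).Append('\n');
            return sb.ToString();
        }
    }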

The only shortcoming of Bill Wagner’s book is that it was written for C# 1.0, while we are currently at iteration 3.0.  It is still a remarkably useful book that has aged well — but alas, it has aged.  It was with great excitement, then, that I read on Bill’s blog that he is currently working on a title called More Effective C#, available for pre-order on Amazon and as a Rough Cut on SafariBooksOnline.

The current coy subtitle is (#TBD) Specific Ways to Improve Your C#. To fulfill the promise implicit in the book’s title, More Effective C#, doesn’t the final #TBD number of Specific Ways have to be at least 51?

Using statements with unknown types

I recently came across some interesting code in Juval Löwy’s Programming WCF Services and wanted to share.  It’s simply something I had never run across before:

 

    IMyContract proxy = new MyContractClient();
    using (proxy as IDisposable)
    {
        proxy.MyMethod();
    }

 

The first thing to notice is that the proxy object is instantiated outside of the using block.  I don’t think I’ve ever actually tried this, but it is perfectly permissible (if not recommended).  I used a disassembler to look at the IL this generates, and it is pretty much the same as instantiating the proxy object inside of the using brackets.  The main difference is that in this case, the scope of the proxy object extends beyond the using block.
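
A quick sketch of what that difference in scope means in practice, reusing the types from Juval’s example:

    IMyContract proxy = new MyContractClient();
    using (proxy as IDisposable)
    {
        proxy.MyMethod();
    }
    // proxy is still in scope here, though presumably disposed by now,
    // so a WCF proxy would likely throw if we called MyMethod again.
    proxy = new MyContractClient();   // legal: the variable outlives the block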

Within the using brackets, this code casts the proxy object to the IDisposable interface so the Dispose method will be available.  Since a using block is basically syntactic sugar for a try-finally structure that calls an object’s Dispose method in the finally block, the equivalent try-finally block would look like this:

 

    IMyContract proxy = new MyContractClient();
    try
    {
        proxy.MyMethod();
    }
    finally
    {
        ((IDisposable)proxy).Dispose();
    }

 

However, Juval’s using statement does one additional thing.  It also checks whether the proxy object even implements the IDisposable interface, since the as operator yields null when the cast fails.  If it does implement it, then the Dispose method is called.  If it does not, then nothing happens in the finally block.  The equivalent full-blown code, then, would actually look something like this:

 

    IMyContract proxy = new MyContractClient();
    try
    {
        proxy.MyMethod();
    }
    finally
    {
        IDisposable disposable = proxy as IDisposable;
        if (disposable != null)
        {
            disposable.Dispose();
        }
    }

 

… and we’ve condensed it to this …

 

    IMyContract proxy = new MyContractClient();
    using (proxy as IDisposable)
    {
        proxy.MyMethod();
    }

 

It’s probably not something that will come up too often, but if you have a situation in which you do not know whether an object implements IDisposable or not, but still want to use a using block for readability and good coding practice, this is how you would go about doing it.

Besides Juval’s proxy example, I can imagine it coming in handy when dealing with collections in which you don’t necessarily know whether all of the members of the collection implement IDisposable, for instance:

 

    foreach (IDog dog in myDogsCollection)
    {
        using (dog as IDisposable)
        {
            dog.Bark();
        }
    }

 

It also just looks really cool.  h/t to Bill Ryan for pointing this out to me.

Learning Silverlight: Day Seven

[Image: Watson and Crick with their DNA model]

The seventh day of any project should always be devoted to rest and reflection.

There is a passage from James Watson’s account of discovering the structure of DNA, The Double Helix, in which Watson decides that he wants to study X-ray crystallography (the technique that eventually leads to the discovery of the double helix structure).  He is told by an authority that because the field is so new, there are only five papers on the subject worth reading — five papers to read in order to master an entire field of science!

This is also the current state of Silverlight development.  The final framework is not yet out, the books are still being written as authors hedge their bets on what will be included in the RTM of Silverlight 2.0, and there are no best practices.  Moreover, Silverlight development is so different from what has gone before that no one has a particular leg up on anyone else.  The seasoned ASP.NET developer and the kid fresh out of college are in the same position: either one can become a master of Silverlight, or simply let it slide by and work on other things instead.

So is Silverlight worth learning?  It basically moves into two pre-existing development domains.  One is Flash, and the other is Ajax web sites.  It can improve on Ajax web sites simply by having a simpler programming model, and by not being dependent on JavaScript (as of the 2.0 beta release), which tends to be brittle.  The UpdatePanel in ASP.NET AJAX and the Ajax Control Toolkit have made component-based programming of Ajax web sites easier, but if you ever read the Microsoft Ajax web forums, you’ll quickly see that it still isn’t easy enough, and people supporting sites that have been up for a year are starting to come forward with their maintenance nightmares.  The introduction of the Microsoft MVC framework raises further questions about whether the webform techniques that so many Microsoft-centric web developers have been working with will continue to be useful in the future.

Silverlight, in some sense, competes with Flash, since it is a web-based vector graphics rendering framework.  It is more convenient than Flash, for many developers, since it can be programmed in .NET languages like C# and VB, rather than requiring a proprietary language like ActionScript.  Even better, it does something that Flash does not do easily.  Silverlight talks to data, and it does so without requiring an expensive server to make this possible.

When you are thinking about Silverlight, then, it is appropriate to think of a business application with a Flash-like front-end.  This is what it promises, and the technology’s success will rise or fall on its ability to make this happen.

So if you believe in this promise with, say, 65% to 75% conviction, then you will want to learn Silverlight.  There are currently about 5 articles worth reading about it, and they can all be found here.  Most other tutorials you will find on the Internet simply deal with bits and pieces of this information, or else try to pull those bits and pieces together to write cool applications.

But after that, what?  The best thing to do is to start writing applications of your own.  No company is likely to give you a mandate to do this, so you will need to come up with your own project and start chipping away at it.  The easiest path is to try to copy things that have gone before, but with Silverlight, to see if it can be done.  Many people are currently trying to write games that have already been written better in Flash.  This is a great exercise, and an excellent way to get to know the Storyboard element in XAML.  It doesn’t really demonstrate any particular Silverlight capabilities, however.  It’s pretty much just an "I can do it, too" sort of exercise.

A different route can be taken by rewriting a data-aware application that is currently done in WinForms or ASP.NET AJAX, and seeing what happens when you do it in Silverlight instead.  Not as cool as writing games, of course, but it has a bigger wallop in the long run.  This will involve getting to know the various controls that are available for Silverlight and figuring out how to get data-binding working.  (Personally, I’m going to start playing with various interactive fiction frameworks and see how far I can get with that.  It’s a nice project for me in that it brings together both games programming (without fancy graphics) and data-aware applications.)
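
For what it’s worth, my current understanding of the minimal data-binding gesture in Silverlight 2 looks something like the following in code.  The Person class is invented for the example, and since we are still in beta, the API may shift before RTM:

    using System.Windows.Controls;
    using System.Windows.Data;

    public class Person
    {
        public string Name { get; set; }
    }

    public static class BindingDemo
    {
        public static TextBlock MakeBoundLabel()
        {
            TextBlock label = new TextBlock();
            // Bind the Text property to the Name property of whatever
            // object ends up in the DataContext.
            label.SetBinding(TextBlock.TextProperty, new Binding("Name"));
            label.DataContext = new Person { Name = "Ada" };
            return label;
        }
    }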

Finally, after getting through the various Microsoft materials and reading the various books from APress and Wrox and others that will come out shortly, where does one go to keep up with Silverlight techniques and best practices?

Adjectives for the Good and the Great

[Image: Max Ernst]

A well placed adjective can thoroughly change the meaning of a word.  Some adjectives are so powerful that the combined phrase becomes more significant than the noun the adjective modifies, leaving the unmodified noun seeming naked and weak without it.  For instance, a cop is an important member of society, but a rogue cop is a thing of legend.  An identity is good to maintain, but a secret identity is essential to maintain.  A thief is a lowly member of society, but an identity thief is lower still.  A secret identity thief is the lowest of all.

Then there are adjectives so overwhelming that they obliterate the word they modify, leaving none of the original meaning behind.  A fallen angel, after all, is a devil, and a fair-weather friend is no friend at all.

I used to work for adjectives.  You might have seen me on the side of the road holding a cardboard sign with words to that effect.  I began my career as a junior developer, then worked up to being just a developer — which, although unmodified, was significant enough that it underscored the fact that "junior" was just a kind of slur.  After developer came advanced developer, then expert developer, and finally senior developer.  There are currently lots of senior developers around and very few junior developers, unlike the way it was back in the day.  They all tend to wonder what comes after the adjective "senior".  One can become an architect, of course, but the change of theme, and the fact that it is unmodified, merely serves to impress upon everyone that architects don’t actually do any coding.  As a sort of gesture to make up for this damning with faint praise, an architect will occasionally receive a hyphenated title of developer-architect, which to my ear just makes things worse.  After senior developer, one can also become a manager, of course, much the same way a Jedi Padawan can become a Sith lord, but this is a path of last resort.

Our Sith overlords could meliorate the situation by simply coming up with a new adjective, of course.  I always thought awesome developer had a nice ring to it.  Recent politics, besides revealing how our democracy really works, also inspired me with a different notion.  The term super delegate left me wondering if super wouldn’t make a good modifier for the great developer.  With repetition, we may be able to gentrify that somewhat wild modifier, super, and re-appropriate it from the comic connotations that have tended to diminish it.  What better public identity is there for an über geek than super developer?

This morning, however, I was surprised to discover that there is something even more powerful in Democratic electoral politics than the super delegate.  It is the undecided super delegate.  Amazing, isn’t it, that not doing something can make a person more powerful than actually doing something?  Rather than waste their potency by declaring for one candidate or the other, these undecided are able to curry special favor by simply not deciding, not declaring, not having an opinion one way or the other.

There is a tradition in the West that the undecided are in some sense the most contemptible beings, scorned by all sides.  Before the gates of hell, Dante and Virgil encounter the third host of angels who sided neither with God nor with Satan, as well as those "who lived without or praise or blame," and perpetually lament their state.  Virgil states harshly:

These of death
No hope may entertain: and their blind life
So meanly passes, that all other lots
They envy.  Fame of them the world hath none,
Nor suffers; mercy and justice scorn them both.
Speak not of them, but look, and pass them by.

In the late Platonic school of Athens (alternatively known as the old school of Skepticism, or Pyrrhonic Skepticism), on the other hand, this suspension of affirmation was considered a moral virtue, and was called the epoche (a term later appropriated by Husserl for Phenomenology).  They found this suspension of belief so difficult, however, that they used ten argumentative tropes which they learned by heart to remind themselves that nothing should ever be asserted, lest they commit themselves to falsehood.  The true philosopher, for the Pyrrhonic skeptic, is not one who speaks the truth, but rather one who does not speak falsehoods.  Well into the modern era, one finds an echo of the Pyrrhonic tropes in Kant’s four antinomies.

Whether undecided super delegates are Pyrrhonists or Kantians I cannot say.  I choose to withhold judgment on the matter since, after all, the real intent of this post is simply to congratulate two of my colleagues in the Magenic Atlanta office on their promotions.  Through hard work and natural talent, Todd LeLoup and Douglas Marsh are now both Senior Consultants.  With such adjectives we give public praise to the good and the great among us.

Learning Silverlight: Day Six

[Image: battleships]

I’ve spent today going through all of the Hands-On-Labs provided on the silverlight.net site.  The five labs are basically Word documents, with some accompanying resources, covering various aspects of Silverlight 2 development.  More importantly, they are extremely well written, and serve as the missing book for learning Silverlight.  Should anyone ask you for a tech book recommendation for learning Silverlight 2 beta 1, you should definitely, emphatically, point them to these labs.  They are both comprehensive and lucid.  The labs are supposed to take an hour to an hour and a half each, so, all told, they constitute approximately seven hours of work.

 

1. Silverlight Fundamentals is a great overview of the features of Silverlight and how everything fits together.  Even though it covers a lot of territory you may have come across in other material, it does it in a streamlined manner.  In reading it, I was able to make a lot of connections that hadn’t occurred to me before.  It is your basic introductory chapter.

2. Silverlight Networking and Data demonstrates various ways to get a Silverlight application to communicate with resources outside of itself using the WebClient and WebRequest classes (a minimal WebClient sketch appears after this list).  I couldn’t get the WebRequest project to work, but this may very well be my fault rather than the fault of the lab author.  The lab also includes samples of connecting to RSS feeds, working with WCF and, interestingly, one exercise involving ADO.NET Data Services, a feature of the ASP.NET Extensions Preview.

3. Building Reusable Controls in Silverlight provides the best walkthrough I’ve seen not only of working with Silverlight User Controls but also with working between a Silverlight project and Microsoft Blend.  This is also the only place I’ve found that gives the very helpful tidbit of information concerning adding a namespace declaration for the current namespace in your XAML page.  I’m not sure why we have to do this, since in C# and VB a class is always aware of its namespace, and the XAML page is really just a partial class after compilation, after all — but there you are.

4. Silverlight and the Web Browser surprised me.  In principle, I want to do everything in a Silverlight app using only compiled code, but the designers of Silverlight left many openings for using HTML and JavaScript to get around any possible Silverlight limitations.  This lab made me start thinking that all the time I have spent over the past two years on ASP.NET AJAX may not have been a complete waste after all.  A word of warning, though.  The last three parts of this lab instruct the user to open projects included with the lab as if the user will have the chance to complete the lab using them.  It turns out that these projects include the completed versions of the lab exercises rather than the starting versions, so you don’t actually get a chance to work through these particular "hands-on" labs.  On the other hand, this is the first substantial mistake I’ve found in the labs.  Not bad.

5. Silverlight and Dynamic Animations begins with "This is a simple lawn mowing simulation…."  The sentence brims with dramatic potential.  Unlike the previous lab, the resources in Silverlight and Dynamic Animations include both a "before" and an "after" project, and it basically walks the user through using a "storyboard" to create an animation — and potentially a game.  It’s Silverlight chic.
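
Since the networking lab is the one I stumbled over, here is the minimal WebClient gesture as I understand it: a hedged sketch from my own notes rather than from the lab, with a placeholder URL.  In Silverlight every network call is asynchronous, so the response arrives through an event:

    using System;
    using System.Net;

    public class FeedLoader
    {
        public void Load()
        {
            WebClient client = new WebClient();
            // The response arrives on the DownloadStringCompleted event;
            // Silverlight offers no synchronous alternative.
            client.DownloadStringCompleted += OnDownloadCompleted;
            client.DownloadStringAsync(new Uri("http://example.com/feed.xml"));
        }

        private void OnDownloadCompleted(object sender, DownloadStringCompletedEventArgs e)
        {
            if (e.Error == null)
            {
                string feedXml = e.Result;   // hand the XML off to a parser here
            }
        }
    }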

In retrospect, if I had to choose only one resource from which to learn Silverlight 2, it would be these labs.  They’re clear, they’re complete, and best of all they’re free.

Popfly: Silverlight Game Designer

[Image: enterprise]

Have you always wanted to create a Flash game but found it too daunting?  Microsoft’s Popfly site now lets you create Silverlight games using templates and pre-created images, following a sort of wizard approach to game design.  There are some working games on the site that you can start from, including a racing game and an Asteroids-type game, and then modify with new behaviors or new animations to get the game you want.

The Popfly game creator is currently in alpha, and the intent is to make it easy enough for a child to use.  To tell the truth, though, I’ve spent about 20 minutes on the site so far and still can’t quite figure out what is going on.  All the same, it’s a great idea, and may be the sort of gateway tool that leads kids to want to use Visual Studio and Blend to create more sophisticated games until they are eventually fully hooked developers — if that is the sort of life you want for your children, of course.

 

***

 

Update 5/3/08: after three hours on Popfly yesterday afternoon, my seven-year-old son has created his own Silverlight game, while my daughter is learning the intricacies of Microsoft Blend 2.5 in order to create her own animations (the Popfly XAML editor is mostly disabled in this alpha version).  I’m proud of their appetite for learning, but am nonplussed at the prospects for their future careers.  I’d prefer that they become doctors or lawyers — or even philosophy professors — someday, rather than software developers like their old man.  Then again, maybe in the world of tomorrow everyone will know how to work with Visual Studio 2020 and there will be no need for people like me who specialize in software programming.  Software development might become an ancillary skill, like typing, which everyone is simply expected to know.  Or maybe our computers will have learned to program themselves by that time.

It is an internal peculiarity of the software development professional that he constantly demands new tools and frameworks that will make programming simpler — the simpler the better — which in a sense is a demand to make his own skills obsolete.  His consolation is found in Fred Brooks’s essay No Silver Bullet, which promises that there are essential complexities in software development that cannot be solved through tools or processes.  The software developer is consequently an inherently conservative person who not only recognizes but also depends on the essential frailty of humanity and our inability to perfect ourselves.  But what if this isn’t true?  What if Brooks is wrong, and the tools eventually become simple enough that even a seven-year-old is able to build reliable business applications?  That will be a bright day for the middle-manager who has to constantly deal with developers who want to tell him what cannot be done, thus limiting his personal and career potential.  It might also be a beautiful day for humanity in general, but a dark, dark day for the professional software developer, who will go the way of the powder monkey, the nomenclator, the ornatrix, the armpit plucker of ancient Rome, and of course the dodo.