Promiscuity and Software: The Thirty-one Percent Solution


There is one big player in the software development world, and her name is Microsoft. Over the years many vendors and startups have attempted to compete against the lumbering giant, and Microsoft has typically resorted to one of two methods for dealing with her rivals. Either she pours near-unlimited money into beating the competition, as Microsoft did with Netscape in the 90’s, or she buys her rival outright. It is the typical build-or-buy scenario. But with the ALT.NET world, she seems to be taking a third approach.

The premise of the ALT.NET philosophy is that developers who work within the Microsoft .NET domain should still be free to use non-Microsoft-sanctioned technologies. There is even a certain Rome-versus-the-barbarians aspect to this, with Microsoft naturally taking the part of the Arian invaders. The best solution to a technical problem, it is claimed (and rightly so), need not be one provided by Microsoft, whose developers ultimately make up a very small subset of the aggregate body of developers in the world. Instead, solutions should be driven by the developer community, which knows much more about the daily problems encountered by businesses than Microsoft does. Microsoft in turn may or may not provide the best tools to implement these solutions; when she doesn’t, the developer community may come up with their own, such as NHibernate, NUnit, Ajax, Windsor and RhinoMocks (all free, by the way).

What is interesting about each of these tools is that, when they came out, Microsoft didn’t actually have a competing offering for any of them. Instead of competing with Microsoft on her own field, the ALT.NET community began by competing with Microsoft in the places where she had no foothold. Slowly, however, Microsoft came out with competing products for each of these but the last. MSTest was released about three years ago to compete with NUnit. ASP.NET AJAX (formerly ATLAS, a much cooler name) competes with the various Ajax scripting libraries. ASP.NET MVC competes with the PHP development world. Entity Framework and the Unity Framework were recently released to compete with NHibernate and Windsor, respectively.

Unlike the case with the browser wars of the 90’s, Microsoft’s offerings are not overwhelmingly better. The reception of the Entity Framework (mostly orchestrated by the ALT.NET community itself, it should be admitted) was an extreme case in point: scores of developers, including a few MVPs (Microsoft’s designation for recognized software community leaders), publicly pilloried the technology in an open letter and petition decrying its shortcomings.

Microsoft, in these cases, is not trying to overwhelm the competition. She does not throw unlimited resources at the problem. Instead, she has been throwing limited resources at each of these domains and, in a sense, has accomplished what the ALT.NET world originally claimed was their goal: to introduce a bit of competition into the process and allow developers to select the most fitting solution.

Not too long ago I came across an article that suggested to me a less benign strategy on Microsoft’s part, one that involves ideological purity and software promiscuity. The ALT.NET world, one might be tempted to say, has a bit of a religious aspect to it, and the various discussion board flames concerning ALT.NET that pop up every so often have a distinct religious patina to them.

The relationship of ALT.NET-ers to Microsoft is a bit like the relationship of Evangelicals and Fundamentalists to the world. We do, after all, have to live in this world, and we don’t have the ability or the influence at all times to shape it the way we want. Consequently, compromises must be made, and the only question worth asking is to what extent we must compromise. The distinction between Evangelicals and Fundamentalists rests squarely on this matter, with Evangelicals believing that some sort of co-existence can be accomplished, while Fundamentalists believe that the cognitive dissonance between their view of the world and the world’s view of itself is too great to be bridged. For Fundamentalists, the Evangelicals are simply fooling themselves and, worse, opening themselves up to temptation without realizing it.

All this is background to Margaret Talbot’s article in the November New Yorker, “Red Sex, Blue Sex: Why do so many evangelical teen-agers become pregnant?” Ms. Talbot raises the question of abstinence-only programs, which are widely ridiculed for being unsuccessful.

“Nationwide, according to a 2001 estimate, some two and a half million people have taken a pledge to remain celibate until marriage. Usually, they do so under the auspices of movements such as True Love Waits or the Silver Ring Thing. Sometimes, they make their vows at big rallies featuring Christian pop stars and laser light shows, or at purity balls, where girls in frothy dresses exchange rings with their fathers, who vow to help them remain virgins until the day they marry. More than half of those who take such pledges—which, unlike abstinence-only classes in public schools, are explicitly Christian—end up having sex before marriage, and not usually with their future spouse.”

The programs are not totally unsuccessful. In general, pledgers delay sex eighteen months longer than non-pledgers. The real indicator of the success of an abstinence-only program, however, is how popular it becomes. The success of an abstinence-only program is, ironically, inversely proportional to its popularity and ubiquity.

“Bearman and Brückner have also identified a peculiar dilemma: in some schools, if too many teens pledge, the effort basically collapses. Pledgers apparently gather strength from the sense that they are an embattled minority; once their numbers exceed thirty per cent, and proclaimed chastity becomes the norm, that special identity is lost. With such a fragile formula, it’s hard to imagine how educators can ever get it right: once the self-proclaimed virgin clique hits the thirty-one-per-cent mark, suddenly it’s Sodom and Gomorrah.”

The ALT.NET chest of development tools is not widely used, although its proponents are very vocal about the need to use them. Unit testing, which is a very good practice, has limited actual adherence, though many developers will publicly avow its usefulness. NHibernate, Windsor and related technologies have an even weaker hold on the mind share of the developer community — much less than thirty percent, I would say — an actuality which belies the volume and vehemence, as well as the exposure, of their proponents.

With the thirty-one percent solution, Microsoft does not have to improve on the ALT.NET technologies and methodologies in order to win. All she has to do is help the proponents of IoC, mocking and ORMs get to that thirty-one percent adoption level. She can do this by releasing interesting variations of the ALT.NET community tools, thus gentrifying those tools for the wider Microsoft development community. Even within the ALT.NET world, as in our world, there are more Evangelicals than Fundamentalists: people who are always willing to try something once.

Microsoft’s post-90’s strategy need no longer be build or buy.  She can take this third approach of simply introducing a bit of software promiscuity, a little temptation here, a little skin there, and pretty soon it’s a technical Sodom and Gomorrah.

Expertise and Authority

In my late teens, I went through a period of wanting to be a diplomat for the State Department.  The prospect of traveling, learning languages, and being an actor in world history appealed to me.  My father, a former case officer in Vietnam, recommended joining the CIA instead.  As he put it to me (and as old Company hands had put it to him), diplomats only ever think they know what is going on in a given country.  It is the spies that really know.

The knowledgeableness — and even competence — of intelligence agencies has been called into question over the past few years with the inability to track down bin Laden and, before that, the inability to accurately assess Iraq’s nuclear capabilities. I was surprised to read recently in an article by John le Carré for The New Yorker that, contrary to my father’s impression, this may have long been the case.

Discussing his time as an insider in British intelligence, le Carré writes about his disappointment with the discrepancy between what he had imagined it to be and what it turned out to actually be. In terms reminiscent of the longings of many career professionals, he describes “fantasizing about a real British secret service, somewhere else, that did everything right that we either did wrong or didn’t do at all.”

As an IT consultant I encounter many technical experts, and am a bit of one myself in some rather abstruse areas. A common frustration among these experts is that expertise does not always grant them authority, as one would expect in a meritocratic modern corporate society. Instead, they find that, contrarily, corporate authority tends to confer expertise. The managerial classes inside the corporations we work with are able to dictate technical directions not because they know about these technologies, but simply because they have the authority to do so.

In part this is simply how the system works.  Expertise and authority go together, but not in the ways one would expect.  In the corporate world, authority granted through expertise in one area, say managerial or financial expertise and a track record of success, grants additional and possibly unjustified acknowledgment of expertise in unrelated fields.

Another reason, however, must be due to the incommunicability of IT expertise.  The field is complicated and its practitioners are not generally known for their communication abilities.  Whereas the spooks of the intelligence world are not allowed to communicate their detailed knowledge to the layman, the IT professional is simply unable to.  IT professionals speak “geek talk,” while business professionals speak corporate speak, and translators between these two dialects are few and far between.  Philosophically, however, such translations and transitions are possible, and the people who can do it make excellent careers for themselves.

What happens, however, when the whole notion of expertise is called into question? As Stanley Rosen once said of Nietzsche, what happens when the esoteric becomes exoteric, and what we all know about our own failings and shortcomings as “experts” becomes public knowledge?

Such a thing seems to be happening now with the world economic crisis (I’m waiting for an expert to come along with a better moniker for this downward spiral we all seem to be going through, but for the moment “WEC” seems to be working). The world economic crisis seems to have occurred because the people who should have known better (bankers, traders, investors and economists) never put a stop to a problem of bad debt, bad credit and bubble markets of worldwide proportions. As I understand it, all these people knew things weren’t kosher but were hoping to take advantage of market distortions to make huge profits before bailing out at the last moment; like the unfortunate fellow who raced James Dean in Rebel Without a Cause, they all failed to jump when they were supposed to.

Yet they were the experts. As backup we have men like Henry Paulson at the Treasury to fix these messes, and he started out sounding authoritative about what needed to be done. We needed $700 billion to fix the situation, or at least to make it not so bad, and the government, we were told, had a plan to do so. However, the plan has mutated and meandered to the point that it now looks like it is being made up as we go along. This in itself may not be such a bad thing, but is this meandering the sort of thing experts are supposed to do?

Recently the heads of the automotive industry came to Washington to ask for bailout money and, as we now all know, they didn’t have a plan for how they would spend it. Is that how experts act?

After the flood, the big discussion now seems to be whether we should try to preserve our laissez-faire system or try to improve and correct it with more regulation. The sages of Wall Street seem to actually like this solution, which is in itself an admission that they no longer see themselves as experts or, apparently, as even capable of managing their own affairs. They would prefer that another authority correct their excesses for them, since they no longer trust themselves.

But if there are no longer any experts on Wall Street, where all they had to do was look after their own interests, can we really expect to find one in Washington who will look after all of our interests? I don’t mean to be a knee-jerk conservative on this matter, but does it make sense that when our clever people make it clear that they are not so clever or competent after all, we must look for someone that much more clever than all of them put together to fix things? Can that level of expertise even exist?

And so I find myself fantasizing about a different America, indeed a different world, in which they get everything right that we either do wrong or don’t do at all.

Presenting on WCF in October

I will be presenting at GGMUG, the Greater Gwinnett Microsoft User Group, on October 9th.  My topic is building N-tier applications using WCF (the announcement from GGMUG says I’ll be presenting on WPF, but I’m not going to let a consonant get in my way).

Of course all the buzz is around WCF and SOA architectures these days, but people actually still write traditionally architected applications, and WCF makes it soooo easy.
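
To give a taste of what I mean, here is a minimal, self-contained sketch of a self-hosted WCF service of the kind I’ll be walking through (the service name and address are invented for illustration; a real N-tier application would typically pull its endpoint configuration from app.config rather than hard-coding it):

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface ICustomerService
    {
        [OperationContract]
        string GetCustomerName(int customerId);
    }

    public class CustomerService : ICustomerService
    {
        // Stand-in for a real data-access call in the middle tier.
        public string GetCustomerName(int customerId)
        {
            return "Customer " + customerId;
        }
    }

    class Program
    {
        static void Main()
        {
            // Host the service and expose a single HTTP endpoint.
            using (var host = new ServiceHost(typeof(CustomerService),
                new Uri("http://localhost:8000/CustomerService")))
            {
                host.AddServiceEndpoint(typeof(ICustomerService),
                    new BasicHttpBinding(), "");
                host.Open();
                Console.WriteLine("Service running. Press Enter to exit.");
                Console.ReadLine();
            }
        }
    }

The contract is just an attributed interface, the host is a few lines, and the client is a generated proxy away; that is the sense in which WCF makes this easy.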

If you are in town and have a few hours to kill, please stop by.  The session starts at 6:30 at Gwinnett Technical College, with food and drinks provided by Magenic Technologies.

Waterfall and Polygamy


Methodology is one of those IT topics that generally make my eyes glaze over.  There is currently a hefty thread over on the altdotnet community rehashing the old debates about waterfall vs. agile vs. particular flavors of agile. The topic follows this well-worn pattern: waterfall, which dominated the application development life cycle for so many years, simply didn’t work, so someone had to invent a lightweight methodology like XP to make up for its deficiencies.  But XP also didn’t always work, so it was necessary to come up with other alternatives, like Scrum, Rational, etc., all encapsulated under the rubric "Agile". (Which agile methodology came first is a sub-genre of the "which agile methodology should I use" super-genre, by the way.)  Both Waterfall and the various flavors of Agile are contrasted against the most common software development methodology, "Cowboy Coding" or "Seat-of-the-pants" programming, which is essentially a lack of structure.  Given the current common wisdom regarding agile, that one should mix and match various agile methodologies until one finds a religion one can love, there is some concern that agile is not actually all that distinguishable from cowboy coding.

For an interesting take on the matter, you should consult Steve Yegge’s classic post, Good Agile, Bad Agile.

I have a friend who, in all other ways, is a well-grounded, rational human being in the engineering field, but when the topic of Druids comes up, almost always at his instigation, I feel compelled to find a reason to leave the room.  The elm, the yew, the mistletoe, the silver sickle: these subjects in close constellation instill in me a sudden case of restless legs syndrome.

Not surprisingly, discussions concerning Methodology give me a similar tingly feeling in my toes.  This post in the altdotnet discussion caught my eye, however:

I also don’t believe it is possible to do the kind of planning waterfall requires on any sufficiently large project. So usually what happens is that changes are made to the plan along the way as assumptions and understandings are changed along the way.

Rather than belittle waterfall methodology as inherently misguided, the author expresses the novel notion that it is simply too difficult to implement.  The fault, in other words, dear Brutus, lies not in our stars but in ourselves.

Rabbi Gershom Ben Judah, also known as the Light of the Exile, besides being a formidable scholar, is notable for his prohibition of polygamy in the 10th century, a prohibition that applied to all Ashkenazi Jews and was later adopted by the Sephardim as well. The prohibition required particular care, since tradition establishes that David, Solomon, and Abraham all had multiple wives.  So why should what was good for the goose not be so for the gander?

Rabbi Gershom’s exegesis in large part rests on this observation: we are not what our forefathers were.  David, Solomon, and Abraham were all great men, with the virtue required to maintain and manage polygamous households.  However, as everyone knows, virtue tends to become diluted when it flows downhill.  The modern (even the 10th-century modern) lacks the requisite wisdom to prevent natural jealousies between rival wives, the necessary stamina to care for all of his wives as they deserve, and the practical means to provide for them.  For the modern to attempt to live as did David, Solomon, or Abraham would be disastrous personally, and inimical to good order generally.

What giants of virtue must have once walked the earth.  There was a time, it seems, when the various agile methodologies were non-existent and yet large software development projects were completed, all the same.  It is perhaps difficult for the modern software developer to even imagine such a thing, for in our benighted state, stories about waterfall methodology sending men to the moon seem fanciful and somewhat dodgy — something accomplished perhaps during the mythical man month, but not in real time.

Yet it is so.  Much of modern software is built on the accomplishments of people who had nothing more than the waterfall method to work with, and where we are successful, with XP or Scrum or whatever our particular religion happens to be, it is because we stand on the shoulders of giants.

I find that I am not tempted, all the same.  I know my personal shortcomings, and I would no more try to implement a waterfall project than I would petition for a second wife.  I am not the man my forefathers were.

Why is the timber crooked?

Isaiah Berlin


Song

Go and catch a falling star,
Get with child a mandrake root,
Tell me where all past years are,
Or who cleft the devil’s foot,
Teach me to hear mermaids singing,
Or to keep off envy’s stinging,
And find
What wind
Serves to advance an honest mind.

If thou be’st born to strange sights,
Things invisible to see,
Ride ten thousand days and nights,
Till age snow white hairs on thee,
Thou, when thou return’st, wilt tell me,
All strange wonders that befell thee,
And swear,
No where
Lives a woman true and fair.

If thou find’st one, let me know,
Such a pilgrimage were sweet;
Yet do not, I would not go,
Though at next door we might meet,
Though she were true, when you met her,
And last, till you write your letter,
Yet she
Will be
False, ere I come, to two, or three.

–John Donne

Why do software projects fail (II)?


“Why do software projects fail?” is not a philosophical question in Descartes’ sense, that is, one that can be decomposed in order to arrive at a practical solution.  Rather, it is a contemplative problem in Aristotle’s sense: one that can be looked at in many ways in order to reveal something about ourselves.


Software projects fail because we are weak.  We can make software projects better not by becoming stronger, but by minding our own weaknesses.  The truth is that software programming has made a lot of progress over the past twenty years.  It has not made this progress along the via positiva, however, through solutions such as OO, or Design Patterns, or Agile, or SOA, or a host of other white knight solutions that over time have turned out mostly to be hype.


Rather it has progressed through the via negativa, or by learning what not to do.  The via negativa was originally a mode of theological discourse, mystical in essence, which strove to understand God not in terms of what He is, but in terms of what He is not.  This mystical path serves well in pragmatic matters, also, and below I will outline some principles of the via negativa as applied to software development.


I should include the caveat that I am not sure these principles have ever worked well for me, and that I continue to underestimate the scope of a task and the amount of time it will take to finish it.  On the positive side, I believe it could all have been worse.


The Via Negativa of Software Development


1. Programming is neither an Art nor a Science

The thing I like best about Hunt and Thomas’s book The Pragmatic Programmer is their attempt in the first chapter to analogize the act of programming to the Medieval journeyman tradition and the concept of craft.  I liked it so much, in fact, that I never bothered to read the rest of their book.  The point of thinking of programming as a craft is that we escape the trap of thinking of it in other ways.


Many programmers think of coding as an art, in the romantic sense.  Every project is new and requires new solutions.  The solutions a programmer applies constitute a set of conceits that express the programmer’s own uniqueness.  Rather than applying mundane and known solutions to a set of programming problems, the artist coder prefers to come up with solutions that will express his originality.


On the other side, some programmers think of coding as a science.  There is a best way to accomplish any given task, and their role is to discover and apply that perfect solution.


Between these two is the notion of programming as craft.  There is a set of ways of doing things, such as retrieving data from a database or displaying it for data entry.  The craftsman’s goal is to accomplish his tasks in the ways he knows, whether they are the best or not, with as little waste as possible.  The craftsman’s motto is also Voltaire’s: Le mieux est l’ennemi du bien (the best is the enemy of the good).


2. Don’t Invent when you can Steal

A corollary to the craftsman’s motto is not to invent what you can steal.  One of the biggest mistakes that developers make is to assume that their problems are unique to them.  In fact, most problems in programming have been faced before, and developers are unusually generous about publishing their solutions on the Internet for others to use.  Good design should always involve, early in the process, using Google to find how others have solved your problems.  This is somewhat different from the Patterns frame-of-mind which assumes that there are common theoretical solutions to recurring problems.  What you really want to find are common implementations, specific to the language and platform you are using, to your programming needs.


This was a well-known and commonly applied rule in the early days of radio.  Comics would freely steal jokes from one another without attribution, and occasionally would complain about how many of their jokes were being used on other stations.  What the radio comics knew, and we need to relearn, is that the success of the punchline depends not on the setup, but on the delivery.


3. Smells are More Important than Patterns

Software changes too quickly for patterns to be useful.  By the time patterns are codified, they are already obsolete.  With experience, however, also comes a knowledge of what doesn’t work, and it is rarely the case that something that hasn’t worked in the past will somehow magically start working in the present.


A smell is a short-hand term for things you know will not work.  While Richard Feynman was known as a gifted physicist at Los Alamos during the Manhattan Project, his peculiar talent was in intuiting what not to do.  He would go from project to project and, without knowing all the details, be able to tell those involved whether they were on the right path or the wrong path.  Eventually, his intuition about such matters became something other physicists respected and relied upon.


Feynman had a natural gift for sniffing out bad smells, which he couldn’t completely explain himself.  With experience, we all come to develop a good nose for smells, which manifest as a sense that something is wrong.  The greatest trick is to trust our respective noses and take smells into account, so that smells become identified as risks which deserve to be monitored.


Here are some common smells that emanate from doomed projects:

  • Taking process shortcuts
  • Too many meetings in which nothing is accomplished
  • Custom Frameworks
  • Hysterical laughter (oddly a common phenomenon when people start realizing there is something wrong with the project but are not willing to say so)
  • Secrets between development roles
  • No time to document
  • No project plan
  • No hard deadlines
  • No criteria for the success of a project
  • No time for testing
  • Political fights between development roles (these are typically a symptom of a bad project, rather than a cause)
  • Managers who say that everything is on track
  • Managers who initially set themselves up to take all the credit and are now positioning themselves to avoid all blame


4. Shame Drives Good Code

The most successful innovations in software development over the past twenty years have been shame based.  Pair-programming works because there is always a second set of eyes watching what we are doing.  It is difficult to take shortcuts or ignore standards when someone else is likely to call one on it almost immediately.


Code reviews are a more scattershot approach to the same problem.  Given that a code review can treat any piece of production code as a topic of analysis, it behooves the programmer to add comments, break up methods, and follow a host of other good coding practices in order to avoid having other developers point out obvious laziness.


QA groups are the ultimate wielders of shame as a tool to drive good development practices.  QA brings obvious mistakes to light.  A developer who might try to slip in bad business logic without first verifying that it works, on the self-assurance that it should work, is less likely to do so if he knows his private malfeasance might be brought to public light.  While QA is typically bad at catching problems with deep programming logic, they can instill a sense of shame in developers that will lead them to verify and thoroughly unit test their deep logic as they would test their shallow logic, knowing that others are watching.


5. Manage Risk, Not Progress

All the points above are programmer-centric.  Manage risk, not progress is an essential rule for managers, project managers and business analysts.  Because programming tasks can be shifted around, it is common to put off difficult problems till the end, simply because one can.  This makes project plans, which measure all tasks with a common yardstick, notoriously bad at predicting the success level of a project at any given time.


Measuring risk is a better way to determine the success of a project.  To my thinking, identifying risk and determining whether risks have been overcome is the chief role of a good project manager.  If he spends the rest of his time surfing the Internet, I couldn’t care less.  Unfortunately, many project managers insist on reading self-help books about leadership and interpret their role to be one of offering inspiration to others.  They often view their roles in this way to such a degree that they tend to refrain from tracking risk, which is always a downer.  And the best way to avoid tracking risk is to never identify it in the first place.


6. The Code Must Flow

Code must be treated as disposable.  It must be treated as a commodity.  It is a common feature of coding to treat every piece of code as special.  On the other hand, we know that it isn’t true when we review our code months later.  What is truly gained in coding through a problem is not the physical code itself, but the knowledge of how to solve the problem. 


By constantly coding through every phase of a project, we gain knowledge of the problem domain.  By disposing of our code when we know it doesn’t work, and starting over, we learn not to make fetishes of our code.


The second worst programming practice is when a prototype is turned into a real application.  This is a problem for developers.  It seems like it will save time to simply build on the same code that we started prototyping with, despite the fact that it fails to implement good practices or commenting or any of the other things we have come to expect from good code.


The worst programming practice is when management asks how long it will take to release a prototype, and demands the code be sent out as is.  There is no answer to this worst of all programming practices, for while nothing can straighten out the crooked timber of humanity, poor management can always warp it further.

Why do software projects fail?


In the Theaetetus, Plato writes that ‘philosophy begins in wonder’.  Here is Jowett’s translation of the passage:

Soc: …I believe that you follow me, Theaetetus; for I suspect that you have thought of these questions before now.

Theaet. Yes, Socrates, and I am amazed when I think of them; by the Gods I am! and I want to know what on earth they mean; and there are times when my head quite swims with the contemplation of them.

Soc. I see, my dear Theaetetus, that Theodorus had a true insight into your nature when he said that you were a philosopher, for wonder is the feeling of a philosopher, and philosophy begins in wonder. He was not a bad genealogist who said that Iris (the messenger of heaven) is the child of Thaumas (wonder).

In the Metaphysics, Aristotle continues the same theme (tr. W. D. Ross):

For it is owing to their wonder {ex archês men ta procheira tôn atopôn thaumasantes} that men both now begin and at first began to philosophize; they wondered originally at the obvious difficulties, then advanced little by little and stated difficulties about the greater matters, e.g. about the phenomena of the moon and those of the sun and of the stars, and about the genesis of the universe. And a man who is puzzled and wonders thinks himself ignorant (whence even the lover of myth is in a sense a lover of Wisdom, for the myth is composed of wonders); therefore since they philosophized in order to escape from ignorance, evidently they were pursuing science in order to know, and not for any utilitarian end.

This wonder tradition in philosophy has two principles. One is that wonder must lead to the articulation of questions, for without questions and dialectic, wonder never goes any further.  The second is that questioning must be non-utilitarian, and that its end must be contemplation, rather than the solution to a practical problem; in other words, questions must be open-ended in order to count as philosophical (i.e., pure scientific) problems.

Thus Aristotle continues, in Book II of the Metaphysics:

The investigation of the truth is in one way hard, in another easy. An indication of this is found in the fact that no one is able to attain the truth adequately, while, on the other hand, we do not collectively fail, but every one says something true about the nature of things, and while individually we contribute little or nothing to the truth, by the union of all a considerable amount is amassed. Therefore, since the truth seems to be like the proverbial door, which no one can fail to hit, in this respect it must be easy, but the fact that we can have a whole truth and not the particular part we aim at shows the difficulty of it.

Against this tradition, Descartes, an extremely practical man, argues in Articles 77 and 78 of Passions of the Soul (tr. Stephen H. Voss):

Furthermore, though it is only the dull and stupid who do not have any constitutional inclination toward Wonder {l’admiration}, this is not to say that those who have the most intelligence are always the most inclined to it.  For it is mainly those who, even though they have a good deal of common sense, still do not have a high opinion of their competence [who are most inclined toward wonder].

And though this passion seems to diminish with use, because the more one encounters rare things one wonders at, the more one routinely ceases wondering at them and comes to think that any that may be presented thereafter will be ordinary, nevertheless, when it is excessive and makes one fix one’s attention solely upon the first image of presented objects without acquiring any other knowledge of them, it leaves behind a habit which disposes the soul to dwell in the same way upon all other presented objects, provided they appear the least bit new to it.  This is what prolongs the sickness of the blindly curious — that is, those who investigate rarities only to wonder at them and not to understand them.  For they gradually become so given to wonder that things of no importance are no less capable of engaging them than those whose investigation is more useful.

Descartes found many ways to break with the Aristotelian tradition that had dominated Western thought for over a millennium, but none, I think, more profound than this dismissal of the wonder tradition.  In this brief passage, he places intelligence {l’esprit} above contemplation as the key trait of philosophizing.

A consequence of this is that the nature of philosophical questioning must also change.  In a Cartesian world, questions must have a practical goal.  Difficult problems can be broken down into their components, and if necessary those must be broken down into their components, until we arrive at a series of simple problems that can be solved easily.

Descartes’ position is only strengthened by what is often called the scandal of philosophy: why, given all this time, has philosophy failed to answer the questions it originally set for itself?

  • Does God exist? 
  • Is there life after death? 
  • Do we possess free will?
  • What is happiness, and is it attainable?
  • What is Justice?
  • What is Knowledge?
  • What is Virtue?

Another way to look at the scandal, however, is not as a problem of lacking answers to these questions, but rather as a problem of having an overabundance of answers.  Philosophy, over the centuries, has answered the question of God’s existence with both a yes and a no.  There are five canonical proofs of God’s existence, as well as a multitude of critical analyses of each of these proofs.  We are told that Occam’s Razor, originally a tool of theological discourse, demands that we reject God’s existence.  At the same time, we are told that Occam’s Razor, the principle that simple answers are preferable to complex answers, itself depends on a rational universe for its justification; for the only thing that can guarantee that the universe is simple and comprehensible, rather than Byzantine in its complexity, is God Himself.

The scandal of philosophy is itself based on a presupposition: that this overabundance of answers, and the lack of definitive answers, is contrary to the purpose of philosophical questioning.  Yet we know of other traditions in which the lack of answers is indeed a central goal of philosophical questioning.

Zen koans are riddles Zen masters give to their students to help them achieve enlightenment.  Students are expected to meditate on their koans for years, until the koan successfully works its effect on them, bringing them in an intuitive flash into the state of satori.

  • Does a dog have Buddha-nature?
  • What is the sound of one hand clapping?
  • What was your original face before you were born?
  • If you meet the Buddha, kill him.

I dabble in the collecting of questions, and with regard to this habit, the observation Descartes makes above about the curious “who investigate rarities only to wonder at them and not to understand them” fits me well.  One of my favorite sets of questions comes from Robert Graves’s The White Goddess, a book which, starting from an analysis of The Romance of Taliesin that makes Frazer’s Golden Bough seem pedestrian by comparison, attempts to unravel the true purpose of poetry and, in the process, answers the following questions:

  • Who cleft the Devil’s foot?
  • What song did the Sirens sing?
  • What name did Achilles use when he hid from the Achaeans in the women’s tent?
  • When did the Fifty Danaids come with their sieves to Britain?
  • What secret was woven into the Gordian Knot?
  • Why did Jehovah create trees and grass before he created the Sun, Moon and stars?
  • Where shall Wisdom be found?

Another set comes from Slavoj Zizek’s Enjoy Your Symptom!, in which Zizek elucidates certain gnomic pronouncements of Jacques Lacan through an analysis of mostly 50’s Hollywood movies:

  • Why does a letter always arrive at its destination?
  • Why is a woman a symptom of man?
  • Why is every act a repetition?
  • Why does the phallus appear?
  • Why are there always two fathers?

Modern life provides its own imponderables:

  • Paper or plastic?
  • Hybrid or Civic?
  • Diet or exercise?
  • Should I invest in my 401K or pay down my college loans?
  • Should I wait to get an HD TV?
  • When shall I pull out of the stock market?

There are no definitive right or wrong answers to these questions.  Rather, how we approach these questions as well as how we respond to them contribute to shaping who we are.  In his short work, Existentialism and Human Emotions, Sartre tells an anecdote about a student who once asked him for advice.  The student wanted to leave home and join the French Resistance.  The reasons were clear to him. The Germans were illegitimate occupiers and it was the duty of every able-bodied Frenchman to fight against them.  At the same time, the student had a sickly mother who required his assistance, and leaving her would not only break her heart, but he might possibly never see her again.  To leave her would entail sacrificing his filial duties, while not to leave her would entail abandoning his moral duty.  To this student, caught in the grips of a mortal quandary,  Sartre offered the unexpected advice: choose!

…[I]n  creating the man that we want to be, there is not a single one of our acts which does not at the same time create an image of man as we think he ought to be.  To choose to be this or that is to affirm at the same time the value of what we choose, because we can never choose evil.  We always choose the good, and nothing can be good for us without being good for all.

But this isn’t the whole truth.  There are also choices we make that appear arbitrary at the time, committed without any thought of ‘man as he ought to be,’ but which turn out to have irreversible consequences for who we become.  In the film The Goalie’s Anxiety at the Penalty Kick, Wim Wenders follows a goalie who is kicked off of his soccer team after failing to defend against a penalty kick that costs his team the game.  The goalie proceeds aimlessly through a series of pointless and reprehensible acts.  I once asked a soccer-playing friend if the circumstances of the penalty kick are as they were described in the movie, and he said yes.  Before the penalty kick, against a skilled opponent, the goalie has no idea which way the ball will go.  He stands in the middle between the two posts and must choose in which direction he will leap, without enough information to determine whether his choice is the right one or the wrong one.  The only thing he knows is that if he does not leap, he will be certain to fail.

All these theoretical questions, and the survey of the theory of questioning in general, are intended to provide the background necessary for answering the very practical question:  Why do software projects fail?

Fred Brooks’s succinct answer to this is: software projects fail because software projects are very hard to do.  In other words, we are asking the wrong question.  A better way to phrase the problem is “Why are we still surprised when software projects fail?”

This question might be extended to other fields of endeavor:

  • Why are term papers turned in late?
  • Why do we fail to pay our bills on time?
  • Why do we lie when we are asked over the phone, “Were you sleeping?”

In the discipline of software development, it is often found as one item in a larger list of software koans:

  • Why do software projects fail?
  • Why does adding additional developers to a project slow the project down?
  • Why does planning extra time to complete a task result in no additional work being done?
  • Why do developers always underestimate?
  • Why are expectations always too high?
  • Why does no one ever read documentation?
  • Why do we write documentation that no one ever reads?
  • Why is the first release of any code full of bugs?

Here we have an overabundance of questions that can all be answered in the same way.  Kant phrased the answer in this way:

Out of the crooked timber of humanity, no straight thing was ever made.

Aristotle phrases it thus in Metaphysics II:

Perhaps, too, as difficulties are of two kinds, the cause of the present difficulty is not in the facts but in us.

Technical Interview Questions


Interviewing has been on my mind of late, as my company is in the middle of doing quite a bit of hiring.  Technical interviews for software developers are typically an odd affair, performed by technicians who aren’t quite sure of what they are doing upon unsuspecting job candidates who aren’t quite sure of what they are in for.

Part of the difficulty is the gap between the hiring managers, who are cognizant of the fact that they are not in a position to evaluate the skills of a given candidate, and the in-house developers, who are unsure of what they are supposed to be looking for.  Is the goal of a technical interview to verify that the interviewee has the skills she claims to possess on her resume?  Is it to rate the candidate against some ideal notion of what a software developer ought to be?  Is it to connect with a developer on a personal level, thus assuring through a brief encounter that the candidate is someone one will want to work with for the next several years?  Or is it merely to pass the time, in the middle of more pressing work, in order to have a little sport and give job candidates a hard time?

It would, of course, help if the hiring manager were able to give detailed information about the kind of job that is being filled, the job level, perhaps the pay range — but more often than not, all he has to work with is an authorization to hire “a developer”, and he has been tasked with finding the best that can be got within limiting financial constraints.  So again, the onus is upon the developer-cum-interviewer to determine his own goals for this hiring adventure.

Imagine yourself as the technician who has suddenly been handed a copy of a resume and told that there is a candidate waiting in the meeting room.  As you approach the door of the meeting room, hanging slightly ajar, you consider what you will ask of him.  You gain a few more minutes to think this over as you shake hands with the candidate, exchange pleasantries, apologize for not having had time to review his resume and look blankly down at the sheet of buzzwords and dates on the table before you.

Had you more time to prepare in advance, you might have gone to sites such as Ayende’s blog, or techinterviews.com, and picked up some good questions to ask.  On the other hand, the value of these questions is debatable, as it is not clear that they are necessarily a good indicator that the interviewee has actually been doing anything at his last job.  He may have been spending his time browsing these very same sites and preparing his answers by rote.  It is also not clear that understanding these high-level concepts will necessarily make the interviewee good in the role he will eventually be placed in, if hired.

Is understanding how to compile a .NET application with a command line tool necessarily useful in every (or any) real world business development task?  Does knowing how to talk about the observer pattern make him a good candidate for work that does not really involve developing monumental code libraries?  On the other hand, such questions are perhaps a good gauge of the candidate’s level of preparation for the interview, and can be as useful as checking the candidate’s shoes for a good shine to determine how serious he is about the job and what level of commitment he has put into getting ready for it.  And someone who prepares well for an interview will, arguably, also prepare well for his daily job.

You might also have gone to Joel Spolsky’s blog and read The Guerrilla Guide To Interviewing in order to discover that what you are looking for is someone who is smart and gets things done.  Which, come to think of it, is especially helpful if you are looking for superstar developers and have the money to pay them whatever they want.  With such a standard, you can easily distinguish between the people who make the cut and all the other maybe candidates.  On the other hand, in the real world, this may not be an option, and your objective may simply be to distinguish between the better maybe candidates and the less-good maybe candidates.  This task is made all the harder since you are interviewing someone who is already a bit nervous and, maybe, has not even yet been told what he will be doing in the job for which he is interviewing (look through computerjobs.com sometime to see how remarkably vague most job descriptions are).

There are many guidelines available online giving advice on how to identify brilliant developers (but is this really such a difficult task?)  What there is a dearth of is information on how to identify merely good developers — the kind that the rest of us work with on a daily basis and may even be ourselves.  Since this is the real purpose of 99.9% of all technical interviews, to find a merely good candidate, following online advice about how to find great candidates may not be particularly useful, and in fact may even be counter-productive, inspiring a sense of inferiority and persecution in a job candidate that is really undeserved and probably unfair.

Perhaps a better guideline for finding candidates can be found not in how we ought to conduct interviews in an ideal world (with unlimited budgets and unlimited expectations), but in how technical interviews are actually conducted in the real world.  Having done my share of interviewing, watching others interview, and occasionally being interviewed myself, it seems to me that in the wild, technical interviews can be broken down into three distinct categories.

Let me, then, impart my experience, so that you may find the interview technique most appropriate to your needs, if you are on that particular side of the table, or, conversely, so that you may better envision what you are in for, should you happen to be on the other side of the table.  There are three typical styles of technical interviewing which I like to call: 1) Jump Through My Hoops, 2) Guess What I’m Thinking, and 3) Knock This Chip Off My Shoulder.


Jump Through My Hoops


Jump Through My Hoops is, of course, a technique popularized by Microsoft and later adopted by companies such as Google.  In its classical form, it requires an interviewer to throw his Birkenstock-shod feet over the interview table and fire away with questions that have nothing remotely to do with programming.  Here are a few examples from the archives.  The questions often involve such mundane objects as manhole covers, toothbrushes and car transmissions, but you should feel free to add to this bestiary more philosophical archetypes such as married bachelors, morning stars and evening stars, Cicero and Tully, the author of Waverley, and other priceless gems of the analytic school.  The objective, of course, is not to hire a good car mechanic or sanitation worker, but rather to hire someone with the innate skills to be a good car mechanic or sanitation worker should his IT role ever require it.

Over the years, technical interviewers have expanded on the JTMH with tasks such as writing out classes with pencil and paper, answering technical trivia, designing relational databases on a whiteboard, and plotting out a UML diagram with crayons.  In general, the more accessories required to complete this type of interview, the better.

Some variations of JTMH rise to the level of Jump Through My Fiery Hoops.  One version I was involved with required calling the candidate the day before the job interview and telling him to write a complete software application to specification, which would then be picked apart by a team of architects at the interview itself.  It was a bit of overkill for an entry-level position, but we learned what we needed to out of it.  The most famous JTMFH is what Joel Spolsky calls The Impossible Question, which entails asking a question with no correct answer, and requires the interviewer to frown and shake his head whenever the candidate makes any attempt to answer the question.  This particular test is also sometimes called the Kobayashi Maru, and is purportedly a good indicator of how a candidate will perform under pressure.


Guess What I’m Thinking


Guess What I’m Thinking, or GWIT, is a more open ended interview technique.  It is often adopted by interviewers who find JTMH a bit too constricting.  The goal in GWIT is to get through an interview with the minimum amount of preparation possible.  It often takes the form, “I’m working on such-and-such a project and have run into such-and-such a problem.  How would you solve it?”  The technique is most effective when the job candidate is given very little information about either the purpose of the project or the nature of the problem.  This establishes for the interviewer a clear standard for a successful interview: if the candidate can solve in a few minutes a problem that the interviewer has been working on for weeks, then she obviously deserves the job.

A variation of GWIT which I have participated in requires showing a candidate a long printout and asking her, “What’s wrong with this code?”  The trick is to give the candidate the impression that there are many right answers to this question, when in fact there is only one: the one the interviewer is thinking of.  As the candidate attempts to triangulate on the problem with hopeful answers such as “This code won’t compile,” “There is a bracket missing here,” “There are no code comments,” and “Is there a page missing?” the interviewer can sagely reply “No, that’s not what I’m looking for,” “That’s not what I’m thinking of,” “That’s not what I’m thinking of, either,” “Now you’re really cold,” and so on.

This particular test is purportedly a good indicator of how a candidate will perform under pressure.


Knock This Chip Off My Shoulder


KTCOMS is an interviewing style often adopted by interviewers who not only lack the time and desire to prepare for the interview, but do not in fact have any time for the interview itself.  As the job candidate, you start off in a position of wasting the interviewer’s time, and must improve his opinion of you from there.

The interviewer is usually under a lot of pressure when he enters the interview room.  He has been working 80 hours a week to meet an impossible deadline his manager has set for him.  He is emotionally in a state both of intense technical competence over a narrow area, due to his lifeless existence for the past few months, and of great insecurity, as he has not been able to satisfy his management’s demands.

While this interview technique superficially resembles JTMFH, it is actually quite distinct in that, while JTMFH seeks to match the candidate to abstract notions about what a developer ought to know, KTCOMS is grounded in what the interviewer already knows.  His interview style is, consequently, nothing less than a Nietzschean struggle for self-affirmation.  The interviewee is put in the position of having to prove herself superior to the interviewer or else suffer the consequences.

Should you, as the interviewer, want to prepare for KTCOMS, the best thing to do is to start looking up answers to obscure problems that you have encountered in your recent project, and which no normal developer would ever encounter.  These types of questions, along with an attitude that the job candidate should obviously already know the answers, are sure to fluster the interviewee.

As the interviewee, your only goal is to submit to the superiority of the interviewer.  “Lie down” as soon as possible.  Should you feel any umbrage, or any desire to actually compete with the interviewer on his own turf, you must crush this instinct.  Once you have submitted to the interviewer (in the wild, dogs generally accomplish this by lying down on the floor with their necks exposed, and the alpha male accepts the submissive gesture by laying his paw upon the submissive animal) he will do one of two things: either he will accept your acquiescence, or he will continue to savage you mercilessly until someone comes in to pull him away.

This particular test is purportedly a good indicator of how a candidate will perform under pressure.


Conclusion


I hope you have found this survey of common interviewing techniques helpful.  While I have presented them as distinct styles of interviewing, this should certainly not discourage you from mixing and matching them as needed for your particular interview scenario.  The schematism I presented is not intended as prescriptive advice, but merely as a taxonomy of what is already to be found in most IT environments, from which you may draw as you require.  You may, in fact, already be practicing some of these techniques without even realizing it.

Jabberwocky


Download SAPISophiaDemo.zip – 2,867.5 KB


Following on the tail of the project I have been working on for the past month, a chatterbox (also called a chatbot) with speech recognition and text-to-speech functionality, I came across the following excerpted article in The Economist, available here if you happen to be a subscriber, and here if you are not:


Chatbots have already been used by some companies to provide customer support online via typed conversations. Their understanding of natural language is somewhat limited, but they can answer basic queries. Mr Carpenter wants to combine the flexibility of chatbots with the voice-driven “interactive voice-response” systems used in many call centres to create a chatbot that can hold spoken conversations with callers, at least within a limited field of expertise such as car insurance.

This is an ambitious goal, but Mr Carpenter has the right credentials: he is the winner of the two most recent Loebner prizes, awarded in an annual competition in which human judges try to distinguish between other humans and chatbots in a series of typed conversations. His chatbot, called Jabberwacky, has been trained by analysing over 10m typed conversations held online with visitors to its website (see jabberwacky.com). But for a chatbot to pass itself off as a human agent, more than ten times this number of conversations will be needed, says Mr Carpenter. And where better to get a large volume of conversations to analyse than from a call centre?

Mr Carpenter is now working with a large Japanese call-centre company to develop a chatbot operator. Initially he is using transcripts of conversations to train his software, but once it is able to handle queries reliably, he plans to add speech-recognition and speech-synthesis systems to handle the input and output. Since call-centre conversations tend to be about very specific subjects, this is a far less daunting task than creating a system able to hold arbitrary conversations.


Jabberwacky is a slightly different beast from the AIML infrastructure I used in my project.  Jabberwacky is a heuristics-based technology, whereas AIML is a design-based one that requires somebody to actually anticipate user interactions and script them in advance.
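
To make the contrast concrete, here is a representative AIML category of the sort one writes by hand (this particular rule is my own invention for illustration, not one of Dr. Wallace’s scripted dialogs): every input pattern you expect has to be anticipated and paired with a canned response.

    <category>
      <pattern>WHAT IS YOUR NAME</pattern>
      <template>My name is Sophia. What is yours?</template>
    </category>

Jabberwacky, by contrast, has no such hand-written rules; its responses are induced from the millions of conversations it has already held.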

All the same, it is a pleasant experience to find that one is serendipitously au courant, when one’s intent was to be merely affably retro.

SophiaBot: What I’ve been working on for the past month…

I have been busy in my basement constructing a robot with which I can have conversations and play games.  Except that the robot is more of a program, and I didn’t build the whole thing up from scratch, but instead cobbled together pieces that other people have created.  I took an Eliza-style interpreter written by Nicholas H. Tollervey (this is the conversation part) along with some scripted dialogs by Dr. Richard S. Wallace and threw it together with a Z-machine program written by Jason Follas, which allows my bot to play old Infocom games like Zork and The Hitchhiker’s Guide to the Galaxy.  I then wrapped these up in a simple workflow and added some new Vista/.NET 3.0 speech recognition and speech synthesis code so the robot can understand me.

I wrote an article about it for CodeProject, a very nice resource that allows developers from around the world to share their code and network.  The site requires registration to download code however, so if you want to play with the demo or look at the source code, you can also download them from this site.

Mr. Tollervey has a succinct article about the relationship between chatterboxes and John Searle’s Chinese Room problem, which relieves me of the responsibility for discussing the same.

Instead, I’ll just add some quick instructions:


The application is made up of a text output screen, a text entry field, and a default enter button. The initial look and feel is that of an IBM XT theme (the first computer I ever played on). This can be changed using voice commands, which I will cover later. There are three menus initially available. The File menu allows the user to save a log of the conversation as a text file. The Select Voice menu allows the user to select from any of the synthetic voices installed on her machine. Vista initially comes with “Anna”. Windows XP comes with “Sam”. Other XP voices are available depending on which versions of Office have been installed over the lifetime of that particular instance of the OS. If the user is running Vista, then the Speech menu will allow him to toggle speech synthesis, dictation, and the context-free grammars. By doing so, the user will have the ability to speak to the application, as well as have the application speak back to him. If the user is running XP, then only speech synthesis is available, since some of the features provided by .NET 3.0 and consumed by this application do not work on XP.

The Appearance menu will let you change the look and feel of the text screen.  I’ve also added some pre-made themes at the bottom of the Appearance menu.  If, after chatting with SophiaBot for a while, you want to play a game, just type or say “Play game.”  SophiaBot will present you with a list of the games available.  You can add more, actually, simply by dropping additional game files you find on the internet into the Program Files\Imaginative Universal\SophiaBot\Game Data\DATA folder.  (Jason’s Z-Machine implementation plays games that use version 3 and below of the game engine; I’m looking, rather lazily, into how to support later versions.)  You can go here to download more Zork-type games.  During a game, type or say “Quit” to end your session. “Save” and “Restore” keep track of your current position in the game, so you can come back later and pick up where you left off.

Speech recognition in Vista has two modes: dictation and context-free recognition. Dictation uses context, that is, an analysis of preceding words and words following a given target of speech recognition, in order to determine what word was intended by the speaker. Context-free speech recognition, by way of contrast, uses exact matches and some simple patterns in order to determine if certain words or phrases have been uttered. This makes context-free recognition particularly suited to command and control scenarios, while dictation is particularly suited to situations where we are simply attempting to translate the user’s utterances into text.
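
For the curious, here is a minimal, self-contained sketch of what the two modes look like against the System.Speech API (the command phrases are taken from the application’s command grammar described below; in SophiaBot itself the modes are toggled from the Speech menu rather than loaded side by side as they are here):

    using System;
    using System.Speech.Recognition;

    class RecognitionModes
    {
        static void Main()
        {
            using (var recognizer = new SpeechRecognitionEngine())
            {
                recognizer.SetInputToDefaultAudioDevice();

                // Dictation mode: free-form speech-to-text that uses the
                // surrounding words to decide what was actually said.
                recognizer.LoadGrammar(new DictationGrammar());

                // Context-free mode: exact phrases and simple patterns,
                // suited to command and control.
                var commands = new Choices("LIST COLORS", "LIST GAMES", "LIST FONTS");
                recognizer.LoadGrammar(new Grammar(new GrammarBuilder(commands)));

                recognizer.SpeechRecognized += (sender, e) =>
                    Console.WriteLine("Heard: " + e.Result.Text);

                recognizer.RecognizeAsync(RecognizeMode.Multiple);
                Console.WriteLine("Listening. Press Enter to exit.");
                Console.ReadLine();
            }
        }
    }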

You should begin by trying to start up a conversation with Sophia using the textbox, just to see how it works, as well as her limitations as a conversationalist. Sophia uses certain tricks to appear more lifelike. She throws out random typos, for one thing. She also is a bit slower than a computer should really be. This is because one of the things that distinguish computers from people is the way they process information — computers do it quickly, and people do it at a more leisurely pace. By typing slowly, Sophia helps the user maintain his suspension of disbelief. Finally, if a text-to-speech engine is installed on your computer, Sophia reads along as she types out her responses. I’m not certain why this is effective, but it is how computer terminals are shown to communicate in the movies, and it seems to work well here, also. I will go over how this illusion is created below.
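
The mechanism is roughly as follows (a simplified sketch rather than the actual SophiaBot source; the timing values and the typo frequency are invented for illustration):

    using System;
    using System.Speech.Synthesis;
    using System.Threading;

    class TypingIllusion
    {
        static readonly Random Rand = new Random();

        // Print the response one character at a time, at an uneven, human
        // pace, while the synthesizer reads the same text aloud.
        static void TypeAndSpeak(SpeechSynthesizer voice, string response)
        {
            voice.SpeakAsync(response);           // speech runs in the background
            foreach (char c in response)
            {
                if (Rand.Next(40) == 0)           // occasional fake typo...
                {
                    Console.Write((char)(c + 1));
                    Thread.Sleep(200);
                    Console.Write("\b \b");       // ...noticed and corrected
                }
                Console.Write(c);
                Thread.Sleep(Rand.Next(30, 120)); // leisurely, irregular typing
            }
            Console.WriteLine();
        }

        static void Main()
        {
            using (var voice = new SpeechSynthesizer())
            {
                TypeAndSpeak(voice, "Hello. I have been expecting you.");
            }
        }
    }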

In Command\AIML\Game Lexicon mode, the application generates several grammar rules that help direct speech recognition toward certain expected results. Be forewarned: initially loading the AIML grammars takes about two minutes, and occurs in the background. You can continue to touch type conversations with Sophia until the speech recognition engine has finished loading the grammars and speech recognition is available. Using the command grammar, the user can make the computer do the following things: LIST COLORS, LIST GAMES, LIST FONTS, CHANGE FONT TO…, CHANGE FONT COLOR TO…, CHANGE BACKGROUND COLOR TO…. Besides the IBM XT color scheme, a black papyrus font on a linen background also looks very nice. To see a complete list of keywords used by the text-adventure game you have chosen, say “LIST GAME KEYWORDS.” When the game is initially selected, a new set of rules is created based on different two word combinations of the keywords recognized by the game, in order to help speech recognition by narrowing down the total number of phrases it must look for.
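
Here is a sketch of how that narrowing might work (the helper and the sample keywords are hypothetical; the real application derives its keyword list from the loaded game’s lexicon):

    using System;
    using System.Collections.Generic;
    using System.Speech.Recognition;

    class GameLexicon
    {
        // Build a grammar out of two-word combinations of a game's keywords,
        // so the recognizer only has to listen for plausible game commands.
        static Grammar FromKeywords(IList<string> keywords)
        {
            var phrases = new List<string>();
            foreach (string first in keywords)
                foreach (string second in keywords)
                    if (first != second)
                        phrases.Add(first + " " + second);

            return new Grammar(new GrammarBuilder(new Choices(phrases.ToArray())));
        }

        static void Main()
        {
            Grammar grammar = FromKeywords(new[] { "OPEN", "TAKE", "DOOR", "LAMP" });
            grammar.Name = "game lexicon";
            Console.WriteLine("Built grammar: " + grammar.Name);
        }
    }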

In dictation mode, the underlying speech engine simply converts your speech into words and has the core SophiaBot code process it in the same manner that it processes text that is typed in. Dictation mode is sometimes better than context-free mode for non-game speech recognition, depending on how well the speech recognition engine installed on your OS has been trained to understand your speech patterns. Context-free mode is typically better for game mode. Command and control only works in context-free mode.