I recently found out in a Louis Menand New Yorker review of The Yale Book of Quotations edited by Fred Shapiro (republished in The Best American Essays of 2008 edited by Adam Gopnik, another New Yorker contributor – was the fix in? – which I happened to come across while browsing at a going-out-of-business sale in the local Borders) that the phrase “Shit happens” first appeared in print in 1983 in a publication called UNC-CH Slang. For some reason, I thought the phrase antedated the year the Apple IIe was launched.
An argument can be made that the start of the build up to the BUILD conference is intimately tied to the rise of Apple, the on-again off-again relationship between Steve Jobs and Apple, the strange periods when Microsoft helped keep Apple afloat in order to maintain a legitimate competitor in the face of government investigations, and of course the release of a largish iPod Touch device known as the iPad which inaugurated the post-PC period.
Alternatively, we can say that the build up began during Bob Muglia’s interview with Mary Jo Foley at PDC 2010 (to which BUILD is the successor conference) and his fateful comment concerning Silverlight:
“… our strategy has shifted …”
The quite vociferous Silverlight community loudly denounced Bob Muglia, questioning his authority to declaim on the matter (he was in fact the President of the Microsoft Server and Tools Division at the time), as well as Mary Jo Foley for reporting it. The genie was only put back in the bottle when John Papa, a Microsoft Silverlight evangelist, quickly threw together the Silverlight 4 Firestarter to explain the Silverlight roadmap and Pete Brown, another Silverlight and WPF evangelist, started a more even-keeled and civil dialog about the matter on his blog.
Approximately a year later we find ourselves in the same position. As Bob Muglia indicated in last year’s interview, Silverlight continues to be an important platform for Windows Phone development and will have niches in video streaming and line-of-business applications. Silverlight 5 is still slated for release sometime in 2011. Overall, however, announcements from the Windows Team indicate that a greater emphasis will be placed on HTML5, jQuery and CSS3 as tools for developing on the new Windows 8 OS, while the role of .NET going forward has been more muted. All eyes are on the upcoming BUILD conference on September 13th to find out not only what Microsoft’s Windows 8/tablet strategy will be, but tangentially what the future of Silverlight and .NET in general will be.
(A question can profitably be raised here concerning why the uproar from the Silverlight community was so loud when the WPF community, arguably much larger, effectively rolled over at rumors and signs that the vision for WPF was being curtailed and amid proclamations that WPF was dead. Certainly part of the reason was that WPF developers could generally also see themselves as Silverlight developers and at the time, due to the superior branding of the Silverlight platform, many were in fact trying to do that on their resumes. But where does one go from Silverlight? As interesting as Silverlight for Windows Phone is, it still doesn’t carry the technical gravitas that being a Silverlight developer did. Jupiter/DirectUI might be that next technology, but it would have been reassuring had Steven Sinofsky and his team said a bit more about it. At the same time, many developers who stuck with WPF looked upon the Silverlight developers with a distinct lack of pity and a certain amount of schadenfreude as their world appeared to be crumbling around them.)
Instead of revealing the developer story on the official Windows 8 blog, Microsoft has mostly been mum about the contents of the BUILD conference. The official agenda has been left blank since it was first posted, while unofficial information about Windows 8 has mostly come from sites like WinRumors and Mary Jo Foley’s articles on ZDNet: that Windows 8 is being built for ARM chips, allowing it to be deployed on a greater variety of devices (though apparently not on phones, since Windows Phone continues to be developed on its own branch); that it may not support CD-ROM drives (really?); that it has a new optional Metro-themed shell for navigation; and that it supports native development in HTML as well as some future XAML-based platform called either Jupiter or DirectUI.
The official blog, in the meantime, has mostly dribbled out pieces on an Office-like interface for Windows Explorer and a new look for copy/paste. They are apparently taking a page out of Steve Jobs’ book and attempting to create mounting excitement by saving all the good stuff up for one big reveal. One can envision a reception similar to the original introduction of the iPhone, though skeptics might also call to mind the mayhem and setting of theater seats on fire that accompanied the premiere of Jean Renoir’s The Rules of the Game (privately I hope for this sort of response to an announcement so innovative that it simultaneously creates both fawning fans as well as fulminating foes seething with rage). In either case, the fact that Microsoft is taking this theatrical approach seems to indicate that they have something they believe is going to turn heads.
The challenges and expectations for Windows 8 resemble a sea of giant crabs: prickly and coming from all sides. The new operating system needs to present a broad new vision. It must at the same time be familiar enough that it doesn’t alienate either our grandmothers or the heads of our IT departments. It must work on a tablet (oh yes, it must!) as well as a desktop, while at the same time it must not sacrifice performance. It must sport a new consumer-facing look and live up to expanded consumer expectations while at the same time supporting the slew of legacy applications we have individually and corporately invested in.
In my small corner of the development world, it should also attract the broad range of web developers and designers that have for years avoided anything having to do with Microsoft while also placating the many Microsoft-based developers that have been so loyal to Microsoft over the past decade (though, admittedly, they had their reasons and did quite well for themselves by playing in Microsoft’s stack).
This last point is often overlooked in the debates about how unfortunate Windows 8 might be for Silverlight developers. Attracting the sets of skills and non-Microsoft thinking available in the traditionally non-Microsoft community would be the most revolutionary thing Microsoft has done in a long time. In general, these are people who not only know how to work well with designers but, in smaller shops, are expected to be developer-designer hybrids. It’s a way of looking at the world that the Microsoft community greatly needs in order to reinvigorate the way we do development as we are increasingly expected to build not only secure and stable applications, but also experiences that look as good as the ones people are seeing on their consumer devices – even as the clean separation we’ve always understood to exist between UIs for games and UIs for apps continues to evaporate. Bringing these developers into the Microsoft fold would be a huge treasure drop for Microsoft and certainly worth upsetting a few developers to accomplish. It is also oh … so … difficult, as we found out with the Expression tools over the past few years.
So what will the new Windows 8 be? Here are three possibilities going from the least revolutionary to the most, all of which will undoubtedly be wrong.
1. A New Shell: Son of Gadget
There is an official video available showing off a new look for Windows that ZDNet reports is called the MoSH or Modern Shell that people can opt out of if they prefer a “classic” look. It may also be called “Mosaic” – another code name floating around. It is Metro based and supports multi-touch, resembling the Metro look of Windows Phone in many ways including the tile based design. This was actually an idea publicly proposed around the time Windows Phone was first revealed. Why not simply build a Metro shell on top of Windows 7 and release it as a tablet solution?
Underneath the shell will be a typical next generation Windows supporting Silverlight, WPF, a ribbon bar, and all the legacy technologies and apps we have grown comfortable with. This would be a distinctly old wine in new bottles strategy. This is easily doable and Microsoft has experience with it from the “gadgets” concept introduced with Vista and extended in Windows 7. Like gadgets, MoSH components could be developed in a variety of technologies – which would explain some of the discussion around HTML5 vs Jupiter as rival development platforms for Win 8.
2. Web Based Apps: Everything 365
Microsoft has been building the technology around web-based solutions with their Office 365 cloud services, allowing users to employ either web-based versions of Office or native apps. Extending this more generally to consumers would allow lightweight software on tablets while making full versions of the apps available to desktop users. It would also explain the emphasis on HTML5, as lots of legacy apps would have to be ported to the web in order to make them work on lower-end hardware. On the other hand, we know that even sophisticated applications like Photoshop can work over the web.
Why not everything else?
3. Virtual OS: Legacy Mode
This is my fantasy scenario. With Windows 7, Microsoft introduced something called XP Mode, which allows users to run legacy apps on a virtual machine behind the scenes while making it look as if those apps were running in the host OS. Could this concept be extended? One of the things we currently know officially about Windows 8 is that it has Hyper-V 3 integration of some sort. What if the Jupiter/HTML5 piece really is the biggest part of the new OS (a truly new, game-changing and lightweight OS), while XP Mode-type functionality is used to support legacy apps transparently for our grandmothers and IT managers?
This would be a remarkable sleight-of-hand, reminding me of a Penn & Teller routine where they go through several layers of misdirection to create the illusion of a man standing by a light pole smoking a cigarette when neither the cigarette nor the light pole nor the man is real.
What if this could be combined with the Microsoft cloud computing strategy to host VHDs in the cloud so that a person could access their cloud-based OS from any device – a truly Web OS? VMware, after all, just recently introduced a similar concept called Project AppBlast. It wouldn’t be a huge stretch to believe that Microsoft is thinking along the same lines – especially if it provides an automagical solution for the backwards compatibility conundrum.
That would certainly be something to see as well as something that solves the problem of having a modern consumer look while preserving the power and functionality of traditional and legacy applications. In a scenario like this, all of our current development skills would still be relevant, from Silverlight and WPF to Visual Basic 6 and FoxPro. They would only be second class citizens on the OS to the extent that they are now virtual, rather than native, skills.
But I’m sure the reality will manage to be both more as well as less exciting than any of these suggestions … and the big reveal is only a few weeks away at this point. If I had to give a summary judgment of whether Microsoft’s wait-and-see strategy around Windows 8 has worked, I would have to say yes, absolutely.
As frustrating as the wait has been, the feints, misdirections and long silences have only whetted my appetite to find out what Windows 8 is hiding under the hood.
At the beginning of this month I was invited to give a keynote at the MadExpo conference opposite Jeff Prosise, who gave an amazing Thursday keynote address. This was heady stuff, due more to the subject matter I proposed, “What Recent Breakthroughs in Nerd Psychology Can Teach Us About Software Development”, than to my abilities as a public speaker.
The keynote attempted to address several intractable problems in software development using current psychological and neurological research:
- Why hasn’t the proliferation of software frameworks and tools made software development go any faster?
- Why does the obsolete technology we abandoned five years ago always eventually reappear as something new and trendy?
- Why do developers find it impossible to predict how long a task will take until it is completed?
- How much should a developer be paid for a mythical-man-month of work?
- What is this fear of “coupling” that all software architects seem to exhibit?
In the process, I also tried to solve another set of interesting problems:
- Whether the brain can be hacked
- The secret of the Mona Lisa’s smile
- Why multi-tasking is a myth
- Why nerds like puns
- Why the Turing Test is a red herring
The crux of the talk revolved around what is coming to be known as the Autism Spectrum of Disorders, which includes Autism, Asperger Syndrome and PDD-NOS (a catch-all for disorders that do not meet the requirements for Asperger Syndrome but approach it).
Various research has connected these disorders with what is commonly known as “the nerd”. The two papers I cited in the talk were Ioan James’s Autism and Mathematical Talent and Nicholas Putnam’s Revenge or tragedy: Do nerds suffer from a mild pervasive developmental disorder? which was published in Adolescent psychiatry: Developmental and clinical studies, Vol. 17.
Asperger Syndrome and PDD-NOS are interesting here for the qualities these disorders share with what we would think of as a high functioning software developer. Though a bit laudatory, here is a quote from the pediatrician Hans Asperger on the disorder that bears his name:
It seems that for success in science or art a dash of autism is essential. For success the necessary ingredient may be an ability to turn away from the everyday world, from the simple practical, an ability to rethink a subject with originality so as to create in new untrodden ways …
Mirror neurons, a neurological structure discovered in the late ’80s, are now believed to be associated with language acquisition and social aptitude. Some researchers associate a flaw in the functioning of mirror neurons with disorders like autism. If this is true, then it provides a clue as to why some people turn away from social interactions and, in turn, have a large reservoir of concentration and obsessiveness that can be turned to other, more mechanical, pursuits.
Pulling this talk together was deeply fascinating and I am indebted to my wife Tamara, a therapist as well as a highly accomplished hypno-therapist, for her help.
I also drew on several popular books for additional material. They are all very readable and fascinating:
The Tell-Tale Brain – V. S. Ramachandran
Sleights of Mind: What the Neuroscience of Magic Reveals About Our Everyday Perceptions — Stephen L. Macknik, Susana Martinez-Conde, Sandra Blakeslee
NurtureShock: New Thinking about Children – Po Bronson, Ashley Merryman
The Invisible Gorilla — Christopher Chabris, Daniel Simons
How We Decide – Jonah Lehrer
The Language Instinct – Steven Pinker
… and while not directly used in this talk, a clear inspiration for myself and the other authors in this brief bibliography is the book which inaugurated the “everything you know is wrong” genre of non-fiction books:
Freakonomics – Steven D. Levitt, Stephen J. Dubner
The slides for this talk (which include the Nerd-Geek-Dork Venn Diagram by Matthew Mason) can be found on my SkyDrive.
[For reference: a quick indicator of whether you are a nerd or a geek is this: if you can pull off horn-rimmed glasses, you are a geek. If you cannot pull off sunglasses, you are a nerd. I personally have trouble with sunglasses but, for some reason, keep trying.]
From a New Yorker portrait of horror maestro Guillermo Del Toro:
We drove east to Burbank. Del Toro is devoted to the Valley—he calls it “that blessed no man’s land that posh people avoid in L.A.” We pulled into Ribs U.S.A., a frayed establishment on Olive Avenue. Del Toro ordered ribs and a lemonade, along with a redundant appetizer of “riblets.”
I’m always surprised at the rapid rate of technological progress. I often sit and watch my son play through Halo, Call of Duty, or Rock Band and get nostalgic for the simpler days when I played Adventure, Kaboom and Pitfall.
Imagine my amazement, then, when I received the following job posting for a Teleport Engineer in my email:
United States – Georgia – Atlanta
Referral Bonus Eligible
Referral Bonus Amount _TBS
Posting Job Description
Qualifications: 3 to 5 years of experience working as a maintenance engineer in an earth station and/or broadcast facility.
3 to 5 years experience working with RF, high voltage, emergency power systems and high power RF amplifiers.
Two to four year technical degree, electronics related (or equivalent education/experience/training). Excellent customer services skills with attention to detail.
Demonstrated organizational skills and ability to prioritize and multi-task in a high-stress environment.
A sense of urgency in solving customer requests to ensure timely resolution.
Strong verbal and written communication skills in order to communicate with customers, peers and vendors.
Demonstrated ability to work in a team based environment to ensure 24x7x365 support of our customers.
Duties: Maintenance – troubleshooting, repair, calibration and preventative maintenance of equipment and systems at the teleports. Maintenance is done to ensure that the TBS customers have uplink and downlink resources available when required and to keep outage time to a minimum. In addition, maintenance is imperative to ensure that the teleport transmission and receive systems are compliant with requirements set forth by the Federal Communications Commission (FCC), the satellite providers and OSHA.
Projects – participation in the planning, installation and integration of new or replacement equipment and systems. May act in lead role on small projects.
Quality Control – ability to ensure outgoing, and incoming signals and content are compliant with standards set forth by the FCC, the satellite providers, TBS and good engineering practices. In addition this responsibility includes quality control of installations equipment and documentation utilized by the teleports.
Customer Service – provides the highest quality of customer service to our primary customers – Teleport operations, Distribution technology, CNN satellites/SNG, TEN and the TBS networks – by responding to requests in a timely manner.
Documentation – thoroughly documents problems and issues in the operations log. Timely and accurate completion on assigned projects including wire
numbers, systems documentation, drawings, time sheets, time tracking and work orders.
Turner Broadcasting System, Inc. and its subsidiaries are Equal Opportunity Employers.
And here I’ve been waiting for a job as a flying car mechanic.
A recent release from the Associated Press concerning the Authors Guild’s concerns with the Kindle 2’s text-to-speech feature left many computer programmers guffawing, but it occurs to me that for those not familiar with text-to-speech technology, the humorous implications may not be self-evident, so I will attempt to parse it:
“NEW YORK (AP) — The guild that represents authors is urging writers to be wary of a text-to-speech feature on Amazon.com Inc.’s updated Kindle electronic reading device.
“In a memo sent to members Thursday, the guild says the Kindle 2’s “Read to Me” feature “presents a significant challenge to the publishing industry.”
“The Kindle can read text in a somewhat stilted electronic voice. But the Authors Guild says the quality figures to “improve rapidly.” And the guild worries that could undermine the market for audio books.”
The quality of text-to-speech depends on the library of phonemes available on the reading device and the algorithms used to put them all together. A simple example is when you call the operator and an automated voice reads back a phone number to you with a completely unnatural intonation, and you realize that the pronunciation of each number has been clipped and then taped back together without any sort of context. That is a case, moreover, where the relationship between vocalization and semantics is one-to-one. The semantic meaning of the number “1” is always mapped to the sound of someone pronouncing the word “one”. In the case of text-to-speech, no one has been sitting with the OED and carefully pronouncing every word for a similar one-to-one mapping. Instead, the software program on the reading device must use an algorithm to guess at the set of phonemes that are intended by a collection of letters and generate the sounds it associates with those phonemes.
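The contrast between the two cases can be sketched in a few lines of Python. This is purely illustrative: the clip names, the phoneme labels and the tiny rule table are my own inventions, not any real speech engine’s, but they show why the operator’s voice merely sounds stilted while a general text-to-speech engine can get a word outright wrong.

```python
# One-to-one case: every token maps to exactly one recorded clip.
DIGIT_CLIPS = {
    "1": "one.wav", "2": "two.wav", "3": "three.wav",
}

def read_phone_number(digits):
    # Clips are simply concatenated -- hence the flat, unnatural intonation.
    return [DIGIT_CLIPS[d] for d in digits]

# Text-to-speech case: no one has recorded every word, so the engine
# must *guess* a phoneme sequence from the spelling. A toy rule table:
LETTER_TO_PHONEME = {
    "ou": ["AW"],   # right for "house" -- but wrong for "through"!
    "gh": [],       # silent in "through" -- but wrong for "ghost"!
    "t": ["T"], "h": ["HH"], "r": ["R"],
}

def guess_phonemes(word):
    """Greedy longest-match rule lookup: exactly the kind of
    algorithm that mispronounces irregular English spellings."""
    phonemes, i = [], 0
    while i < len(word):
        for size in (2, 1):
            chunk = word[i:i + size]
            if chunk in LETTER_TO_PHONEME:
                phonemes.extend(LETTER_TO_PHONEME[chunk])
                i += size
                break
        else:
            i += 1  # unknown letter: skip (a real engine has fallbacks)
    return phonemes
```

Feeding “through” into the toy engine yields the phonemes T, HH, R, AW: the “ou” rule that works for “house” produces the wrong vowel here, which is the GPS-style mangling described below.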
The problem of intonation is still there, along with the additional issue of the peculiarities of English spelling. If you have a GPS system in your car, then you are familiar with the results. Bear in mind that your GPS system, in turn, is bungling up what is actually a very particularized vocabulary. The books that the Kindle’s “Read to Me” feature will be dealing with have more in common with Borges’s labyrinth than Rand McNally’s road atlas.
While text-to-speech technology will indeed improve over time, it won’t be improving in the Kindle 2, which comes with one software bundle that reads in just one way. I worked on a text-to-speech program a while back (if you have Vista, you can download it here) that combines an Eliza engine with the Vista operating system’s text-to-speech functionality. One of the things I immediately wanted to do was to be able to switch out voices, and what I quickly found out was that I couldn’t get any new voices. Vista came with a feminine voice with an American accent, and that was about it unless one wanted to use a feminine voice with a Pidgin-English accent that is included with the Chinese speech pack. The only masculine voice Microsoft provided was available for Windows XP, and it wasn’t forward compatible.
It simply isn’t easy to switch out voices, much less switch out speech engines on a given platform, and seeing that we aren’t paying for a software package when we buy the Kindle but rather only the device (with much less power than a Microsoft operating system), it can be said with some confidence that the Kindle 2 is never going to be able to read like Morgan Freeman.
The Kindle 2’s text-to-speech capabilities, or lack thereof, are not going to undermine the market for audio books any more than public lectures by Stephen Hawking will undermine sales of his books. They are simply different things.
“It is telling authors and publishers to consider asking Amazon to disable the audio function on e-books it licenses.”
This is what is commonly referred to as the business requirement from hell. It assumes that something is easy out of a serious misunderstanding of how a given technology actually works. Text-to-speech technology is not based on anything inherent to the books Amazon is trying to peddle. It isn’t, for what it’s worth, even associated with metadata about the books Amazon is trying to peddle. Instead, it is a free-roaming program that will attempt to read any text you feed it. Rather than a CD that is sold with the book, it has a greater similarity to a homunculus living inside your computer and reading everything out loud to you.
The proposal from the Authors Guild assumes that something must be taken off of the e-books in order to disable the text-to-speech feature. In fact, instructions not to read those certain e-books must be added to the e-book metadata, and each Kindle 2 homunculus must in turn be taught to look for those instructions and act accordingly, in order to fulfill this requirement. This is a non-trivial rewrite of the underlying Kindle software as well as of the thousands of e-book images that Amazon will be selling — nor can the files already living on people’s devices be recalled to add the additional metadata.
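A minimal sketch of what fulfilling the requirement actually entails, under the assumption of a hypothetical "tts_disabled" metadata flag (neither the flag name nor the format is Amazon’s real one): every e-book file must gain new metadata, and every device must be taught to honor it.

```python
def add_tts_restriction(ebook_metadata):
    """Step 1: each e-book file must *gain* a new metadata entry --
    a rewrite of the whole catalog, not a removal of anything."""
    ebook_metadata = dict(ebook_metadata)
    ebook_metadata["tts_disabled"] = True  # hypothetical flag name
    return ebook_metadata

def read_aloud(device_software_version, ebook_metadata, text):
    """Step 2: the reader software itself must be taught to look
    for the flag -- older firmware doesn't know it exists."""
    knows_about_flag = device_software_version >= 2  # hypothetical cutover
    if knows_about_flag and ebook_metadata.get("tts_disabled"):
        return None  # refuse to speak
    return f"speaking: {text}"
```

Note that a device still running the old software happily reads even a flagged book, which is precisely why the files already living on people’s devices can’t simply be recalled.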
“Amazon spokesman Drew Herdener said the company has the proper license for the text-to-speech function, which comes from Nuance Communications Inc.”
This is just legalese on Amazon’s part that intentionally misunderstands the Authors Guild’s concerns as well as the legal issues involved. The Authors Guild isn’t accusing Amazon of not having rights to the text-to-speech software. They are asking whether using text-to-speech on their works doesn’t violate pre-existing law.
The answer to that, in turn, concerns metaphors, as many legal matters ultimately do. What metaphor does text-to-speech fall under? Is it like a CD of a reading of a book, which generates additional income from an author’s labor? Or is it like hiring Morgan Freeman to read Dianetics to you? In which case, beyond the price of the physical book, Mr. Freeman should certainly be paid, but the Church of Scientology should not.
Like many others, I recently received the fateful email notifying me that Lutz Roeder will be giving up his work on .NET Reflector, the brilliant and essential tool he developed to peer into the internal implementation of .NET assemblies. Of course the whole idea of reflecting into an assembly is cheating a bit, since one of the principles of OO design is that we don’t care about implementations, only about contracts. It gets worse, since one of the main reasons for using .NET Reflector is to reverse engineer someone else’s (particularly Microsoft’s) code. Yet it is the perfect tool when one is good at reading code and simply needs to know how to do something special — something that cannot be explained, but must be seen.
While many terms in computer science are drawn from other scientific fields, reflection appears not to be. Instead, it is derived from the philosophical “reflective” tradition, and is a synonym for looking inward: introspection. Reflection and introspection are not exactly the same thing, however. This is a bit of subjective interpretation, of course, but it seems to me that unlike introspection, which is merely a turning inward, reflection tends to involve a stepping outside of oneself and peering at oneself. In reflection, there is a moment of stopping and stepping back; the “I” who looks back on oneself is a cold and appraising self, cool and objective as a mirror.
Metaphors pass oddly between the world of philosophy and the world of computer science, often giving rise to peculiar reversals. When concepts such as memory and CPUs were being developed, the developers of these concepts drew their metaphors from the workings of the human mind. The persistent storage of a computer is like the human faculty of memory, and so it was called “memory”. The CPU works like the processing of the mind, and so we called it the central processing unit, sitting in the shell of the computer like a homunculus viewing a theater across which data is streamed. Originally it was the mind that was the given, while the computer was modeled upon it. Within a generation, the flow of metaphors has been reversed, and it is not uncommon to find arguments about the computational nature of the brain based on analogies with the workings of computers. Isn’t it odd that we remember things, just like computers remember things?
The ancient Skeptics had the concept of epoche to describe this peculiar attitude of stepping back from the world, but it wasn’t until Descartes that this philosophical notion became associated with the metaphor of optics. In a letter to Arnauld from 1648, Descartes writes:
“We make a distinction between direct and reflective thoughts corresponding to the distinction we make between direct and reflective vision, one depending on the first impact of the rays and the other on the second.”
This form of reflective thought, in turn, also turns up at an essential turning point in Descartes’ discussion of his Method, when he realizes that his moment of self-awareness is logically dependent on something higher:
“In the next place, from reflecting on the circumstance that I doubted, and that consequently my being was not wholly perfect, (for I clearly saw that it was a greater perfection to know than to doubt,) I was led to inquire whence I had learned to think of something more perfect than myself;”
Descartes uses the metaphor in several places in the Discourse on Method. In each case, it is as if, after doing something, for instance doubting, he is looking out the corner of his eye at a mirror to see what he looks like when he is doing it, like an angler trying to perfect his cast or an orator attempting to improve his hand gestures. In each case, what one sees is not quite what one expects to see; what one does is not quite what one thought one was doing. The act of reflection provides a different view of ourselves from what we might observe from introspection alone. For Descartes, it is always a matter of finding out what one is “really” doing, rather than what one thinks one is doing.
This notion of philosophical “true sight” through reflection is carried forward, on the other side of the channel, by Locke. In his Essay Concerning Human Understanding, Locke writes:
“This source of ideas every man has wholly in himself; and though it be not sense, as having nothing to do with external objects, yet it is very like it, and might properly enough be called internal sense. But as I call the other Sensation, so I call this REFLECTION, the ideas it affords being such only as the mind gets by reflecting on its own operations within itself. By reflection then, in the following part of this discourse, I would be understood to mean, that notice which the mind takes of its own operations, and the manner of them, by reason whereof there come to be ideas of these operations in the understanding.”
Within a century, reflection becomes so ingrained in philosophical thought, if not identified with it, that Kant is able to talk of “transcendental reflection”:
“Reflection (reflexio) is not occupied about objects themselves, for the purpose of directly obtaining conceptions of them, but is that state of the mind in which we set ourselves to discover the subjective conditions under which we obtain conceptions.
“The act whereby I compare my representations with the faculty of cognition which originates them, and whereby I distinguish whether they are compared with each other as belonging to the pure understanding or to sensuous intuition, I term transcendental reflection.”
In the 20th century, the reflective tradition takes a peculiar turn. While the phenomenologists continued to use it as the central engine of their philosophizing, Wilfrid Sellars began his attack on “the myth of the given” upon which phenomenological reflection depended. From an epistemological viewpoint, Sellars questions the implicit assumption that we, as thinking individuals, have any privileged access to our own mental states. Instead, Sellars posits that what we actually have is not clear vision of our internal mental states, but rather a culturally mediated “folk psychology” of mind that we use to describe those mental states. In one fell swoop, Sellars sweeps away the Cartesian tradition of self-understanding that informs the cogito ergo sum.
In a sense, however, this isn’t truly a reversal of the reflective tradition but merely a refinement. Sellars and his contemporary heirs, such as the Churchlands and Daniel Dennett, certainly provided a devastating blow to the reliability of philosophical introspection. The Cartesian project, however, was not one of introspection, nor is the later phenomenological project. The “given” was always assumed to be unreliable in some way, which is why philosophical “reflection” is required to analyze and correct the “given.” All that Sellars does is to move the venue of philosophical reflection from the armchair to the laboratory, where it no doubt belongs.
A more fundamental attack on the reflective tradition came from Italy approximately two hundred years before Sellars. Giambattista Vico saw the danger of the Cartesian tradition of philosophical reflection as lying in its undermining of the given of cultural institutions. A professor of oratory and law, Vico believed that common understanding held a society together, and that the dissolution of civilizations occurred not when those institutions no longer held, but rather when people began to doubt that they even existed. On the face of it, this sounds like the rather annoying contemporary arguments against “cultural relativism”, but it is actually a bit different. Vico’s argument is rather that we all live in a world of myths and metaphors that help us to regulate our lives, and in fact contribute to what makes us human and able to communicate with one another. In the 1730 edition of the New Science, Vico writes:
“Because, unlike in the time of the barbarism of sense, the barbarism of reflection pays attention only to the words and not to the spirit of the laws and regulations; even worse, whatever might have been claimed in these empty sounds of words is believed to be just. In this way the barbarism of reflection claims to recognize and know the just, what the regulations and laws intend, and endeavors to defraud them through the superstition of words.”
For Vico, the reflective tradition breaks down those civil bonds by presenting man as a rational agent who can navigate the world of social institutions as an individual, the solitary cogito who sees clearly, and coolly, the world as it is.
This begets the natural question: does reflection really provide us with true sight, or does it merely dissociate us from our inner lives in such a way that we only see what we want to see? In computer science, of course (not that this should be any guide to philosophy), the latter is the case. Reflection is accomplished by publishing metadata about a code library, and that metadata may or may not be true. It does not allow us to view the code as it really is, but rather provides a mediated view of the code that travels alongside it. We assume it is reliable, but there is no way of really knowing until something goes wrong.
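In the .NET world this metadata takes the form of attributes and assembly manifests; the same gap between published metadata and actual behavior can be sketched in Python, where type annotations are pure metadata that the runtime never verifies (the `transfer` function below is purely hypothetical):

```python
import inspect

def transfer(amount: int) -> bool:
    """Transfers funds. Claims to return a bool on success."""
    # The actual behavior quietly diverges from what the metadata promises.
    return "done"  # a string, despite the annotation

# Reflection reads the published metadata, not the code as it really is.
sig = inspect.signature(transfer)
print(sig.return_annotation)   # the metadata says: bool
print(type(transfer(100)))     # reality says: str
```

The reflective view and the actual behavior only have to agree by convention, and nothing complains until something downstream trusts the metadata and breaks.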
Anne Applebaum has written one of the better obituaries for Alexander Solzhenitsyn in her column for the Washington Post:
"Even Solzhenitsyn’s expulsion from Russia in 1974 only increased his notoriety, as well as the impact of "The Gulag Archipelago." Though it was based on "reports, memoirs and letters by 227 witnesses," the book was not quite a straight history — obviously, Solzhenitsyn did not have access to then-secret archives — but, rather, an interpretation of history. Partly polemical, partly autobiographical, both emotional and judgmental, it aimed to show that, contrary to what many believed, the mass arrests and concentration camps were not an incidental phenomenon but an essential part of the Soviet system — and had been from the very beginning.
"Not all of this was new: Credible witnesses had reported on the growth of the Gulag and the spread of terror since the Russian Revolution. But what Solzhenitsyn produced was simply more thorough, more monumental and more detailed than anything that had preceded it. His account could not be dismissed as a single man’s experience. No one who dealt with the Soviet Union, diplomatically or intellectually, could ignore it. So threatening was the book to certain branches of the European left that Jean-Paul Sartre himself described Solzhenitsyn as a "dangerous element." Its publication certainly contributed to the recognition of "human rights" as a legitimate element of international debate and foreign policy.
"His manuscripts were read and pondered in silence, and the thought he put into them provoked his readers to think, too. In the end, his books mattered not because he was famous or notorious but because millions of Soviet citizens recognized themselves in his work: They read his books because they already knew that they were true."
It is a peculiar meme in Western Culture that, at some level, the evil of Stalin’s Soviet regime cannot be viewed on the same level as, say, Hitler’s Third Reich. It sometimes takes the form of faint attempts to explain it away, or to see it as an aberration of the Soviet state, and generally ends in a change of subject. This aura of lingering romanticism about the Soviet State among Westerners is odd and, I think, rather inexplicable. A meme is probably the best way to describe it.
In Russia itself, the attitude is perhaps easier to understand. No one likes to be reminded of their own sins, and no one likes bad news that is unlikely to gain them anything. In her book, Gulag: A History, Applebaum describes the typical reactions of people she encounters in Russia once they discover that she is doing a historical investigation of the Gulag system.
"At first, my presence only added to their general merriment. It is not every day one meets a real American on a rickety ferry boat in the middle of the White Sea, and the oddity amused them. They wanted to know why I spoke Russian, what I thought of Russia, how it differs from the United States. When I told them what I was doing in Russia, however, they grew less cheerful. An American on a pleausre cruise, visiting the Solovetsky Islands to see the scenery and the beautiful old monastery — that was one thing. An American visiting the Solovetsky Islands to see the remains of the concentration camp — that was something else.
"One of the men turned hostile. ‘Why do you foreigners only care about the ugly things in our history?’ he wanted to know. ‘Why write about the Gulag? Why not write about our achievements? We were the first country to put a man into space!’ By ‘we’ he meant ‘we Soviets.’
"His wife attacked me as well. ‘The Gulag isn’t relevant anymore,’ she told me. ‘We have other troubles here. We have unemployment, we have crime. Why don’t you write about our real problems, instead of things that happened a long time ago?’
"In my subsequent travels around Russia, I encountered these four attitudes to my project again and again. ‘It’s none of your business,’ and ‘it’s irrelevant’ were both common reactions. Silence — or an absence of opinion, as evinced by a shrug of the shoulders — was probably the most frequent reaction. But there were also people who understood why it was important to know about the past…"
This is toward the end of the book. Just as interesting is how the book begins, with an observation on a bridge.
"Yet although they lasted as long as the Soviet Union itself, and although many millions of people passed through them, the true history of the Soviet Union’s concentration camps was, until recently, not at all well known. By some measures, it is still not known. Even the bare facts recited above, although by now familiar to most Western scholars of Soviet history, have not filtered into Western popular consciousness.
"I first became aware of this problem several years ago, when walking across the Charles Bridge, a major tourist attraction in what was then newly democratic Prague. There were buskers and hustlers along the bridge, and every fifteen feet or so someone was selling precisely what one would expect to find for sale in such a postcard-perfect spot. Paintings of appropriately pretty streets were on display, along with bargain jewelry and ‘Prague’ key chains. Among the bric-a-brac, one could buy Soviet military paraphernalia: caps, badges, belt buckles, and little pins, the tin Lenin and Brezhnev images that Soviet schoolchildren once pinned to their uniforms.
"The sight struck me as odd. Most of the people buying the Soviet paraphernalia were Americans and West Europeans. All would be sickened by the thought of wearing a swastika. None objected, however, to wearing the hammer and sickle on a T-shirt or a hat. It was a minor observation, but sometimes, it is through just such minor observations that a cultural mood is best observed. For here, the lesson could not have been clearer: while the symbol of one mass murder fills us with horror, the symbol of another murder makes us laugh."
I do not play the game myself, but a friend tells me that there is a similar controversy in Civilization IV concerning the presence of Stalin as a player character in this PC game and the absence of Hitler. Here is a small flame war over it, with links to more flame wars. Another friend, who is ethnic Chinese, resents the presence of Mao in the game.
Perhaps the greatest trick the Devil ever played, to paraphrase Keyser Söze, was to convince people that he was Adolf Hitler, while men like Alexander Solzhenitsyn worked to convince us that things were otherwise. To quote the man himself:
"If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being. And who is willing to destroy a piece of his own heart?” – The Gulag Archipelago
According to tradition, the tritone was called the Devil’s Chord or the Diabolus in Musica, a sound so dissonant and so puissant it was believed to be capable of raising the Lord of Hell himself. For this reason, in its irrationality, the Roman Catholic Church banned the Devil’s Triad, on pain of excommunication. Today, of course, bands such as Metallica and Black Sabbath use the tritone on a regular basis with no adverse effects.
The irrational is a powerful force that may be harnessed, dear reader, by those willing to play on the fringes of reality. Three magical phrases, irrational yet powerful and well known to the practitioners of the dark arts, can be invoked by anyone who desires to kill a technical project they dislike. Today, dear reader, I will teach you these three phrases.
But first, a word about motivations. According to Nietzsche, the driving force behind modern man’s desire for power is, tout court, resentment. We all resent the guy who comes in the middle of a software project and starts making suggestions about how to improve it. As the new guy, in turn, we resent the old and crusty way things are done, as if the way things are done is the only way. Resentment, in other words, is the mother of invention when it comes to technology, and we each, in our own way, embrace it as we strive toward a new tomorrow. In a perfect world, we may all act as the angels, but in the real world, we may occasionally be forced to make deals ex inferis. Which is not to recommend what I am about to teach you. I ask you, moreover, to use these techniques judiciously. One should not call upon the powers of the underworld lightly. But should you find yourself in a situation where rational discourse is no longer possible, and rhetorical brute force is required, then these phrases may be of use to you.
1. It’s too complex. It’s not maintainable.
This is a wonderful phrase. It is universally applicable since any useful piece of code will end up being complex, and one can never overemphasize the incompetence of one’s peers when discussing maintainability. And with luminaries like Joel Spolsky and Jeff Atwood backing you, how can you go wrong? If you want to kill any technology — WCF, WPF, .NET Remoting, 3-tier architecture — just invoke this magic phrase and it will wither away.
2. It’s not scalable.
Amazingly enough, this diabolical mantra can be called upon without any evidence. No one will ever turn around and ask you to justify your claim — with a load tester or anything else. Simply say these magic words and your enemies will cower before you. Anything cool — like reflection, say — will cause a certain amount of performance degradation. This is normal, of course. In software there are always tradeoffs, and exchanging performance for other advantages such as robustness and decoupling is the norm. Unless, of course, you make tradeoffs impossible. The magic phrase “It’s not scalable” instantly makes any tradeoff seem impossible. It’s all very well, after all, to lose 5 milliseconds on a transaction, but what happens when you have a gazillion transactions?!!! That’s 5 milli-gazillion units of time that you have cost the company, and time is money! That’s 5 milli-gazillion dollars you’ve cost the company! By golly, this solution is not scalable!
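There is, admittedly, a kernel of truth buried in the incantation: reflective access does cost something. The honest response is to measure the tradeoff rather than gesture at gazillions. A minimal sketch in Python (the `Order` class is invented for illustration, and absolute timings will vary by machine):

```python
import timeit

class Order:
    """A stand-in domain object; purely illustrative."""
    def total(self):
        return 42

order = Order()

# Direct call versus reflective lookup via getattr -- same answer either way.
direct = timeit.timeit(lambda: order.total(), number=100_000)
reflective = timeit.timeit(lambda: getattr(order, "total")(), number=100_000)

assert order.total() == getattr(order, "total")() == 42
print(f"direct: {direct:.4f}s, reflective: {reflective:.4f}s")
```

A measured cost per call is a number that can be weighed against robustness and decoupling; “it’s not scalable” is not.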
3. It will push us beyond our deadline.
“The solution you have provided is all well and good, and I mean neither to question your integrity nor your intelligence, but given the fact that it is not maintainable and not scalable, I fear that trying to implement it will push us beyond our deadline.” I’ve never worked on a project that wasn’t “time sensitive” and rarely on one that wasn’t needed “yesterday”. There’s no better way to kill an idea, even when it comes out of the mouth of someone who refuses to say definitively when a project will in fact be completed, than to say that it will push us past our deadline. I’ve seen this used when determining which architecture to use. I’ve even seen it used in determining which textbox control to use. If you ever find yourself in a position where you have an idea that is competing with someone else’s idea, you can quickly sweep your adversary’s idea aside by invoking this occult phrase: It will push us beyond our deadline.
Why are these magic phrases never tested? Why are they impervious to standards of verifiability traditionally expected in other fields? The reason is simple. Software development is always seen, from the outside, as a kind of magic, and any successful project has at its heart some secret sauce, some magic code, that makes it all possible.
This is the magic unicorn principle. At the heart of any successful application stands a magic unicorn. You feed it data, no matter how disorganized or moldy, and it comes out the other end a rainbow. Data in. Rainbows out. It’s beautiful in its simplicity.
In my next post, I will demonstrate how to build a DIRO magic assembly. Stay tuned …