In his preface to The Sublime Object of Ideology, Slavoj Zizek writes:
“When a discipline is in crisis, attempts are made to change or supplement its theses within the terms of its basic framework – a procedure one might call ‘Ptolemization’ (since when data poured in which clashed with Ptolemy’s earth-centered astronomy, his partisans introduced additional complications to account for the anomalies). But the true ‘Copernican’ revolution takes place when, instead of just adding complications and changing minor premises, the basic framework itself undergoes a transformation. So, when we are dealing with a self-professed ‘scientific revolution’, the question to ask is always: is this truly a Copernican revolution, or merely a Ptolemization of the old paradigm?”
In gaming circles, Zizek’s distinction between Ptolemization and Copernican revolution resembles the frequent debates about whether a new shooter or new graphics engine is merely an ‘evolution’ in the gaming industry or an honest-to-goodness ‘revolution’ – terms meant to indicate whether it is a small step for man or a giant leap for gamers. When used as a measure of magnitude, however, the apposite noun depends heavily on one’s perspective, and with enough perspective one can easily see any video game as merely a Ptolemization of Japanese arcade games from the ’80s. (For instance, isn’t CliffyB’s Gears of War franchise – with all the underground battles and monsters jumping out at you – merely a refinement of Namco’s Dig Dug?)
When Zizek writes about Ptolemization and revolutions, he does so with Thomas Kuhn’s 1962 book The Structure of Scientific Revolutions as a backdrop. Contrary to the popular conception of scientific endeavor as steady, progressive movement, Kuhn proposed that major breakthroughs in science are marked by discontinuities – moments when science simply has to reboot itself. Among the ‘paradigm shifts’ Kuhn discusses are the Copernican revolution, the displacement of phlogiston theory by the discovery of oxygen, and the discovery of X-rays. In each case, according to Kuhn, our worldview changed, and those who came after the change could no longer fully understand those who came before.
Thoughts of revolution were much on my mind at the recent Visual Studio 2010 Ultimate event in Atlanta, where I had the opportunity to listen to Peter Provost and David Scruggs of Microsoft talk about the new development tool – and even to present some of the new features myself. Peter pointed out that this is the largest overhaul of the IDE since the original release of Visual Studio .NET. Rewriting major portions of the IDE in WPF is certainly a big deal, but clearly evolutionary. There are several features I think of as revolutionary, however, either because they will change the way we develop software or, in some cases, simply because they are unexpected.
- IntelliTrace (aka the Historical Debugger) stands out as the most remarkable breakthrough in Visual Studio 2010. It is essentially a flight recorder for a live debug session: IntelliTrace logs call stack, variable, event, and SQL call information (along with a host of other data) during debugging. This, in turn, allows the developer not only to work forward from a breakpoint but also to work backward through the process flow to track down a bug. A truly outstanding feature is that, on the QA side with a special version of VS, manual tests can be configured to generate an IntelliTrace log, which can then be uploaded as an attachment to a TFS bug item. When the developer opens the new bug item, she can play back the IntelliTrace log to see what was happening on the QA tester’s machine and walk through this recording of the debug session. For more about IntelliTrace, see John Robbins’ blog.
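IntelliTrace’s internals aren’t public, but the “flight recorder” idea is easy to illustrate in miniature. The Python sketch below is purely an analogy – `FlightRecorder` and `buggy_average` are my own inventions, not anything from Visual Studio. It records each executed line along with a snapshot of local variables, so that after a crash you can walk backward through the history to see how the bad state arose:

```python
import sys

class FlightRecorder:
    """Toy 'historical debugger': records every line executed plus a
    snapshot of local variables, so a failure can be replayed backward."""

    def __init__(self):
        self.events = []   # (function name, line number, locals) triples

    def _trace(self, frame, event, arg):
        if event == "line":
            self.events.append((frame.f_code.co_name,
                                frame.f_lineno,
                                dict(frame.f_locals)))
        return self._trace   # keep tracing nested frames

    def record(self, func, *args):
        sys.settrace(self._trace)
        try:
            return func(*args)
        finally:
            sys.settrace(None)

    def rewind(self):
        """Walk the recorded session backward, most recent event first."""
        return reversed(self.events)

def buggy_average(values):
    total = sum(values)
    count = len(values)
    return total / count   # blows up when values is empty

recorder = FlightRecorder()
try:
    recorder.record(buggy_average, [])
except ZeroDivisionError:
    # Work backward from the failure to see how count became 0.
    for name, line, local_vars in recorder.rewind():
        print(name, line, local_vars)
```

The point of the analogy: instead of reproducing the bug and single-stepping forward, the developer receives the whole recording and steps backward from the failure.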
- As I hinted above, Microsoft now offers a fourth Visual Studio SKU called Microsoft Test and Lab Manager (also available as part of Visual Studio 2010 Ultimate). The key feature in MTLM, for me, is the concept of a Test Case. A test case is equivalent to a use case, except that there is now tooling built around it (no more writing use cases in Word) and the test case is stored in TFS. Additionally, there is a special IDE built for running test cases that presents the list of test case steps, each of which can be marked pass/fail as the tester manually works through the test case. Even better, screenshots of the application can be taken at any time, and a live video recording can be made of the entire manual test, along with the IntelliTrace log described above. All of this metadata is attached to the bug item entered in TFS, together with the specs of the machine the tester is running on, and made available to the developer who must eventually track down the bug. The way this is explained is that test automation up to this point has covered only about 30% of the testing that actually occurs (mostly automated unit tests); MTLM covers the remaining 70% by providing tooling around manual testing – which is where most good testing actually happens. For more info, see the MTLM team blog.
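The Test Case concept itself is simple to model. As a rough sketch – plain Python, with class names and fields of my own invention rather than MTLM’s actual schema – a test case is an ordered list of steps the tester marks pass/fail, plus attachments (screenshots, trace logs) that travel with any resulting bug item:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestStep:
    action: str
    expected: str
    passed: Optional[bool] = None   # None = not yet executed

@dataclass
class TestCase:
    title: str
    steps: List[TestStep] = field(default_factory=list)
    attachments: List[str] = field(default_factory=list)  # screenshots, logs

    def mark(self, index, passed, screenshot=None):
        """Record a pass/fail verdict for one step, optionally with evidence."""
        self.steps[index].passed = passed
        if screenshot:
            self.attachments.append(screenshot)

    @property
    def verdict(self):
        if any(s.passed is False for s in self.steps):
            return "fail"
        if all(s.passed for s in self.steps):
            return "pass"
        return "in progress"

tc = TestCase("Log off returns to login page", [
    TestStep("Enter valid credentials and click Log In", "Home page loads"),
    TestStep("Click Log Off", "Login page is shown"),
])
tc.mark(0, True)
tc.mark(1, False, screenshot="logoff_hang.png")
print(tc.verdict)   # "fail"
```

What MTLM adds on top of this bare structure is exactly the tooling: the steps live in TFS, and the failing step’s screenshot, video, and IntelliTrace log ride along on the bug item automatically.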
- Just to round out the testing features, there is also a new unit test template in Visual Studio 2010 called the Coded UI Test. Creating a new unit test from this template fires up a wizard that lets the developer record a manual UI test, which gets interpreted as coded steps. These steps are gen’d into the actual unit test either as UI hooks or as XY-coordinate mouse events, depending on what is being tested. Additionally, assertions can be inserted into the test involving UI elements (e.g. text) one expects to see in the app after a series of steps is performed. The Coded UI Test can then be run like any other unit test through the IDE, or even added to the continuous build process. Finally, successful test cases verified by a tester can also be gen’d into Coded UI Tests. This may be more gee-whiz than actually practical, but simply walking through a few of these tests is fascinating and even fun. For more, see the MSDN documentation.
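Coded UI Tests are a C#/.NET feature, but the record-and-replay idea behind them can be sketched generically. In the toy Python below – `FakeApp`, `RECORDED_STEPS`, and the greeting behavior are all invented for illustration – the “recorded” manual steps are stored as data, replayed against the UI inside an ordinary unit test, and finished off with an assertion on a UI element:

```python
import unittest

class FakeApp:
    """Stand-in UI: a dict of named controls holding text values."""
    def __init__(self):
        self.controls = {"Name": "", "Greeting": ""}
    def type_into(self, control, text):
        self.controls[control] = text
    def click(self, button):
        if button == "Greet":
            self.controls["Greeting"] = "Hello, " + self.controls["Name"]

# "Recorded" steps, as a coded-UI-test generator might emit them:
RECORDED_STEPS = [
    ("type_into", "Name", "Ada"),
    ("click", "Greet"),
]

class CodedUITestSketch(unittest.TestCase):
    def test_greeting(self):
        app = FakeApp()
        # Replay the recorded manual steps against the UI.
        for method, *args in RECORDED_STEPS:
            getattr(app, method)(*args)
        # Assertion on a UI element, inserted by the test author.
        self.assertEqual(app.controls["Greeting"], "Hello, Ada")

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(CodedUITestSketch))
```

The real feature records against live UI automation hooks (or raw mouse coordinates) rather than a fake object, but the shape is the same: steps captured once by hand, then replayed unattended in the build.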
- Extensibility – Visual Studio now has something called an Extension Manager that lets you browse http://visualstudiogallery.com/ and automatically install add-ins (or, more properly, “extensions”). This only works, of course, if people are creating lots of extensions for VS. Fortunately, thanks to Peter’s team, a lot of thought has gone into the Visual Studio extensibility and automation model to make developing extensions both easier than it was in VS2008 and much more powerful. Link.
- Architecture Tools – Code visualization has taken a great step forward in Visual Studio 2010. You can now generate not only class diagrams but also sequence diagrams, use case diagrams, component diagrams, and activity diagrams right from the source code. Even class diagrams have a number of visualization options that let you see how your classes work together, where to find possible bottlenecks, which classes are the most referenced, and a host of other perspectives that the sort of people who like staring at class diagrams will love. The piece I’m really impressed by is the generation of sequence diagrams from source code. One right-clicks on a particular method to get the generation started. As I understand it, the historical debugger is actually used behind the scenes to provide flow information, which is then analyzed to create the diagram. I like this for two reasons. First, I hate writing sequence diagrams by hand – it’s just really hard. Second, it’s a great diagnostic tool for understanding what the code is doing and, in some cases, what it is doing wrong.
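I don’t know exactly how Visual Studio’s generator works under the hood, but the trace-then-diagram idea can be demonstrated in miniature. The Python sketch below – the `checkout`/`price`/`charge` functions are invented for illustration – profiles a run, records who called whom, and prints a crude, indented text “sequence diagram” from the captured flow:

```python
import sys

calls = []              # (depth, caller, callee) triples captured at runtime
state = {"depth": 0}

def profiler(frame, event, arg):
    """Record Python-level call flow; ignore C-level calls and returns."""
    if event == "call":
        caller = frame.f_back.f_code.co_name if frame.f_back else "<top>"
        calls.append((state["depth"], caller, frame.f_code.co_name))
        state["depth"] += 1
    elif event == "return":
        state["depth"] -= 1

def checkout(cart):
    total = price(cart)
    charge(total)

def price(cart):
    return sum(cart)

def charge(total):
    pass   # imagine a payment gateway call here

sys.setprofile(profiler)
checkout([10, 5])
sys.setprofile(None)

# Emit a crude text "sequence diagram" of the run:
for depth, caller, callee in calls:
    print("  " * depth + f"{caller} -> {callee}")
```

The diagram is derived entirely from observed runtime flow rather than static analysis – which, if my understanding is right, is the same trick the IDE plays with its trace data, just rendered as a proper UML sequence diagram instead of indented text.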
There is a story I borrowed long ago from the Library of Babel and forgot to return – I believe it was by Jorge Luis Borges – about a young revolutionary who leads a small band in an attempt to overthrow the current regime. As they sneak up on the house of the generalissimo, the revolutionary realizes that the generalissimo looks like an older version of himself, sounds like an older version of himself, in fact is an older version of himself. Through some strange loop in time, he has come upon his future self – his post-revolutionary self – and sees that he will become what he is attempting to overthrow.
This is the problem with revolutions – sometimes they produce no real change. Rocky Lhotka raised this specter in a talk he gave at the Atlanta Leading Edge User Group a few months ago; he suggested that even though our tools and methodologies have advanced by leaps and bounds over the past decade, it still takes just as long to write an application today as it did in the year 2000. No doubt we are writing better applications, and arguably better-looking applications – but why does it still take so long, when the great promise of patterns and tooling has always been that we will be able to get applications to market faster?
This is akin to the Scandal of Philosophy discussed in intellectual circles: why, after 2,500 years of philosophizing, are we no closer to answering basic questions such as What is Virtue? What is the good life? What happens to us when we die?
[Abrupt Segue] – Visual Studio 2010, of course, won’t be answering any of these questions, and the resolution of whether this is a revolutionary or an evolutionary change I leave to the reader. It does promise, however, to make developers more productive and make the task of developing software much more interesting.