Category Archives: Ideological State Apparatuses

Come hear me speak about Mixed Reality at Dragon Con 2015


I’ve been invited by the Robotics and Maker Track to speak about near future technologies at Dragon Con this year. While the title of the talk is “Microsoft Kinect and HoloLens,” I’ll actually be talking more broadly about 3D sensors like Kinect and the Orbbec Astra, Virtual Reality with the Oculus Rift and HTC Vive as well as Augmented Reality with HoloLens and Magic Leap. I will cover how these technologies will shape our lives and potentially change our world over the next five years.

I am honored to have been asked to be a panelist at Dragon Con on technology I am passionate about and that has been a large part of my life and work over the past several years.

I should add that being a panelist at Dragon Con is a nerd and fan’s freakin’ dream come true for me. Insanely so. Hopefully I’ll be able to stay cool enough to get through all the material I have on our collective sci fi future.


I will cover each technology and the devices coming out in the areas of 3D sensors, virtual reality and augmented reality. I’ll discuss their potential impact as well as some of their history. I’ll delve into some of the underlying technical and commercial challenges that face each. I’ll bring lots of Kinect and Oculus demos (not allowed to show HoloLens for now, unfortunately) and will also provide practical advice on how to experience these technologies as a consumer as well as a developer in 2016.


My panel is on Sunday, Sept 6 at 2:30 in Savannah rooms 1, 2 and 3 in the Sheraton. Please come say hi!


The HoloCoder’s Resume


In an ideal world, the resume is an advertisement for our capabilities and the interview process is an audit of those claims. Many factors have contributed to complicating what should be a simple process.

 


The first is the rise of professional IT recruiters and the automation of the resume process. Recruiters bring a lot to the game, offering a wider selection of IT job candidates to hiring companies, on the one hand, and providing a wider selection of jobs to job hunters, on the other. Automation requires standardization, however, and this has led to an overuse of key search terms when matching candidates to positions. The process begins with job specs from the hiring company — which, parenthetically, often have little to do with the actual job itself and highlight the frequent disconnect between IT departments and HR departments. A naive job hunter would try to describe their actual experience, which typically will not match the job spec as written by HR. At this point the recruiter helps the job hunter modify the details of her resume to match the template provided by the hiring company, injecting key buzzwords into the resume and moving them to the top. “I’m sorry but Lolita, Inc will never hire you unless you have synesthesia listed in your job history. You do have experience with synesthesia, don’t you?”

 


All of this gerrymandering is required in order to get to the next step, the job interview. Unfortunately, the people doing the job interview have little confidence in the resume as a vehicle for accurately describing a candidate’s actual abilities. First of all, they know that recruiters have already gone over it to eliminate useful information and replace it with keywords. Next, the interviewers typically haven’t actually seen the HR job specs and do not understand what kind of role they are hiring for. Finally, none of the interviewers have any particular training in doing job interviews or any particular skill in ascertaining what a candidate knows. In short, the interviewer doesn’t know what he’s looking for and wouldn’t know how to get it if he did.


A savvy interviewer will probably realize that he is looking for the sort of generalist that Joel Spolsky describes as “smart and gets things done,” but how do you interview for that? The tools the interviewer is provided with are not generic but instead target highly specific technology skills. At some point, this impedance mismatch between technology-specific interview questions on the one hand and a desire to hire generalists on the other (technology, after all, simply changes too quickly to look for only one skillset) led to an increased reliance on behavioral questions and eventually Google-style language games. Neither of these, it turns out, particularly helps in hiring good candidates.


Once we historically severed any attempt to match interview questions to actual skills, the IT interview process was allowed to become a free-floating hermeneutic exercise. Abstruse but non-specific questions involving principles and design patterns have taken over the process. This has led to two strange outcomes. On the one hand, job applicants are now required to be fluent in technical information they will never actually use in their jobs. Literary awareness of ten-year-old blog posts by Martin Fowler is more important than actually knowing how to get things done. And if the job interviewer exhibits any self-awareness when he turns down a candidate for not being clear on the justified uses of the CQRS pattern (there are none), it will not be because the candidate didn’t know something important for the job but rather because the candidate was unwilling to play the software architecture language game, and anyone unwilling to play the game is likely going to be a poor cultural fit.

The other consequence of an increased reliance on abstruse and non-essential IT knowledge has been the rise of the Architect across the industry. The IT industry has created a class of software developers who cannot actually develop software but instead specialize in telling other people what is wrong with their code. The architect is a specialization that probably indicates a deviant phase in the software industry – but at the same time it is a natural outcome of our IT job spec – resume – interview process. The skills of a modern software architect – knowledge of abstruse information and jargon, often combined with an inability to get things done – are what we currently look for through our IT cargo cult hiring rituals.


This distinction between the ritual of IT hiring and the actual goals of IT hiring becomes most apparent when we look for specific as opposed to generalist skills. We hire generalists to be on staff over a long period. We hire specialists to perform difficult but real tasks that can eventually be handed over to our generalists – when we need to get something specific done.

Which gets us to the point of this post. What are the skills we should look for when hiring for a HoloLens developer? And what are the skills a HoloLens developer should be highlighting on her resume?

At this point in time, when there is still no SDK generally available for the HoloLens and all HoloLens coders are working for Microsoft and under various NDAs, it is hard to say. Fortunately, important clues have been provided by the recent announcement of the first consulting agency dedicated to the HoloLens and co-founded by someone who has been working on HoloLens applications for Microsoft over the past year. The company Object Theory was just started by Michael Hoffman and Raven Zachary and they threw up a website to advertise this new venture.

Among the tasks involved in creating this sort of extremely specialized website is explaining what capabilities you offer. First, they offer experience since Hoffman has worked on several of the demos that Microsoft has been exhibiting at conferences and in promotional videos. But is this enough of a differentiator? What skills do they have to offer to a company looking to build a HoloLens application?

This is part of the fascination of their “Work” page. It cannot describe any actual work since the company just started and hasn’t technically done any technical work. Instead, it provides a list of capabilities that look amazingly like resume keywords – but different from any keywords you may have come across:

 

          • Entirely new Natural User Interfaces (NUI)
          • Surface reconstruction and object persistence
          • 3D Spatial HRTF audio
          • Mesh reduction, culling and optimization
          • Baked shadows and ambient occlusion
          • UV mapping
          • Optimized render shaders
          • Efficient WiFi connectivity to back-end services
          • Unity and DirectX
          • Windows 10 APIs

 

These, in fact, are probably the sorts of skills you should be putting on your resume – or learning about in order to put on your resume – if getting a job programming HoloLens is your goal.

The verso side of this coin is that the list can also be turned into a great set of interview questions for someone thinking of hiring for HoloLens development, for instance:

Explain the concept of NUI to me.

Tell me about your experience with surface reconstruction and object persistence.

What is 3D spatial HRTF audio and why is it important for engineering HoloLens apps?

What are mesh reduction, mesh culling and mesh optimization?

Do you know anything about baked shadows and ambient occlusion?

Describe how you would go about performing UV mapping.

What are optimized render shaders and when would you need them?

How does the HoloLens communicate with external services such as a database?

What are the advantages and disadvantages of developing in Unity vs DirectX?

Describe the Windows 10 APIs that are used in HoloLens application development.

 

Then again, maybe these questions are a bit too abstruse?

The Javascript Cafeteria

[image: cafeteria, 1950]

The Nobel laureate and author Isaac Bashevis Singer tells an anecdote about his early days in America and his first encounter with an American style cafeteria.  He saw lots of people walking around with trays of food but none of them paid him any attention.  He thought that this must be the world’s most devilish restaurant, full of waiters but none willing to seat him.

The current world of javascript libraries seems like that sometimes.  New libraries pop up all the time and the ones you might have used a few months ago have become obsolete while you had your back turned.  Additionally you have to find a way to pick through the dim sum cart of libraries to find the complete set you want to consume. 

But maybe dim sum cart is also a poor metaphor since you can get in trouble that way, trying to combine things that do the same thing like knockout and backbone, or angular and asp.net mvc (<—that was a joke! but not really).  It’s actually more like a prix fixe menu where you pick one item from the list of appetizers, one from the main courses and finally one from desserts.

This may seem a lot like the problem of the firehose of technology but there is a difference and a silver lining.  It used to be that if you didn’t jump on a technology when it first came out (and there was a bit of a gamble to this, as witnessed by the devs who jumped on Silverlight – mea culpa) you would just fall behind and have a very hard time ever becoming an expert.  In the contemporary web dev climate, you can actually wait a little longer and that library you never got around to learning will just disappear. 

Even better, if a library has already been out for a few months, you can simply strategically ignore it and pick the one that came out last week.  The impostor syndrome epidemic (seriously, it’s like a nightmare version of Spartacus with everyone coming forward and insisting they feel like a phony – man up, dawg) goes away since anyone, even the retiring Visual Cobol developer, can become an expert living on the bleeding edge with just a little bit of Adderall assisted concentration.  True, it also means each of us is now competing with precocious 16 year olds for salaries, but such is the way of things.

Obviously we can take for granted that we are using JSON rather than XML for transport, and REST rather than SOAP for calls.  XML and SOAP are like going to a restaurant and finding that the chef is still adding fried eggs or kale to his dishes – or even foam of asparagus. 
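To put that default in code, here is a minimal sketch of JSON over REST using jQuery’s $.getJSON – the /api/menu endpoint and the shape of the response are invented for the example:

    // Fetch the menu as JSON from a REST endpoint -- no XML parsing, no SOAP envelope.
    // The endpoint and the response shape are made up for illustration.
    $.getJSON('/api/menu')
      .done(function (menu) {
        menu.appetizers.forEach(function (item) {
          console.log(item.name);   // the payload is already a plain JavaScript object
        });
      })
      .fail(function (jqXHR, textStatus) {
        console.error('Request failed: ' + textStatus);
      });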

[image: moto, chicago]

Just choose one item from column A, then another from column B, and so on.  I can’t give you any advice – who has time to actually evaluate these libraries before they become obsolete?  You’ll have to just do a Google search like everyone else and see what Jim-Bob or cyberdev2000 thinks about it – kind of like relying on Yelp to pick a restaurant.  Arrows below indicate provenance.

Appetizers (javascript libraries):
jquery
prototype

Corso Secundo (visual effects):
jquery ui -> jquery
bootstrap -> jquery
script.aculo.us -> prototype

Soups and Salads (utility libraries):
underscore
lazy.js
Lo-Dash -> underscore

Breeze

Amuse Bouche (templating):
{{mustache}}
handlebars.js -> {{mustache}}

Main Courses (model binding frameworks):
angularjs
backbone.js -> underscore
knockout.js
ember.js -> handlebars.js
marionette.js -> backbone.js
CanJs

Wine Pairings (network libraries):
node.js
edge.js -> node.js
Go

Sides:
CoffeeScript
bower -> node.js

Desserts (polyfills):
modernizr
Mozilla Brick
polymer

Actually, I can help a little.  If you ask today’s waiter to surprise you (and we’re talking July of 2014 here), he’d probably bring you jquery, Lo-Dash, angularjs, Go, bower, modernizr.  YMMV.
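For what it’s worth, here is a minimal sketch of how that particular meal might hang together: an AngularJS 1.x controller leaning on Lo-Dash for the utility work.  The module name, controller and data are invented for the example.

    // A sketch of the July-2014 "surprise me" stack: AngularJS for binding, Lo-Dash for utilities.
    // Module, controller and data below are invented for the example.
    angular.module('cafeteria', [])
      .controller('MenuCtrl', ['$scope', function ($scope) {
        var picks = [
          { name: 'angularjs', course: 'main course' },
          { name: 'Lo-Dash',   course: 'soup' },
          { name: 'jquery',    course: 'appetizer' }
        ];
        // Lo-Dash does the sorting; Angular binds the result to an ng-repeat in the view.
        $scope.picks = _.sortBy(picks, 'course');
      }]);

The markup side is just an ng-app="cafeteria" attribute on the page and an ng-repeat over picks; bower pulls the libraries down and modernizr tells you which browser features you will need polyfills for.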

The Open Office and Panopticism

[image: open office plan]

The magazine Fast Company has recently been on a tear critiquing the modern “open office” design ubiquitous in white collar businesses.  Several studies have found that marginal improvements in communication are offset by stress and productivity loss due to noise and lack of privacy.  Satisfaction levels for people who work in offices with doors that close are significantly higher.

How did we get to this place?  Open office plans arose sometime in the late 90’s as a response to the jokes about cubicle culture and densification, which dehumanized the office worker while squeezing every last square foot out of usable office space. Open plans were intended to be more humanizing and to encourage social interactions, bringing the serendipity of water cooler conversation to the worker’s desk simply by lowering the height of cubicle walls and introducing a few plants.

Cubicles, in their turn, were also once seen as a humanizing and egalitarian effort.  Instead of low-valued employees being doubled or tripled up in fluorescent-lighted rooms while high-valued employees got more desirable windowed private offices, cubicles broke down the divide and gave more or less the same amount of space to middle managers as well as the people under them (corner offices still go to executives).  Moreover, to the extent that metaphors make up the furniture of our minds, we collectively moved away from the notion of smoky closed rooms as the space where decisions were made and generally redesigned our workspaces to emphasize transparency and equality.

This general trend towards greater and greater openness is captured in the name: “open office”.  Like some dystopic novel or Orwellian word game, we have somehow been placed in a position of seeking out and realizing our own discontent.  With only a little exaggeration, it resembles Michel Foucault’s notion of fascism as a force that leads us “to desire the very thing that dominates and exploits us.”  Fortunately for us, we’re only talking here about furniture fascism, and it’s only middle- and upper-middle-class white collar workers who are standing in for the exploited masses.

Even office workers have the right to have Foucault speak for them, however.  Were Foucault to perform a genealogical/archeological analysis of the problematic of the contemporary open plan office, it might go something like this:

The initial move involved a misdirection concerned with repression.  Middle managers were seen as repressing their employees with a feudal style architecture that crowded office workers into shared spaces while they were allowed the luxury of having their own space.  Because of the preponderance of this repressive hypothesis, the ur-father from Freud’s Civilization and Its Discontents, now embodied in the middle-manager, could only be brought down by giving everyone her own version of the manager’s office: the modern cubicle.

There are two sides to these sorts of power dynamics, though.  On their side, managers were driven to the new office plans by their own bad conscience and desire not to be seen as authoritarian figures – they, as much as anyone else, bought into the repressive hypothesis.  On the other hand, bureaucratic movement requires expediency and expertise to justify change – this was provided by consultants more than happy to explain the cost-cutting that would be afforded by replacing office walls with removable cubicle walls.  On top of this, they touted the benefits of being able to put up new cubicles or remove old ones in response to fluctuations in the workforce.

[image: Dilbert]

The argument from economic necessity led to something Scott Adams, the creator of Dilbert, identified as “densification.”  Over time and as if by a natural law, cubicles became smaller and smaller.  Because the change was gradual it was difficult to notice.  Nevertheless, the cost savings produced by “densification” – a cost savings eerily reminiscent of Marx’s analysis of surplus value – could be touted each quarter as middle managers and executives justified their own value to the company.

When employees began to complain more regularly about densification as they stood around the water cooler, it was quickly observed that the problem could be managed with a trompe l’oeil.  The gradual densification could no longer be plausibly denied once cubicle walls had reached the point where they were taller than they were wide.  This awareness of densification, it was discovered, could be resolved by simply making the walls shorter and consequently making the perspectival distortion caused by densification less obvious.  All one had to do then was bring in a few architects to pretty things up and provide an aesthetic explanation for the changes.

Hence was born the movement toward greater openness and collaboration – as well as the eventual removal of water coolers.  As by-products of this transition, we also saw the introduction of headphones into the workplace, the rise of music players, the increase in the fortunes of Apple, the proliferation of online music streaming services and eventually the necessity of workplace broadband, now considered in some circles a human right, to pump all this music into our headphones to drown out the conversations of our neighbors in the open office.

What caused all this to happen?  Recall that for Foucault the repressive hypothesis is at best false and at worst a misdirection.  Management did not get together and plan out a way to decrease productivity in exchange for less expensive office space – all while convincing workers that they were getting one over on management by being allowed to spend more time talking (and avoiding hearing other people talk) rather than working.

[image: orderly rows]

Instead Foucault identifies a general trend toward scientific regularity and the privileging of visual metaphors, a trend he calls the “empire of the gaze” and, eventually, “panopticism”.  Let’s try to make this plausible and show how it is relevant to the rise of the open office.

In his book Discipline and Punish, Foucault introduces the notion that modern civilization, built on firm scientific principles, has had regulation and observation built into it on a cultural level.  As an example, he cites the development of geometrical plans for the laying out of military camps starting in the 17th century.  Military manuals from that time spell out explicitly how camps were to be laid out, how far tents needed to be from one another, how high they must be, etc.  The goal of these standardized layouts was to make the entire camp visible and an easy object of surveillance from a given point of view.  More importantly, soldiers were made to know, by the layout of the camps that they themselves built, that their conduct was being observed by their superiors and that they needed to fall in line, so to speak.

“For a long time this model of the camp, or at least its underlying principle, was found in urban development, in the construction of working-class housing estates, hospitals, asylums, prisons, schools: the spatial ‘nesting’ of hierarchized surveillance … The camp was to the rather shameful art of surveillance what the dark room was to the great science of optics.

“A whole problematic then develops: that of an architecture that is no longer built simply to be seen (as with the ostentation of palaces), or to observe  the external space (cf. the geometry of fortresses), but to permit an internal, articulated and detailed control – to render visible those who are inside it; in more general terms, an architecture that would operate to transform individuals: to act on those it shelters, to provide a hold on their conduct, to carry the effects of power right to them, to make it possible to know them, to alter them.”

The fault in each of these geometries is the point of view required to perform surveillance.  It is a weakness in the system that constantly draws attention to itself as the observer.  When a soldier in the camp knows who is observing him – that is, whose opinion matters most – he can choose to be obsequious to his officer, to buddy up to his officer, to flatter him, to bribe him, and in other ways undermine the surveillance culture that is being developed.  In this sort of scenario, the soldier merely has to “act” as if he is behaving, and only when he thinks someone is watching; whereas the true goal of a surveillance culture is to mold people to behave well all the time and to do so sincerely rather than merely as an act.

[image: panopticon]

Foucault finds the architectural fulfillment of this managerial vision in something known as the Panopticon.  The Panopticon is a concept for a prison designed by Jeremy Bentham, the father of utilitarianism.  The idea behind it was to have a prison designed in a ring so that every prisoner was constantly exposed to observation.  Additionally, there was a tower in the center of the ring that provided the only privacy available in the prison layout.  The tower housed guards, but inmates could never be sure how many were watching them at any time.  What is important in the design is that prisoners always feel as if they are being watched.  Constant surveillance of this sort, it was hoped, would cause prisoners to behave morally and hence undergo rehabilitation through self-discipline as well as punishment.  The Panopticon would put them on their best behavior.

How does this apply to the open office?  Just as there are design patterns in architecture – patterns that repeat themselves to the point that technicians can use them as guides for architectural design – there are also patterns in civilization.  These patterns mark epochs in culture. Thomas Kuhn, when discussing scientific revolutions, called them “paradigms” – from which we get the overused term “paradigm shift” that, technically, describes the transitions between scientific epochs.

For Foucault, the cultural epoch we are currently living through is ultimately one guided by the notion of surveillance.  Surveillance patterns inform our managerial practices as well as our modes of self-governance as a nation, our architecture as well as how we do interior decorating, our city planning as well as how we raise our children.  Surveillance entertainment, more commonly known as “reality television”, is a media staple.  And of course, surveillance design patterns inform our office spaces.

In discussing living in a surveillance society, in this particular time and place, it feels overly heavy handed to even link to articles about Edward Snowden, WikiLeaks, NSA spying or project PRISM.   These are the design patterns of a world we have simply learned to accept as a matter of course.  It is worth reflecting, however, that Foucault worked through his insights on surveillance and panopticism in the 60’s and 70’s; Discipline and Punish, in which he laid out these observations, was published in 1975.

The picture at the top of this post is of the office I work in.  It is an open office plan.  There happen to be offices with doors for managers.  Their office walls, however, as well as their doors are made of glass.  This allows management to more easily observe us, just as it allows us to more easily watch management.  It is the fulfillment of panopticism because it has no area for guards whatsoever – everyone inhabits the empire of the gaze.

The greatest office design innovation here at work is the pair of tiny rooms designated for nursing mothers.  They are the most used spaces – not because we have that many nursing mothers but rather because they are the only places in the office where people can hide.  This requires correction.

One Thumb Drive To Rule Them All


I currently have an HTC 8X Windows Phone on my desk which I think is one of the best smartphones on the market.  I also have a Surface tablet.  I have a fascinating little device called a Leap Motion sitting on my desk that detects finger gestures.  I also have three Kinect for Windows sensors arrayed around my desk in order to capture images from multiple directions, bullet-time style.

The thing that is most precious to me, however, is the 16 Gig Lexar jump drive someone bought for my dev/design group.  It is the fastest USB flash drive currently available.  When I described it to my wife, she said she didn’t realize that thumb drives came in different speeds.  After thinking it over, I realized that before using the Lexar, I hadn’t realized it either.

Or to be more accurate, I realized vaguely in my lizard brain that some thumb drives are slower than others, but I had no idea that some were faster than others.

And above all the fast thumb drives, there’s the Lexar, which feels like it is instantaneous.  For example, a colleague recently needed a copy of Visual Studio 2012 while we were in Manhattan for a retail show.  I put the 1.5 Gig ISO on my Lexar jump drive and he brought his laptop to my hotel room to copy the file over.  He thought he could get the copying started, we’d go to dinner, and hopefully it would be done by the time dinner was over.  But practically before he’d even touched the Lexar to his USB port … ziiiiiiiiiiiiiiiiiip … it was over.  The ISO file was on his hard drive.

I have to admit that I now have a problem even letting someone else use the 16 Gig Lexar – even though it is communal property – because I’m not sure I’ll get it back.  People in our group are constantly asking for the plastic container where we keep our various jump drives … but of course we all know what they are really looking for is one of the two 16 Gig Lexars we own.  Honestly, it’s starting to be a problem, and I’m tempted to just throw these thumb drives into a volcano somewhere.  It causes nothing but friction and jealousy on the team.

But at the same time, it is so beautiful and precious to me.  My colleague from New York was instantly won over and talked about the thumb drive for a half hour through dinner.  If you have a tech person you want to buy a nice present for – or if you are someone who needs a little self-care – treat yourself to something special.  They’re a little pricey, but even better than you can possibly imagine.

Today is the last day to innovate before tomorrow …

[This will be the last post before the Mayan apocalypse tomorrow.]

There have already been some very interesting blog posts on other sites predicting the trajectory of technology in 2013.  Worthy of special mention is this excellent overview from Frog Design as well as this one from PSFK.

An interesting feature of all these predictions is that they are an amalgamation of current business trends and futuristic American movies.  Sci-fi movies provide a direction while business (especially retail) provides the funding.  Think of it as a sort of merchandise-celluloidal complex creating our collective future.

The central flaw of practically all the predictions linked above is that they are heavily influenced by American science fiction.  American science fiction, however, is a mere shadow of and several decades behind Japanese science fiction.  I want to correct that today by basing my 2013 Technology Trends predictions on the advanced research occurring in the Japanese futuristic anime industry.


1. Giant Robots – 2013 will finally see the arrival of giant robots.  These should more properly be thought of as Gundam or giant suits of armor rather than robots (in the US our pre-occupation with robotics has seriously undermined our edge in this technological frontier) but for the sake of brevity I’ll continue to refer to them as robots for now.

Suidobashi Heavy Industries put their first Mech up for sale earlier this year (youtube link).  Over the next year, we can expect to see giant robots only getting bigger and dropping in price as they go into mass production. 

You should definitely trade in your Prius for one of these rugged commuter vehicles.  Not only will you be able to walk right over most commuter traffic, but you’ll also find your daily commute is much more enjoyable and comfortable as the anti-grav features kick in.  Giant Robots are also good for settling disputes with your neighbors and with your home owner’s association.  Even in rest mode, they become interesting conversation pieces when placed on your front lawn.

You can see a future vision video (much like Google’s vision video for Project Glass) on how giant robots will be used in the near future here.


2. Wormholes – Created by a race of aliens known as The Ancients, the wormhole travel system was discovered by the US Air Force about fifteen years ago and will be declassified and integrated by the TSA into commercial aviation routes in 2013.  Layovers on Beta Pictoris b and Kepler-42c are imminent.


3. Zombies – The US Cloning program will face a setback in 2013.  For the past five years, all major political figures as well as Hollywood A-List celebrities have been cloned in order to assure the smooth transition of power in government and entertainment.  Have you ever wondered how George Clooney stays so young?  Cloning.

In 2013, however, impurities introduced into the manufacture of clones (currently managed by the Umbrella Corporation) will turn clones of US House members into voracious and infectious brain eaters.  The US Congress will quickly turn the American populace into a rabid, ugly and mindless horde incapable of rational thought and obeying only raw emotions and appetites.

Only those who never leave their homes or watch cable news will be safe.


4. Tablets – I think tablets are going to be really big in 2013.  Over the past several years I’ve noticed a subtle trend in which cameras have been flattened out and had phone-calling capabilities added to them.  Why phone companies rather than camera companies are driving this is a mystery to me, but more power to them.  Between 2010 and today these cameras have been getting bigger and bigger and are now even touch-enabled!  In 2013, I predict the arrival of 22”, 32” and even 55” touch-enabled cameras called “tablets” that people can comfortably carry around with them in their cars (or in their giant robots).  These tablets can even double as mirrors or flashlights!

Concerning Old Books

There are few things sadder than a pile of old technical books. They live on dusty bookshelves and in torn cardboard boxes as testament to the many things we never accomplished in our lives. Some cover fads that came and went before we even had time to peruse their contents. Others cover supposedly essential topics we turned out to be able to program perfectly well without – topics like algebra, geometry and software methodology … [continued]

No Phone App Left Behind on Win8: A Proposal


As the Windows 8 tablet comes closer to reality, its success will depend on the amount of content it can provide out of the gate.  The Windows Phone Marketplace has tens of thousands of apps that should be leveraged to provide this content.  The main barrier to this is that the development stacks for Windows Phone and Windows 8 are significantly different.  A simple solution to bridge this gap is to enable Metro Tiles for Silverlight apps running in “classic” mode – something not currently easy to do on the Windows 8 platform.  Here is the background.

There has recently been a revival of chatter about the death of Silverlight revolving around the notion that Silverlight 5 will be the last version of the platform we are likely to see: http://www.zdnet.com/blog/microsoft/will-there-be-a-silverlight-6-and-does-it-matter/11180?tag=search-results-rivers;item2

At the same time, Hal Berenson has laid out an argument for moving the WinNT kernel (MinWin?) into Windows Phone 8, a suggestion backed up by Mary Jo Foley’s reporting that there is a Project Apollo to do something like this.

The main argument against the claims that Silverlight is dead concerns the fact that it is currently still at the heart of Windows Phone development.  If MinWin from the Windows 8 OS for tablets replaces the WinCE kernel on Windows Phones, however, what will be the fate of Silverlight then?

The 40,000 App Bulwark

The most important piece in this complex chess game Microsoft is playing with its various technology platforms – old, new and newer (remember when Silverlight was still bleeding edge just a few months ago?) – is neither at the kernel level nor at the API level nor even at the framework level.  The most important piece is the app marketplace Microsoft successfully built around the Windows Phone.  In a game in which almost any move seems possible, those apps must be protected at all cost.  40,000 apps, most of them built using Silverlight, cannot be thrown away.

At the same time, Windows Phone is a side-game for Microsoft.  In order to succeed in the smart phone market, Microsoft merely has to place.  The number three spot allows Microsoft to keep playing.

The main event, of course, is the tablet market.  Windows Phone can even be considered just a practice run for the arena where Microsoft really sees its future at stake.  The tablet market is make or break for Microsoft and its flagship product – its cash cow – Windows.

Fragmenting the app market into Silverlight on Windows Phone and WinRT’s three development platforms on Windows 8 seems nothing short of disastrous.  Microsoft needs those 40,000 apps as they launch their new tablet platform.  Without apps, all the innovations that are going into Windows 8 are practically meaningless.

My colleague at Razorfish, Wells Caughey, has recently written about his efforts to create live tiles for “classic” apps on the Windows 8 Developer Preview: http://emergingexperiences.com/2011/11/leveraging-the-windows-8-start-screen/ .  It’s hacky but works and allows several dozen of our apps written in WPF, Silverlight and even Flash to run inside the Metro environment on Win8.

What we learned from the exercise is that Microsoft has the ability to allow live tiles for classic apps if it wants to.  It currently does this for the classic Windows desktop, which runs as an app from the Metro desktop.
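For reference, here is roughly what the Metro side of a live tile update looks like in WinJS on Windows 8 – the surface that “classic” Silverlight and WPF apps currently have no sanctioned way to reach.  The tile text is invented for the example.

    // Updating this app's Start screen tile from a WinJS (Metro) app on Windows 8.
    // The proposal above amounts to letting "classic" apps reach this same surface.
    var notifications = Windows.UI.Notifications;

    // Grab a stock tile template and fill in its text field.
    var tileXml = notifications.TileUpdateManager.getTemplateContent(
        notifications.TileTemplateType.tileWideText03);
    tileXml.getElementsByTagName('text')[0].appendChild(
        tileXml.createTextNode('42 new phone apps ported today')); // sample text

    // Push the notification to the tile.
    var tileNotification = new notifications.TileNotification(tileXml);
    notifications.TileUpdateManager.createTileUpdaterForApplication().update(tileNotification);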

Were Microsoft to do this, they could easily gain 40,000 apps at the Windows 8 launch.  Silverlight for Phone apps are relatively easy to turn into regular Silverlight apps.  The process could be made even easier.

On top of that, developers already know how to write Metro-style apps using WPF, Silverlight and other tools.  Ever since the introduction of true multitouch capability in WPF 4 and multitouch controls for Silverlight WP7 development, this is what we have all been working on.

For the moment, however, Microsoft is still apparently pushing for people to learn their new development tools in order to program for Windows 8 Metro, and Windows Phone developers are being advised to learn WinJS and the currently somewhat anemic WinRT Xaml platform in order to port their apps.

This is all well and good, but why does Microsoft want to leave its greatest asset in the tablet market – its 40K phone apps – on the sidelines when enabling live tiles for these apps would immediately put them back in the game?

[note: Microsoft just broke the 40K milestone, so references to “the 30K app bulwark” have been edited to reflect this.]

 

Revolution, Evolution, Visual Studio 2010 and Borges


In his preface to The Sublime Object of Ideology Slavoj Zizek writes:

“When a discipline is in crisis, attempts are made to change or supplement its theses within the terms of its basic framework – a procedure one might call ‘Ptolemization’ (since when data poured in which clashed with Ptolemy’s earth-centered astronomy, his partisans introduced additional complications to account for the anomalies).  But the true ‘Copernican’ revolution takes place when, instead of just adding complications and changing minor premises, the basic framework itself undergoes a transformation.  So, when we are dealing with a self-professed ‘scientific revolution’, the question to ask is always: is this truly a Copernican revolution, or merely a Ptolemization of the old paradigm?”

In gaming circles, Zizek’s distinction between Ptolemization and Copernican revolution resembles the frequent debates about whether a new shooter or new graphics engine is merely an ‘evolution’ in the gaming industry or an honest-to-goodness ‘revolution’ – which terms are meant to indicate whether it is a small step for man or a giant leap for gamers.  When used as a measure of magnitude, however, the apposite noun is highly dependent on one’s perspective, and with enough perspective one can easily see any video game as merely a Ptolemization of Japanese arcade games from the 80’s.  (For instance, isn’t CliffyB’s Gears of War franchise — with all the underground battles and monsters jumping out at you — merely a refinement of Namco’s Dig Dug?)

When Zizek writes about Ptolemization and revolutions, he does so with Thomas Kuhn’s 1962 book The Structure of Scientific Revolutions as a backdrop.  Contrary to the popular conception of scientific endeavor as a steady progressive movement, Kuhn proposed that major breakthroughs in science are marked by discontinuities – moments when science simply has to reboot itself.  Professor Kuhn identifies three such ‘paradigm shifts’: the Copernican revolution, the displacement of phlogiston theory with the discovery of oxygen, and the discovery of X-rays.  In each case, according to Kuhn, our worldview changed, and those who came along after the change could no longer understand those who came before.

Thoughts of revolution were much on my mind at the recent Visual Studio 2010 Ultimate event in Atlanta, where I had the opportunity to listen to Peter Provost and David Scruggs of Microsoft talk about the new development tool – and even presented on some of the new features myself.  Peter pointed out that this was the largest overhaul of the IDE since the original release of Visual Studio .NET.  Rewriting major portions of the IDE using WPF is certainly a big deal, but clearly evolutionary.  There are several features that I think of as revolutionary, however, inasmuch as they will either change the way we develop software or, in some cases, because they are simply unexpected.

  • Intellitrace (aka the Historical Debugger) stands out as the most remarkable breakthrough in Visual Studio 2010.  It is a flight recorder for a live debug session.  Intellitrace basically logs callstack, variable, event, SQL call (as well as a host of other) information during debugging.  This, in turn, allows the developer to not only work forward from a breakpoint, but even work backwards through the process flow to track down a bug.  A truly outstanding feature is that, on the QA side with a special version of VS, manual tests can be configured to generate an Intellitrace log which can then be uploaded as an attachment to a TFS bug item.  When the developer opens up the new bug item, she will be able to run the Intellitrace log in order to see what was happening on the QA tester’s machine and walk through this recording of the debug session.  For more about Intellitrace, see John Robbins’ blog.
  • As I hinted at above, Microsoft now offers a fourth Visual Studio SKU called the Microsoft Test and Lab Manager (also available as part of Visual Studio 2010 Ultimate).  The key feature in MTLM, for me, is the concept of a Test Case.  A test case is equivalent to a use case, except that there is now tooling built around it (no more writing use cases in Word) and the test case is stored in TFS.  Additionally, there is a special IDE built for running test cases that provides a list of use case steps, each of which can be marked pass/fail as the tester manually works through the test case.  Even better, screenshots of the application can be taken at any time, and a live video recording can be made of the entire manual test along with the Intellitrace log described above.  All of this metadata is attached to the bug item which is entered in TFS along with the specs for the machine the tester is running on and made available to the developer who must eventually track down the bug.  The way this is explained is that testing automation up to this point has only covered 30% of the testing that actually occurs (mostly with automated unit tests).  MTLM covers the remaining 70% by providing tooling around manual testing – which is what most of good testing is about.  For more info, see the MTLM team blog.
  • Just to round out the testing features, there is also a new unit test template in Visual Studio 2010 called the Coded UI Test.  Creating a new unit test from this template will fire up a wizard that allows the developer to start a manual UI test which gets interpreted as coded steps.  These steps are gen’d into the actual unit test either as UI hooks or XY-coordinate mouse events depending on what is being tested.  Additionally, assertions can be inserted into the test involving UI elements (e.g. text) one expects to see in the app after a series of steps are performed.  The Coded UI Test can then be run like any other unit test through the IDE, or even added to the continuous build process.  Finally, successful use cases verified by a tester can also be gen’d into a Coded UI Test.  This may be more gee-whiz than actually practical, but simply walking through a few of these tests is fascinating and even fun.  For more, see this msdn documentation.
  • Extensibility – Visual Studio now has something called an Extension Manager that lets you browse http://visualstudiogallery.com/ and automatically install add-ins (or more properly, “extensions”).  This only works, of course, if people are creating lots of extensions for VS.  Fortunately, thanks to Peter’s team, a lot of thought has gone into the Visual Studio extensibility and automation model to make it both easier to develop extensions, compared to VS2008, and much more powerful. Link.


  • Architecture Tools – Code visualization has taken a great step forward in Visual Studio 2010. You can now generate not only class diagrams, but also sequence diagrams, use case diagrams, component diagrams and activity diagrams right from the source code.  Even class diagrams have a number of visualization options that allow you to see how your classes work together, where to find possible bottlenecks, which classes are the most referenced and a host of other perspectives that the sort of people who like staring at class diagrams will love.  The piece I’m really impressed by is the generation of sequence diagrams from source code.  One right clicks on a particular method in order to get the generation started.  As I understand it, the historical debugger is actually used behind the scenes in order to provide flow information that is then analyzed in order to create the diagram.  I like this for two reasons.  First, I hate actually writing sequence diagrams.  It’s just really hard.  Second, it’s a great diagnostic tool for understanding what the code is doing and, in some cases, what it is doing wrong.

There is a story I borrowed long ago from the Library of Babel and forgot to return – I believe it was by Jorge Luis Borges – about a young revolutionary who leads a small band in an attempt to overthrow the current regime.  As they sneak up on the house of the generalissimo, the revolutionary realizes that the generalissimo looks like an older version of himself, sounds like an older version of himself, in fact is an older version of himself.  Through some strange loop in time, he has come upon his future self – his post-revolutionary self – and sees that he will become what he is attempting to overthrow.

This is the problem with revolutions — revolutions sometimes produce no real change.  Rocky Lhotka raised this specter in a talk he gave at the Atlanta Leading Edge User Group a few months ago; he suggested that even though our tools and methodologies have advanced by leaps and bounds over the past decade, it still takes just as long to write an application today as it did in the year 2000. No doubt we are writing better applications, and arguably better looking applications – but why does it still take so long when the great promise of patterns and tooling has always been that we will be able to get applications to market faster?

This is akin to the Scandal of Philosophy discussed in intellectual circles.  Why, after 2,500 years of philosophizing, are we no closer to answering the basic questions such as What is Virtue?  What is the good life?  What happens to us when we die?

[Abrupt Segue] – Visual Studio 2010, of course, won’t be answering any of these questions, and the resolution of whether this is a revolutionary or an evolutionary change I leave to the reader.  It does promise, however, to make developers more productive and make the task of developing software much more interesting.

What can one do with Silverlight: Part deux

Corey Schuman, Roger Peters and Mason Brown – whom many of you met at the Atlanta Silverlight Firestarter – have been under wraps for several months working on a project for IQ Interactive they repeatedly insisted they couldn’t tell me about.

Now that the beta of My Health Info on MSN has been published, not only do I finally get to see what they have been working on but I also get to share it with you.

My Health Info is an aggregator of sorts for personal medical information – a tool to help the user keep track of her personal medical history.  Unlike other portals that support widgets, however, this one is built using Silverlight.

My Health Info is an interesting alternative to the Ajax-based web portal solutions we typically see and serves as a good starting point for anyone looking to combine the “portal” concept with Silverlight technology.  The Silverlight animations as one navigates through the application are especially nice; they strike the appropriate balance between the attractive and the distracting – between cool and cloying.