The Next Book

The development community deserves a great book on the Kinect 2 sensor. Sadly, I no longer feel I am the person to write that book. Instead, I am abandoning the Kinect book project I’ve been working on, off and on, over the past year in order to devote myself to a book on the Microsoft holographic computing platform and the HoloLens SDK. I will be reworking the material I’ve collected so far for the Kinect book as blog posts over the next couple of months.

As anyone who follows this blog will know, my imagination has of late been captivated and ensorcelled by augmented reality scenarios. The book I intend to write is not just a how-to guide, however. While I recognize the folly of this, my intention is to write something that is part technical manual, part design guide, part math tutorial, part travel guide and part cookbook. While working on the Kinect book I came to realize that it is impossible to talk about gestural computing without entering into a dialog with Maurice Merleau-Ponty’s Phenomenology of Perception and Umberto Eco’s A Theory of Semiotics. At the same time, a good book on future technologies should also cover the renaissance in theories of consciousness that occurred in the mid-90s and culminated with David Chalmers’ masterwork The Conscious Mind. Descartes, Bergson, Deleuze, Guattari and Baudrillard obviously cannot be overlooked either in a book dealing with the topic of the virtual, though I can perhaps elide them a bit.

A contemporary book on technology can no longer stay within the narrow limits of a single technology, as was common 10 or so years ago. Things move at too fast a pace, and there are so many different ways to accomplish a given task that choosing between them depends not only on that old saw, ‘the right tool for the job,’ but also on taste, extended community and prior knowledge. Writing a book on augmented reality technology, even one that sticks to a single device like the HoloLens, will require covering, and uncovering for the uninitiated, such wonderful platforms as openFrameworks, Cinder, Arduino, Unity, the Unreal Engine and WPF. It will have to cover C#, since that is by and large the preferred language in the Microsoft world, but also help C# developers overcome their fear of modern C++ and provide a roadmap from one to the other. It will also need to expose the underlying mathematics that developers need to grasp in order to work in a 3D world – and, astonishingly, most software developers know very little math.
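
To give a flavor of the kind of math I have in mind, here is a minimal sketch in C# using the System.Numerics types that ship with .NET. The scenario and the numbers in it are invented purely for illustration: place a hologram a meter in front of the user, ask whether the user is looking at it, and rotate its offset about the vertical axis.

    // A hypothetical head pose and hologram, invented for this example.
    using System;
    using System.Numerics;

    class GazeMath
    {
        static void Main()
        {
            // The user's head pose: a position and a normalized forward direction.
            Vector3 headPosition = new Vector3(0, 1.6f, 0);               // eye height in meters
            Vector3 headForward  = Vector3.Normalize(new Vector3(0, 0, -1));

            // Place a hologram one meter in front of the user.
            Vector3 hologramPosition = headPosition + headForward * 1.0f;

            // Is the user looking at it? Compare the gaze direction with the
            // direction to the hologram using a dot product (1.0 means dead ahead).
            Vector3 toHologram = Vector3.Normalize(hologramPosition - headPosition);
            float alignment = Vector3.Dot(headForward, toHologram);

            // Rotate the hologram's offset 45 degrees around the vertical (Y) axis.
            Matrix4x4 spin = Matrix4x4.CreateRotationY((float)(Math.PI / 4));
            Vector3 rotatedOffset = Vector3.Transform(hologramPosition - headPosition, spin);

            Console.WriteLine("Gaze alignment: " + alignment);
            Console.WriteLine("Rotated offset: " + rotatedOffset);
        }
    }

Dot products, normalization and matrix rotations: nothing exotic, but they have to be second nature before any platform-specific API will make sense.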

Finally, as holographic computing is a wide new world and the developers who take to it will be taking up a completely new role in the workforce, the book will have to find its way to the right sort of people, those with the aptitude and desire to take up this mantle. This requires a discussion of non-obvious skills such as a taste for cooking and travel, an eye for the visual, a grounding in architecture and an understanding of how empty spaces are constructed, and a general knowledge of literary and social theory. The people who create the next world, the augmented world, cannot be mere engineers. They will also need to be poets and madmen.

I want to write a book for them.

Ash Thorp Shout Out

Ash Thorp is another alumnus of Atlanta’s ReMIX conference for code and design who has made it big (or, to be accurate, bigger). As with Chris Twigg and 3Gear, who were profiled in the last post, we were fortunate to have Ash present at ReMIX a few years ago, when it was still possible to get him.

Ash was recently named to the Verge 50, squeezed somewhere between Tim Cook and Matthew McConaughey. The Verge website’s Fifty of 2014 is their list of the “most important people at the intersection of technology, art, science and culture.”

Ash Thorp is a visual designer for film, creating both title sequences and the overall look and feel of a movie. His specialty is sci-fi and superhero films, and you’ve seen his work everywhere from Prometheus to Ender’s Game and beyond. He first came to our attention because of his website, where he lifted the curtain a bit and showed how film design is actually done. From this, we could start piecing together the similarities between what he does and the more standard graphic design typically done for digital and print.

Ash Thorp is also the host of the Collective Podcast, a series of open-ended and meandering conversations about design, life and the universe. It is an earnest attempt by creative professionals to connect the world with their work and to use their work as designers as a prism for understanding the world. There is nothing quite like it in my field, the software development world, and we are all the poorer for it.

He’s also done lots of other cool projects, like his homage to Ghost in the Shell, that are all about being creative and sharing inspiration without an underlying profit motive. He is constantly trying to share and change and mold and give, which is as much a testament to his boundless energy as it is to his essentially giving spirit.

Here is the brilliant presentation Ash gave at the ReMIX Conference a few years ago, revealing his approach to … well … work, life and the universe.

The Future of Interface Technology – Ash Thorp from ReMIX South on Vimeo.

Congrats to NimbleVR

I had the opportunity to meet Rob Wang, Chris Twigg and Kenrick Kin of 3Gear several years ago when I was in San Francisco demoing retail experiences using the Microsoft Kinect and Surface Table at the 2011 Oracle OpenWorld conference. I had been following their work on stereoscopic finger and hand tracking with dual Kinects and sent them what was basically a fan letter, and they were kind enough to invite me to their headquarters.

At the time, 3Gear was sharing office space with several other companies in a large warehouse. Their finger tracking technology blew me away, and I came away with the impression that these were some of the smartest people working with computer vision and the Kinect I had ever met. After all, they’re basically all PhDs with backgrounds at companies like Industrial Light & Magic and Pixar.

I’ve written about them several times on this blog and nominated them for the Kinect v2 preview program. I was extremely excited when Chris agreed to present at the ReMIX conference some friends and I organized in Atlanta a few years ago for designers and developers. Here is a video of Chris’s amazing talk.

Bringing ‘Minority Report’ to your Desk: Gestural Control Using the Microsoft Kinect – Chris Twigg from ReMIX South on Vimeo.

Since then, 3Gear has worked on the problem of finger and hand tracking on various commercial devices in multiple configurations. In October of 2014 the guys at 3Gear launched a Kickstarter project for a sensor they had developed called Nimble Sense. Nimble Sense is a depth sensor built from commodity components that is intended to be mounted on the front of an Oculus Rift headset. It tackles the difficult problem of providing a good input device for a VR system that, by its nature, prevents you from seeing your own hands.

The solution, of course, is to represent the interaction controller – in this case the user’s hands – in the virtual world itself. Leap Motion, which produces another cool finger tracking device, is also working on a solution for this. The advantage the 3Gear people have is that they have been working on this particular problem with deep expertise in gesture tracking – rather than merely finger tracking – as well as visualization.

After exceeding their original goal in pledges, 3Gear abruptly cancelled their Kickstarter on December 11th, and the official 3Gear.com website I had been visiting for news about the company was replaced.

This is actually all good news. Nimble VR, a rebranding of 3Gear for the Nimble Sense project, has been purchased by Oculus (which in turn, you’ll recall, was purchased by Facebook several months ago for around $2 billion).

For me this is a Cinderella story. 3Gear / Nimble VR is an extremely small team of extremely smart people who have passed on much more lucrative job opportunities in order to pursue their dreams. And now they’ve achieved their much-deserved big payday.

Congratulations Rob, Chris and Kenrick!

Minecraft 1.2.4: How to Change Your Skin

Like many fathers, after my son turned seven I regretfully no longer had any idea what he did from day to day.  To my surprise, I recently found out that my eleven-year-old son posts video tutorials to YouTube.  I’m extremely proud and just a little bit concerned.  Here is some of his work:

The Beatles Rock Band

Scott Hanselman, perhaps the current reigning rock star in the Microsoft development world with an incredibly popular blog, Computer Zen, has approximately 17.5 thousand followers on Twitter.  William Shatner, a television actor currently up for an Emmy, has 114 thousand followers.  Colin Meloy, lead singer of a band I like, The Decemberists, has 910 thousand Twitter minions.

As involved as I tend to be in the life-world of software development – and despite its significance in the technological transformation of business and society –  I sometimes have to admit that it is a bit marginal.  Not only are my rock stars different from other people’s.  They are also less significant in the grand scheme of things.  By contrast, the biggest rock stars in society are, in fact, rock stars.

While it would be nice if we treated our teachers, our doctors, our nurses like rock stars, I am actually missing President Obama’s speech on healthcare tonight in order to play the just-released Beatles Rock Band with my family.  According to this glowing review in The New York Times, it is not only the greatest thing since sliced bread – it is possibly better.  [Warning: the phrases cultural watershed and transformative entertainment experience appear in the linked article.]

The game is indeed fun and traces out The Beatles’ careers if one plays in story mode.  We had in fact gotten to 1965 before my 12-year-old noticed the chronology and exclaimed, “Oh my Gawd.  They are so old.  I thought they were from the 80’s or something.”

This got me thinking incoherently about the fickle nature of fame which quickly segued into a daydream about sitting in the green room after a concert while my roadies picked out groupies at the door to come in and engage me in stimulating conversation.

Sometime in the 1990s my philosophy department was trying to lure Hubert Dreyfus, then America’s leading interpreter of continental figures like Heidegger and Foucault, to our university.  Apparently everything was going swimmingly until the haggling started and we discovered that not only did he want the chairmanship of the department but he also wanted a 300K salary and merchandising rights to any action figures based on his work.  300K is a lot of money in any profession, but it is an uber-rock star salary when you consider that most American academics supplement their meager incomes by selling real estate and Amway.  Negotiations quickly deteriorated after that.

I’m not saying, of course, that Hubert Dreyfus doesn’t deserve that kind of scratch.  He had his own groupies and everything.  The problem is simply that our society doesn’t value the kind of contributions to the common weal provided by Professor Dreyfus.

Perhaps a video game could change all that.  I could potentially see myself playing an Xbox game in which I kiss butt as a graduate student (as I recall, I in fact did do that) in a foreign country, write a marginal dissertation, get a teaching position somewhere and then write a counter-intuitive thesis in a major philosophy journal (the kind with at least a thousand subscribers, maybe more) such as “Why Descartes was not a Cartesian”, “Why Spinoza was not a Spinozist”, “Why Plato was not a Platonist” (true, actually) or “Why Nietzsche was not a Nihilist” (at the beginner level).  With the success of that article, the player would then ditch his teaching position at a state college for a big-name university and gather graduate students around himself.  He would then promote his favorite graduate students to tenure-track positions and they would in turn write glowing reviews of all the player’s books as well as teach them in all their classes.  It’s called giveback, and the game would be called Academic Rock Star.  I really could potentially see myself playing that game, possibly.

There are rock stars in every field, and one might offer suggestions for other titles such as Financial Rock Star, Accounting Rock Star, Presidential Candidate Rock Star, Microsoft Excel Rock Star and Blogging Rock Star.

Perhaps the reason Microsoft has not picked up on any of these ideas is because – just as we all secretly believe that we will one day be rich – we all secretly believe that becoming a rock star in our own industry or sub-culture is attainable.

No one really believes, however, that he can ever become like The Beatles.  Consequently we settle for the next best thing: pretending to be The Beatles in a video game.

The Problem with Computer Literacy

A recent post on Boing Boing is titled Paper and pencil better for the brain than software?  The gist of the article and its associated links is that software, in guiding us through common tasks, actually makes us dumber.  The Dutch psychologist Christof van Nimwegen has performed studies demonstrating the deleterious effects of being plugged in.  From the post:

“Van Nimwegen says much software turns us into passive beings, subjected to the whims of computers, randomly clicking on icons and menu options. In the long run, this hinders our creativity and memory, he says.”

This certainly sounds right to me, from personal experience.  About a year ago, my company gave away GPS navigation devices as Christmas gifts to all the consultants.  The results are twofold.  On the one hand, we all make our appointments on time now, because we don’t get lost anymore.  On the other, we have all lost our innate sense of direction — that essential skill that got the species through the hunter-gatherer phase of our development.  Without my GPS, I am effectively as blind as a bat without echolocation.

In Charles Stross’s novel about the near future, Accelerando, this experience is taken a step further.  The protagonist Manfred Macx is at one point mugged on the street, and his connection to the Internet, which he carries around with him hooked up to his glasses, is taken away.  As a man of the pre-singularity, however, his personality has become so distributed over search engines and data portals that without this connection he is no longer able to even identify himself.  This is the nightmare of the technologically dependent.

Doctor van Nimwegen’s study recalls Plato’s ambivalence about the art of writing.  His mentor Socrates, it may be remembered, never put anything into writing, which he found inherently untrustworthy. Consequently all we know of Socrates comes by way of his disciple Plato.  Plato, in turn, was a poet who ultimately became distrustful of his own skills and railed against poetry in his philosophical writings.  From the modern viewpoint, however, whatever it is that we lose when we put “living” thoughts down in writing, surely it is only through poetry that we are able to recover and sustain it.

It is through poetic imagery that Plato explains Socrates’s misgivings about letters in the Phaedrus:

At the Egyptian city of Naucratis, there was a famous old god, whose name was Theuth; the bird which is called the Ibis is sacred to him, and he was the inventor of many arts, such as arithmetic and calculation and geometry and astronomy and draughts and dice, but his great discovery was the use of letters. Now in those days the god Thamus was the king of the whole country of Egypt; and he dwelt in that great city of Upper Egypt which the Hellenes call Egyptian Thebes, and the god himself is called by them Ammon. To him came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; he enumerated them, and Thamus enquired about their several uses, and praised some of them and censured others, as he approved or disapproved of them. It would take a long time to repeat all that Thamus said to Theuth in praise or blame of the various arts. But when they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.

We can certainly see aspects of Manfred Macx’s experience of disorientation in our dependence on tools like Google and Wikipedia, which provide us all with the same degree of wisdom, or at least the same show of wisdom.  In tracking down the above quote about Theuth, I had to rely on a vague reminiscence that this memory passage occurred in either the Timaeus or the Phaedrus. I then used my browser search functionality to track down the specific paragraph.  Very handy, that search feature.  But how much more wonderful it would have been had I been able to call that up from my own theater of memory.

My only stand against the steady march of progress (from which I make my living, it should be remembered) is that I turn my spell-checker off when I write emails and articles.  A consulting manager recently chastised me for this practice, which he found error-prone and somewhat irresponsible.  To this I could only reply, “but I already know how to spell.”

I should have added, “…for now.”

Reflection

Like many others, I recently received the fateful email notifying me that Lutz Roeder will be giving up his work on .NET Reflector, the brilliant and essential tool he developed to peer into the internal implementation of .NET assemblies.  Of course the whole idea of reflecting into an assembly is cheating a bit, since one of the principles of OO design is that we don’t care about implementations, only about contracts.  It gets worse, since one of the main reasons for using .NET Reflector is to reverse engineer someone else’s (particularly Microsoft’s) code.  Yet it is the perfect tool when one is good at reading code and simply needs to know how to do something special — something that cannot be explained, but must be seen.
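
For anyone curious about the machinery underneath, the same kind of peering is available to any .NET program through the System.Reflection API. Here is a minimal sketch (the type being inspected is an arbitrary choice for the example) that lists the non-public methods of a framework class; Reflector’s real trick is going a step further and decompiling the method bodies back into readable C#.

    // A small taste of what Reflector automates: asking the runtime for
    // metadata about members we were never meant to see or call.
    using System;
    using System.Reflection;

    class Peek
    {
        static void Main()
        {
            Type target = typeof(string);   // any type will do; string is just an example

            BindingFlags hidden = BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.Static;
            foreach (MethodInfo method in target.GetMethods(hidden))
            {
                Console.WriteLine(method.ReturnType.Name + " " + method.Name);
            }
        }
    }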

While many terms in computer science are drawn from other scientific fields, reflection appears not to be.  Instead, it is derived from the philosophical “reflective” tradition, and is a synonym for looking inward: introspection.  Reflection and introspection are not exactly the same thing, however.  This is a bit of subjective interpretation, of course, but it seems to me that unlike introspection, which is merely a turning inward, reflection tends to involve a stepping outside of oneself and peering at oneself.  In reflection, there is a moment of stopping and stepping back; the “I” who looks back on oneself is a cold and appraising self, cool and objective as a mirror.

Metaphors pass oddly between the world of philosophy and the world of computer science, often giving rise to peculiar reversals.  When concepts such as memory and CPUs were being developed, the developers of these concepts drew their metaphors from the workings of the human mind.  The storage of a computer is like the human faculty of memory, and so it was called “memory”.  The CPU works like the processing of the mind, and so we called it the central processing unit, sitting in the shell of the computer like a homunculus viewing a theater across which data is streamed.  Originally it was the mind that was the given, while the computer was modeled upon it.  Within a generation, the flow of metaphors has been reversed, and it is not uncommon to find arguments about the computational nature of the brain based on analogies with the workings of computers.  Isn’t it odd that we remember things, just like computers remember things?

The ancient Skeptics had the concept of epoche to describe this peculiar attitude of stepping back from the world, but it wasn’t until Descartes that this philosophical notion became associated with the metaphor of optics.  In a letter to Arnauld from 1648, Descartes writes:

“We make a distinction between direct and reflective thoughts corresponding to the distinction we make between direct and reflective vision, one depending on the first impact of the rays and the other on the second.”

This form of reflective thought also appears at an essential turning point in Descartes’ discussion of his Method, when he realizes that his moment of self-awareness is logically dependent on something higher:

“In the next place, from reflecting on the circumstance that I doubted, and that consequently my being was not wholly perfect, (for I clearly saw that it was a greater perfection to know than to doubt,) I was led to inquire whence I had learned to think of something more perfect than myself;”

Descartes uses the metaphor in several places in the Discourse on Method.  In each case, it is as if, after doing something, for instance doubting, he is looking out of the corner of his eye at a mirror to see what he looks like when he is doing it, like an angler trying to perfect his cast or an orator attempting to improve his hand gestures.  In each case, what one sees is not quite what one expects to see; what one does is not quite what one thought one was doing.  The act of reflection provides a different view of ourselves from what we might observe from introspection alone.  For Descartes, it is always a matter of finding out what one is “really” doing, rather than what one thinks one is doing.

This notion of philosophical “true sight” through reflection is carried forward, on the other side of the Channel, by Locke.  In his Essay Concerning Human Understanding, Locke writes:

“This source of ideas every man has wholly in himself; and though it be not sense, as having nothing to do with external objects, yet it is very like it, and might properly enough be called internal sense. But as I call the other Sensation, so I call this REFLECTION, the ideas it affords being such only as the mind gets by reflecting on its own operations within itself. By reflection then, in the following part of this discourse, I would be understood to mean, that notice which the mind takes of its own operations, and the manner of them, by reason whereof there come to be ideas of these operations in the understanding.”

Within a century, reflection becomes so ingrained in philosophical thought, if not identified with it, that Kant is able to talk of “transcendental reflection”:

“Reflection (reflexio) is not occupied about objects themselves, for the purpose of directly obtaining conceptions of them, but is that state of the mind in which we set ourselves to discover the subjective conditions under which we obtain conceptions.

“The act whereby I compare my representations with the faculty of cognition which originates them, and whereby I distinguish whether they are compared with each other as belonging to the pure understanding or to sensuous intuition, I term transcendental reflection.”

In the 20th century, the reflective tradition takes a peculiar turn.  While the phenomenologists continued to use it as the central engine of their philosophizing, Wilfrid Sellars began his attack on “the myth of the given” upon which phenomenological reflection depended.  From an epistemological viewpoint, Sellars questions the implicit assumption that we, as thinking individuals, have any privileged access to our own mental states. Instead, Sellars posits that what we actually have is not a clear vision of our internal mental states, but rather a culturally mediated “folk psychology” of mind that we use to describe those mental states.  In one fell swoop, Sellars sweeps away the Cartesian tradition of self-understanding that informs the cogito ergo sum.

In a sense, however, this isn’t truly a reversal of the reflective tradition but merely a refinement.  Sellars and his contemporary heirs, such as the Churchlands and Daniel Dennett, certainly delivered a devastating blow to the reliability of philosophical introspection.  The Cartesian project, however, was not one of introspection, nor is the later phenomenological project.  The “given” was always assumed to be unreliable in some way, which is why philosophical “reflection” is required to analyze and correct the “given.”  All that Sellars does is to move the venue of philosophical reflection from the armchair to the laboratory, where it no doubt belongs.

A more fundamental attack on the reflective tradition came from Italy approximately 200 years before Sellars.  Giambattista Vico saw the danger of the Cartesian tradition of philosophical reflection as lying in its undermining of the given of cultural institutions.  A professor of oratory and law, Vico believed that common understanding held a society together, and that the dissolution of civilizations occurred not when those institutions no longer held, but rather when we began to doubt that they even existed.  On the face of it, this sounds like the rather annoying contemporary arguments against “cultural relativism”, but it is actually a bit different.  Vico’s argument is rather that we all live in a world of myths and metaphors that help us to regulate our lives, and in fact contribute to what makes us human and able to communicate with one another.  In the 1730 edition of the New Science, Vico writes:

“Because, unlike in the time of the barbarism of sense, the barbarism of reflection pays attention only to the words and not to the spirit of the laws and regulations; even worse, whatever might have been claimed in these empty sounds of words is believed to be just.  In this way the barbarism of reflection claims to recognize and know the just, what the regulations and laws intend, and endeavors to defraud them through the superstition of words.”

For Vico, the reflective tradition breaks down those civil bonds by presenting man as a rational being who can navigate the world of social institutions as an individual, the solitary cogito who sees clearly, and coolly, the world as it is.

This begets the natural question: does reflection really provide us with true sight, or does it merely dissociate us from our inner lives in such a way that we only see what we want to see?  In computer science, of course (not that this should be any guide to philosophy), the latter is the case.  Reflection is accomplished by publishing metadata about a code library, and that metadata may or may not be true.  It does not allow us to view the code as it really is, but rather provides us with a mediated view of the code, which is then associated with the code.  We assume it is reliable, but there is no way of really knowing until something goes wrong.
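
A small illustration of the point, with a marker attribute invented purely for this example: the metadata announces a property of the code, reflection reports the announcement faithfully, and nothing anywhere checks whether the announcement is true.

    // A hypothetical marker attribute: a claim about the code, not the code itself.
    using System;

    [AttributeUsage(AttributeTargets.Class)]
    class ThreadSafeAttribute : Attribute { }

    [ThreadSafe]                             // the metadata says "thread safe"...
    class Counter
    {
        private int count;
        public void Increment() { count++; } // ...the implementation is anything but.
        public int Count { get { return count; } }
    }

    class Program
    {
        static void Main()
        {
            // Reflection hands back the claim, whether or not the code honors it.
            bool claimsSafety = Attribute.IsDefined(typeof(Counter), typeof(ThreadSafeAttribute));
            Console.WriteLine("Counter claims to be thread safe: " + claimsSafety);
        }
    }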

PBS Sprout and The Hated

So this is how I start the day when I work from home.  I wake up at 5 in the morning, after which I have about 3 hours before anyone else is up.  At 8, the kids start filtering down from upstairs, so I turn PBS Sprout on for them and move from the living room to my office.  PBS Sprout is PBS’s lineup of children’s shows, and our cable provider gives us On Demand access to the episodes, which allows the kids to watch their shows without commercials (oh yes, PBS does have commercials).  My children (at least the youngest) have a fondness for a bald toddler named Caillou.  According to the official site, “the Caillou website and television series features everyday experiences and events that resonate with all children.”   I think most parents find him a bit disturbing — but not as disturbing as Teletubbies, of course.

Before Caillou came on today there was a brief intro for PBS Sprout, and in the background was an interesting rendition of Bob Marley’s Three Little Birds, which brought back a flood of memories.  The version of the song played by PBS Sprout is by Elizabeth Mitchell.  No, not that Elizabeth Mitchell.  This Elizabeth Mitchell.

Elizabeth Mitchell is married to Daniel Littleton, and in fact Daniel and their son perform on that particular Marley track.  Dan Littleton, in turn, used to play in a punk rock band in Annapolis, Maryland, where I went to college.  For my first few years on campus, I used to find chalk drawings of The Hated all over the sidewalks of Annapolis without knowing what the name meant.  Then Dan Littleton ended up going to my college (he was a faculty brat, after all) and it all became clear.

Not only that, but I used to hang out with Mark Fisher, who had played guitar and vocals for The Hated, though by the time I met him he was wearing tweed jackets and translating Greek (I think I did the Philoctetes with him), so I never suspected.

And, mutatis mutandis, now not only has Bob Marley been gentrified for daytime cartoons, but the founder of The Hated has helped to make it possible.  Is this what middle age feels like?

Hear for yourself.

Bob Marley and the Wailers 

Elizabeth Mitchell and family

Impedance Mismatch

Impedance mismatch is a concept from electronics that is gaining some mindshare as an IT metaphor.  It occupies the same social space that cognitive dissonance once did, and works in pretty much the same way to describe any sort of discontinuity.  It is currently being used in IT to describe the difficulty inherent in mapping relational structures, such as relational databases, to object structures common in OOP.  It is shorthand for a circumstance in which two things don’t fit.
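
To make the IT usage concrete, here is a rough sketch, with table and class names invented for the example, of the two shapes that refuse to fit: the database hands back flat rows related by foreign keys, the object model wants a nested graph, and the mapping code in between is where the mismatch lives.

    // How the relational world sees an order: two flat tables joined on OrderId.
    using System.Collections.Generic;

    class OrderRow     { public int OrderId; public string Customer; }
    class OrderLineRow { public int OrderId; public string Product; public int Quantity; }

    // How the object world wants it: an aggregate that owns its children.
    class Order
    {
        public int Id;
        public string Customer;
        public List<OrderLine> Lines = new List<OrderLine>();
    }
    class OrderLine { public string Product; public int Quantity; }

    static class Mapper
    {
        // The hand-written glue that object-relational mappers exist to generate.
        public static Order ToObject(OrderRow row, IEnumerable<OrderLineRow> lineRows)
        {
            Order order = new Order { Id = row.OrderId, Customer = row.Customer };
            foreach (OrderLineRow line in lineRows)
            {
                if (line.OrderId == row.OrderId)
                    order.Lines.Add(new OrderLine { Product = line.Product, Quantity = line.Quantity });
            }
            return order;
        }
    }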

Broadening the metaphor a bit, here is my impedance mismatch.  I like reading philosophy.  Unfortunately, I also like reading comic books.  I’m not a full-blown collector or anything.  I pick up comics from the library, and occasionally just sit in the bookstore and catch up on certain franchises I like.  I guess that in the comic book world, I’m the equivalent of someone who only drinks after sun-down or only smokes when someone hands him a cigarette, but never actually buys a pack himself.  A parasite, yes, but not an addict.

The impedance mismatch comes from the sense that I shouldn’t waste time reading comics.  They do not inhabit the same mental world as the other things I like to read. I often sit thinking that I ought to be reading Schopenhauer, with whom I am remarkably unfamiliar for a thirty-something, or at least reading through Justin Smith’s new book on WCF Programming, but instead find myself reading an Astro City graphic novel because Rocky Lhotka recommended it to me.  The problem is not that I feel any sort of bad faith about reading comic books when I ought to be reading something more mature.  Rather, I fear that I am actually being true to myself.

A passage from an article by Jonathan Rosen in the most recent New Yorker nicely illustrates this sort of impedance mismatch:

Sometime in 1638, John Milton visited Galileo Galilei in Florence. The great astronomer was old and blind and under house arrest, confined by order of the Inquisition, which had forced him to recant his belief that the earth revolves around the sun, as formulated in his “Dialogue Concerning the Two Chief World Systems.” Milton was thirty years old—his own blindness, his own arrest, and his own cosmological epic, “Paradise Lost,” all lay before him. But the encounter left a deep imprint on him. It crept into “Paradise Lost,” where Satan’s shield looks like the moon seen through Galileo’s telescope, and in Milton’s great defense of free speech, “Areopagitica,” Milton recalls his visit to Galileo and warns that England will buckle under inquisitorial forces if it bows to censorship, “an undeserved thraldom upon learning.”

Beyond the sheer pleasure of picturing the encounter—it’s like those comic-book specials in which Superman meets Batman—there’s something strange about imagining these two figures inhabiting the same age.

The Aesthetics and Kinaesthetics of Drumming

Kant’s Critique of Judgment, also known as the Third Critique since it follows the first on Reason and the second on Morals, is a masterpiece in the philosophy of aesthetics.  With careful reasoning, Kant examines the experience of aesthetic wonder, The Sublime, and attempts to relate it to the careful delineations he has made in his previous works between the phenomenal and noumenal realms.  He appears to allow in the Third Critique what he denies us in the First: a way to go beyond mere experience in order to perceive a purpose in the world.  Along the way, he passes judgments on things like beauty and genius that left an indelible mark on the Romanticism of the 19th century.

Taste, like the power of judgment in general, consists in disciplining (or training) genius.  It severely clips its wings, and makes it civilized, or polished; but at the same time it gives it guidance as to how far and over what it may spread while still remaining purposive.  It introduces clarity and order into a wealth of thought, and hence makes the ideas durable, fit for approval that is both lasting and universal, and hence fit for being followed by others…

Kant goes on to say that where taste and genius conflict, a sacrifice must be made on the side of genius.

In his First Critique, Kant discusses the "scandal of philosophy" — that after thousands of years philosophers still cannot prove what every simple person knows — that the external world is real.  There are other scandals, too, of course.  There are many questions which, after thousands of years, philosophers continue to argue over and, ergo, for which they have no definitive answers.  There are also the small scandals which give an aspiring philosophy student pause, and make him wonder if the philosophizing discipline isn’t a fraud and a sham after all, such as Martin Heidegger’s Nazi affiliation.  Here the question isn’t why he didn’t realize what every simple German should have known, since even the simple Germans were quite taken up with the movement.  What leaves a bad taste, however, is the sense that a great philosopher should have known better.

A minor scandal concerns Immanuel Kant’s infamous lack of taste.  When it came to music, he seems to have had a particular fondness for martial music, das heißt, marching bands with lots of drumming and brass.  He discouraged his students from learning to actually play music because he felt it was too time-consuming.  We might say that in his personal life, when his taste and his genius came into conflict, Kant chose to sacrifice his taste.

I think I will, also.  In Rock Band, the drums are notoriously the most difficult instrument to play well.  They are also the faux instrument that most resembles the real thing, and it is claimed by some that if you become a good virtual drummer, you will also in the process become a good real drummer.  I’ve tried it, but I can’t get beyond the Intermediate level.  I can sing and play guitar on Hard, but the drums have a sublime complexity that exceeds my ability to cope.  With uncanny timing, Wired magazine has come out with a walkthrough for the drums in Rock Band (h/t to lifehacker.com).  It mostly concerns the kick pedal and two alternative techniques for working with it, heel-up and heel-down (wax-on/wax-off?).  It involves a bit of geometry and a lot of implicit physics.  I would have liked a little more help with figuring out the various rhythm techniques, but according to Wired, I would get the best results by simply learning real drum technique, either with an instructor or through YouTube.

I wonder what Kant would say about that.