Speech Recognition And Synthesis Managed APIs in Windows Vista: Part III

Voice command technology, as exemplified in Part II, is probably the most useful and easiest-to-implement aspect of the Speech Recognition functionality provided by Vista.  In a few days of work, any current application can be enabled to use it, and the potential for streamlining workflow and making it more efficient is truly breathtaking.  The cool factor, of course, is also very high.

Having grown up watching Star Trek reruns, however, I can’t help but feel that the dictation functionality is much more interesting than the voice command functionality.  Computers are meant to be talked to and told what to do, as in that venerable TV series, not cajoled into doing tricks for us based on finger motions over a typewriter.  My long-term goal is to be able to code by talking into my IDE in order to build UML diagrams and then, at a word, turn that into an application.  What a brave new world that will be.  Toward that end, the SR managed API provides the DictationGrammar class.

Whereas the Grammar class works as a gatekeeper, restricting the phrases that get through to the speech recognized handler down to a select set of rules, the DictationGrammar class, by default, kicks out the jams and lets all phrases through to the recognized handler.
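
To make the contrast concrete, here is a minimal sketch using the standard System.Speech.Recognition types.  The command list is hypothetical rather than taken from the Speechpad code, but the classes and calls are the ones this tutorial relies on:

        // A command Grammar acts as a gatekeeper: only the listed
        // phrases will ever reach the SpeechRecognized handler.
        Choices commands = new Choices(new string[] { "cut", "copy", "paste", "delete" });
        Grammar commandGrammar = new Grammar(new GrammarBuilder(commands));
        recognizer.LoadGrammar(commandGrammar);

        // A DictationGrammar opens the gates: arbitrary phrases are
        // recognized and passed through to the same handler.
        recognizer.LoadGrammar(new DictationGrammar());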

In order to make Speechpad a dictation application, we will add the default DictationGrammar object to the list of grammars used by our speech recognition engine.  We will also add a toggle menu item to turn dictation on and off.  Finally, we will alter the SpeechToAction() method to insert any phrases that are not voice commands into the current Speechpad document as text.  Create a local instance of DictationGrammar for our Main form, and then instantiate it in the Main constructor.  Your code should look like this:

        #region Local Members

        private SpeechSynthesizer synthesizer = null;
        private string selectedVoice = string.Empty;
        private SpeechRecognitionEngine recognizer = null;
        private DictationGrammar dictationGrammar = null;

        #endregion

        public Main()
        {
            InitializeComponent();
            synthesizer = new SpeechSynthesizer();
            LoadSelectVoiceMenu();
            recognizer = new SpeechRecognitionEngine();
            InitializeSpeechRecognitionEngine();
            dictationGrammar = new DictationGrammar();
        }

Create a new menu item under the Speech menu and label it “Take Dictation”.  Name it takeDictationMenuItem for convenience. Add a handler for the click event of the new menu item, and stub out TurnDictationOn() and TurnDictationOff() methods.  TurnDictationOn() works by loading the local dictationGrammar object into the speech recognition engine. It also needs to turn speech recognition on if it is currently off, since dictation will not work if the speech recognition engine is disabled. TurnDictationOff() simply removes the local dictationGrammar object from the speech recognition engine’s list of grammars.

		
        private void takeDictationMenuItem_Click(object sender, EventArgs e)
        {
            if (this.takeDictationMenuItem.Checked)
            {
                TurnDictationOff();
            }
            else
            {
                TurnDictationOn();
            }
        }

        private void TurnDictationOn()
        {
            if (!speechRecognitionMenuItem.Checked)
            {
                TurnSpeechRecognitionOn();
            }
            recognizer.LoadGrammar(dictationGrammar);
            takeDictationMenuItem.Checked = true;
        }

        private void TurnDictationOff()
        {
            if (dictationGrammar != null)
            {
                recognizer.UnloadGrammar(dictationGrammar);
            }
            takeDictationMenuItem.Checked = false;
        }
        

For an extra touch of elegance, alter the TurnSpeechRecognitionOff() method by adding a line of code to turn dictation off whenever speech recognition is disabled:

        TurnDictationOff();
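
In context, the altered method might look something like the following.  The first two lines of the body are an assumption about how TurnSpeechRecognitionOff() was implemented in Part II (stopping asynchronous recognition and unchecking the menu item); only the final call is new:

        private void TurnSpeechRecognitionOff()
        {
            // Assumed from Part II: halt asynchronous recognition and
            // uncheck the Speech Recognition menu item.
            recognizer.RecognizeAsyncStop();
            speechRecognitionMenuItem.Checked = false;

            // New: dictation cannot continue without the engine running.
            TurnDictationOff();
        }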

Finally, we need to update the SpeechToAction() method so it will insert any text that is not a voice command into the current Speechpad document.  Use the default case of the switch block to call the InsertText() method of the current document.

        
        private void SpeechToAction(string text)
        {
            TextDocument document = ActiveMdiChild as TextDocument;
            if (document != null)
            {
                DetermineText(text);
                switch (text)
                {
                    case "cut":
                        document.Cut();
                        break;
                    case "copy":
                        document.Copy();
                        break;
                    case "paste":
                        document.Paste();
                        break;
                    case "delete":
                        document.Delete();
                        break;
                    default:
                        document.InsertText(text);
                        break;
                }
            }
        }

        

With that, we complete the speech recognition functionality for Speechpad. Now try it out. Open a new Speechpad document and type “Hello World.”  Turn on speech recognition.  Select “Hello” and say “delete.”  Turn on dictation.  Say “brave new.”

This tutorial has demonstrated the essential code required to use speech synthesis, voice commands, and dictation in your .NET 2.0 Vista applications.  It can serve as the basis for building speech recognition tools that take advantage of default as well as custom grammar rules to build advanced application interfaces.  Besides the strange compatibility issues between Vista and Visual Studio, at the moment the greatest hurdle to using the Vista managed speech recognition API is the remarkable dearth of documentation and samples.  This tutorial is intended to help alleviate that problem by providing a hands-on introduction to this fascinating technology.

Methodology and Methodism


This is simply a side-by-side pastiche of the history of Extreme Programming and the history of Methodism.  It is less a commentary or argument than an experiment to see if I can format this correctly in HTML.  The history of XP is drawn from Wikipedia.  The history of Methodism is drawn from John Wesley’s A Short History of Methodism.


Software development in the 1990s was shaped by two major influences: internally, object-oriented programming replaced procedural programming as the programming paradigm favored by some in the industry; externally, the rise of the Internet and the dot-com boom emphasized speed-to-market and company-growth as competitive business factors. Rapidly-changing requirements demanded shorter product life-cycles, and were often incompatible with traditional methods of software development.


The Chrysler Comprehensive Compensation project was started in order to determine the best way to use object technologies, using the payroll systems at Chrysler as the object of research, with Smalltalk as the language and GemStone as the persistence layer. They brought in Kent Beck, a prominent Smalltalk practitioner, to do performance tuning on the system, but his role expanded as he noted several issues they were having with their development process. He took this opportunity to propose and implement some changes in their practices based on his work with his frequent collaborator, Ward Cunningham.


The first time I was asked to lead a team, I asked them to do a little bit of the things I thought were sensible, like testing and reviews. The second time there was a lot more on the line. I thought, “Damn the torpedoes, at least this will make a good article,” [and] asked the team to crank up all the knobs to 10 on the things I thought were essential and leave out everything else. —Kent Beck


Beck invited Ron Jeffries to the project to help develop and refine these methods. Jeffries thereafter acted as a kind of coach to instill the practices as habits in the C3 team. Information about the principles and practices behind XP was disseminated to the wider world through discussions on the original Wiki, Cunningham’s WikiWikiWeb. Various contributors discussed and expanded upon the ideas, and some spin-off methodologies resulted (see agile software development). XP concepts have also been explained, for several years, using a hypertext system map on the XP website at www.extremeprogramming.org, dating from circa 1999.


Beck edited a series of books on XP, beginning with his own Extreme Programming Explained (1999, ISBN 0-201-61641-6), spreading his ideas to a much larger, yet very receptive, audience. Authors in the series went through various aspects attending XP and its practices, including even a book critical of the practices. XP created quite a buzz in the late 1990s and early 2000s, seeing adoption in a number of environments radically different from its origins.


Extreme Programming Explained describes Extreme Programming as being:



  • An attempt to reconcile humanity and productivity
  • A mechanism for social change
  • A path to improvement
  • A style of development
  • A software development discipline

The advocates of XP argue that the only truly important product of the system development process is code (a concept to which they give a somewhat broader definition than might be given by others). Without code you have nothing.

Coding can also help to communicate thoughts about programming problems. A programmer dealing with a complex programming problem and finding it hard to explain the solution to fellow programmers might code it and use the code to demonstrate what he or she means. Code, say the exponents of this position, is always clear and concise and cannot be interpreted in more than one way. Other programmers can give feedback on this code by also coding their thoughts.

 The high discipline required by the original practices often went by the wayside, causing certain practices to be deprecated or left undone on individual sites.


Agile development practices have not stood still, and XP is still evolving, assimilating more lessons from experiences in the field. In the second edition of Extreme Programming Explained, Beck added more values and practices and differentiated between primary and corollary practices.


In November, 1729, four young gentlemen of Oxford — Mr. John Wesley, Fellow of Lincoln College; Mr. Charles Wesley, Student of Christ Church; Mr. Morgan, Commoner of Christ Church; and Mr. Kirkham, of Merton College — began to spend some evenings in a week together, in reading, chiefly, the Greek Testament. The next year two or three of Mr. John Wesley’s pupils desired the liberty of meeting with them; and afterwards one of Mr. Charles Wesley’s pupils. It was in 1732, that Mr. Ingham, of Queen’s College, and Mr. Broughton, of Exeter, were added to their number. To these, in April, was joined Mr. Clayton, of Brazen-nose, with two or three of his pupils. About the same time Mr. James Hervey was permitted to meet with them; and in 1735, Mr. Whitefield.


The exact regularity of their lives, as well as studies, occasioned a young gentleman of Christ Church to say, “Here is a new set of Methodists sprung up;” alluding to some ancient Physicians who were so called. The name was new and quaint; so it took immediately, and the Methodists were known all over the University.


They were all zealous members of the Church of England; not only tenacious of all her doctrines, so far as they knew them, but of all her discipline, to the minutest circumstance. They were likewise zealous observers of all the University Statutes, and that for conscience’ sake. But they observed neither these nor anything else any further than they conceived it was bound upon them by their one book, the Bible; it being their one desire and design to be downright Bible-Christians; taking the Bible, as interpreted by the primitive Church and our own, for their whole and sole rule.


The one charge then advanced against them was, that they were “righteous overmuch;” that they were abundantly too scrupulous, and too strict, carrying things to great extremes: In particular, that they laid too much stress upon the Rubrics and Canons of the Church; that they insisted too much on observing the Statutes of the University; and that they took the Scriptures in too strict and literal a sense; so that if they were right, few indeed would be saved.


In October, 1735, Mr. John and Charles Wesley, and Mr. Ingham, left England, with a design to go and preach to the Indians in Georgia: But the rest of the gentlemen continued to meet, till one and another was ordained and left the University. By which means, in about two years’ time, scarce any of them were left.


In February, 1738, Mr. Whitefield went over to Georgia with a design to assist Mr. John Wesley; but Mr. Wesley just then returned to England. Soon after he had a meeting with Messrs. Ingham, Stonehouse, Hall, Hutchings, Kinchin, and a few other Clergymen, who all appeared to be of one heart, as well as of one judgment, resolved to be Bible-Christians at all events; and, wherever they were, to preach with all their might plain, old, Bible Christianity.


They were hitherto perfectly regular in all things, and zealously attached to the Church of England. Meantime, they began to be convinced, that “by grace we are saved through faith;” that justification by faith was the doctrine of the Church, as well as of the Bible. As soon as they believed, they spake; salvation by faith being now their standing topic. Indeed this implied three things: (1) That men are all, by nature, “dead in sin,” and, consequently, “children of wrath.” (2) That they are “justified by faith alone.” (3) That faith produces inward and outward holiness: And these points they insisted on day and night. In a short time they became popular Preachers. The congregations were large wherever they preached. The former name was then revived; and all these gentlemen, with their followers, were entitled Methodists.


In March, 1741, Mr. Whitefield, being returned to England, entirely separated from Mr. Wesley and his friends, because he did not hold the decrees. Here was the first breach, which warm men persuaded Mr. Whitefield to make merely for a difference of opinion. Those, indeed, who believed universal redemption had no desire at all to separate; but those who held particular redemption would not hear of any accommodation, being determined to have no fellowship with men that “were in so dangerous errors.” So there were now two sorts of Methodists, so called; those for particular, and those for general, redemption.

Methodology and its Discontents


A difficult aspect of software programming is that it is always changing.  This is no doubt true in other fields as well, but there is no tenure system in software development.  Whereas in academe it is quite plausible to reach a certain point in your discipline and then kick the ladder out behind you, as Richard Rorty was at one point accused of doing, this isn’t so in technology.  In technology, it is the young who set the pace, and the old who must keep up.


The rapid changes in technology, with which the weary developer must constantly attempt to stay current, can be broken down into two types: advances in tools and advances in methodologies.  While there is always a certain amount of fashion determining which tools get used (for instance, which is the better language, VB.NET or C#?), for the most part development tools are judged based upon their effectiveness. This is not necessarily true of the methodologies of software development.  That is odd, since one would suppose that the whole point of a method is that it is a way of guaranteeing certain results: use method A, and B (fame, fortune, happiness) will follow.


There is even something of a backlash against methodism (or, more specifically, against certain kinds of methodism) being led, interestingly enough, by tool builders.  For instance, Rocky Lhotka, the creator of the CSLA framework, says:



I am a strong believer in responsibility-driven, behavioral, CRC OO design – and that is very compatible with the concepts of Agile. So how can I believe so firmly in organic OO design, and yet find Agile/XP/SCRUM to be so…wrong…?


I think it is because the idea of taking a set of practices developed by very high-functioning people, and cramming them into a Methodology (notice the capital M!), virtually guarantees mediocrity. That, and some of the Agile practices really are just plain silly in many contexts.


Joel Spolsky, an early Microsoft Excel product manager and software entrepreneur, has also entered the fray a few times.  Most recently, a blog post by Steve Yegge (a developer at Google) set the technology community on fire:



Up until maybe a year ago, I had a pretty one-dimensional view of so-called “Agile” programming, namely that it’s an idiotic fad-diet of a marketing scam making the rounds as yet another technological virus implanting itself in naive programmers who’ve never read “No Silver Bullet”, the kinds of programmers who buy extended warranties and self-help books and believe their bosses genuinely care about them as people, the kinds of programmers who attend conferences to make friends and who don’t know how to avoid eye contact with leaflet-waving fanatics in airports and who believe writing shit on index cards will suddenly make software development easier.
You know. Chumps.


The general tenor of these skeptical treatments is that Extreme Programming (and its umbrella methodology, Agile) is at best a put-on, and at worst a bit of a cult.  What is lacking in these analyses, however, is an explication of why programming methodologies like Extreme Programming are so appealing.  Software development is a complex endeavor, and its practitioners are always left with a sense that they are being left behind.  Each tool that comes along to make development easier ends up making it more difficult and error-prone (though also more powerful).  The general trend of software development has consistently been toward not less complexity, but greater complexity.  This leads to an endemic sense of insecurity among developers, as well as a sense that standing still is tantamount to falling behind.  Developers are, of course, generally well compensated for this burden (unlike, for instance, untenured academics who suffer from the same constraints), so there is no sense in feeling sorry for them.  But it should be clear, given all this, why there is so much appeal in the notion that there is a secret method which, if followed correctly, will grant admission into the kingdom and relieve developers of their anxieties and doubts.


One of the strangest aspects of these methodologies is the notion that methods need evangelists; yet that is what their proponents self-consciously call themselves.  There are Extreme Programming evangelists, SCRUM evangelists, Rational evangelists, and so on, who write books and tour the country giving multimedia presentations, for a price, about why you should be using their methodology and how it will transform you and your software projects.


So are software methodologies the secular equivalent of evangelical revival meetings?  It would depend, I suppose, on whether the purpose of these methodologies is to make money or to save souls.  Let us hope it is the former, and that lots of people are simply making lots of money by advocating the use of these methodologies.

Metaphors and Software Development

“But the greatest thing by far is to have a mastery of metaphor. This alone cannot be imparted by another; it is the mark of genius, for to make good metaphors implies an eye for resemblances.”  — Aristotle, The Poetics

It is a trivial observation that software programmers use a lot of metaphors.  All professions are obliged to invoke metaphors in one way or another, but in software programming, I think, the practice is so extensive that we are not even aware that creating metaphors is our primary job.  We are only ever immediately aware of the metaphorical character of our profession when we use similes to explain technical matters to our managers.  Such-and-such technology will improve our revenue stream because it is like a new and more reliable conduit for information. It can be combined with sundry-and-other technology, which acts as a locked box for this information.


When speaking with technical colleagues, in turn, we use a different set of metaphors when explaining unfamiliar technology.  We say that “Ajax is like Flash”, “Flash is like instant messaging”, “instant messaging is like talking to services in a SOA”, “SOA is like mainframe programming”, b is like c, c is like d, d is like e, and so on.  Every technology is explained in reference to another technology, which in turn is just a metaphor for something else.  But does explicating one metaphor by simply referencing another metaphor really explain anything?  Is there a final link in the chain that ultimately cascades meaning back up the referential chain?


I have occasionally seen these referential chains referred to as a prison house of language, though perhaps a house of mirrors would be more appropriate in this case.  We work with very complex concepts, and the only way to communicate them is through metaphors, even when, as in a fun house, the only points of reference we have for explaining these concepts are other concepts we find reflected in a reflection of a reflection.  This metaphorical character of our occupation, however, is typically hidden from us because we use a different set of terms to describe what we do.  We don’t typically speak about metaphors in our programming talk; instead we speak of abstractions.


Not only are abstractions explained in terms of other abstractions, but they are also typically abstractions for other abstractions.  In Joel Spolsky’s explication of “Leaky Abstractions” (which is, yes, also a metaphor) we discover that TCP is really an abstraction thrown over IP.  But IP is itself an abstraction of other technologies, which in turn may in themselves also involve further abstractions.  At what point do the abstractions actually end?  Is it when we get to the assembler language level?  Is it when we get to machine language?  Do we finally hit bottom when we reach the point of talking about electricity and circuit boards?


Again, I will posit (in a handwaving and unsubstantiated manner) that the main occupation of a software programmer is working with metaphors.  Taking this as a starting point, it is strange that in the many articles and discussion threads addressing the question of what makes a good programmer, poetry is never brought up. Overlooking Aristotle’s professed opinion that a gift for metaphor is something that cannot be taught, we might assume that if it is indeed something that can be cultivated, a likely starting point is the reading and writing of poetry.  It would be a pleasant change if, in looking over technical resumes, we also started looking for signs that prospective employees at BigTech.com were published in poetry journals or participated in local poetry readings.


But perhaps that is asking for too much.  The only other profession in which metaphors are applied so extensively is politics.  Just as metaphors are called abstractions in programming, in politics they are called “framing”.  Behind the notion of framing in politics is the assumption that certain metaphors are simply more naturally appealing than others.  The proper metaphor will move one’s audience’s emotions either toward one’s own platform or against one’s opponent’s platform.  The mastery of metaphor in politics, consequently, entails being able to associate one’s own position with the most emotively powerful metaphor into which that position can be fit.


One interesting aspect of the endeavor of framing is that a political metaphor is required to be fitting; that is, the metaphor one uses to explain a given position or argument must be appropriate to that argument, and a lack of fit will generally detract from the force of the metaphor.  This question of fittingness provides two ways to characterize political metaphors.  There are metaphors that seem to apply naturally to a given circumstance, and hence garner for the person who comes up with them a reputation for vision and articulateness.  Then there are metaphors so powerful that it does not matter much that the circumstance to which they are applied is not so fitting, in which case the person who comes up with them gains a reputation as a scheming framer.


Determining which is which, of course, generally depends on where one is standing, and in either case we can say that both are masters of metaphor in Aristotle’s sense.  However, because there is so much question about the integrity of metaphors in politics, it is tempting to eschew the whole thing.  As wonderful as metaphors are, politics tends to make everything dirty in the end.


Which leaves software programming as the only place where metaphors can be studied and applied in a disinterested manner.  In programming, the main purpose of our abstractions is not to move people’s emotions, but rather to clarify concepts, spread comprehension and make things work better.  It is the natural home not only for mathematicians, but for poets.

Giulio Camillo, father of the Personal Computer


I am not the first to suggest it, but I will add my voice to those that want to claim that Giulio Camillo built the precursor of the modern personal computer in the 16th century.  Claiming that anyone invented anything is always a precarious venture, and it can be instructive to question the motives of such attempts.  For instance, trying to determine whether Newton or Leibniz invented calculus is a simple question of who most deserves credit for this remarkable achievement. 


Sometimes the question of firsts is intended to reveal something that we did not know before, such as Harold Bloom’s suggestion that Shakespeare invented the idea of personality as we know it.  In making the claim, Bloom at the same time makes us aware of the possibility that personality is not in fact something innate, but something created.  Edmund Husserl turns this notion on its head a bit with his reference in his writings to the Thales of Geometry.  Geometry, unlike the notion of personality, cannot be so easily reduced to an invention, since it is eidetic in nature.  It is always true, whether anyone understands geometry or not.  And so there is a certain irony in holding Thales to be the originator of Geometry since Geometry is a science that was not and could not have been invented as such.  Similarly, each of us, when we discover the truths of geometry for ourselves, becomes in a way a new Thales of Geometry, having made the same observations and realizations for which Thales receives credit. 


Sometimes the recognition of firstness is a way of initiating people into a secret society.  Such, it struck me, was the case when I read as a teenager in Stephen Jay Gould that Darwin was not the first person to discover the evolutionary process, but that it was in fact another naturalist named Alfred Russel Wallace, and suddenly a centuries-long conspiracy to steal credit from the truly deserving Wallace was revealed to me, or so it had seemed at the time.


Origins play a strange role in etymological considerations, and when we read Aristotle’s etymological ruminations, there is certainly a sense that the first meaning of a word will somehow provide the key to understanding the concepts signified by the word.  There is a similar intuition at work in the discussions of ‘natural man’ to be found in the political writings of Hobbes, Locke, and Rousseau.  For each, the character of the natural man determines the nature of the state, and consequently how we are to understand it best.  For Hobbes, famously, the life of this kind of man is rather unpleasant.  For Locke, the natural man is typified by his rationality.  For Rousseau, by his freedom.  In each case, the character of the ‘natural man’ serves as a sort of gravitational center for understanding man and his works at any time. I have often wondered whether these discussions of the state of the natural man were intended as scientific deductions or merely as metaphors by each of these great political philosophers.  I lean toward the latter opinion, in which case another way to understand firsts is not as an attempt at historical accuracy, but rather as an attempt to find the proper metaphor for something modern.


So who invented the computer?  Was it Charles Babbage with his Difference Engine in the 19th century, or Alan Turing in the 20th with his template for the Universal Machine?  Or was it Ada Lovelace, as some have suggested, the daughter of Lord Byron and collaborator with Charles Babbage who possibly did all the work while Babbage receives all the credit?


My question is a simpler one: who invented the personal computer, Steve Jobs or Giulio Camillo?  I award the laurel to Camillo, who was known in his own time as the Divine Camillo because of the remarkable nature of his invention.  And in doing so, of course, I am merely attempting to define what the personal computer really is: to locate the gravitational center that is the role of the personal computer in our lives.


Giulio Camillo spent long years working on his Memory Theater, a miniaturized Vitruvian theater still big enough to walk into, basically a box, that would provide the person who stood before it with the gift most prized by Renaissance thinkers: the eloquence of Cicero.  The theater itself was arranged with images and figures from Greek and Roman mythology.  Throughout it, Christian references were intermixed with Hermetic and Kabbalistic symbols.  In small boxes beneath various statues inside the theater, fragments and adaptations of Cicero’s writings could be pulled out and examined.  Through the proper physical arrangement of the fantastic, the mythological, the philosophical, and the occult, Camillo sought to provide a way for anyone who stepped before his theater to discourse on any subject no less fluently and eloquently than Cicero himself.


Eloquence in the 16th century was understood as not only the ability of the lawyer or statesman to speak persuasively, but also the ability to evoke beautiful and accurate metaphors, the knack for delighting an audience, the ability to instruct, and the mastery of diverse subjects that could be brought forth from memory to enlighten one’s listeners.  Already in Camillo’s time, the mass production of books was coming into its own and transforming culture.  Along with it, the ancient arts of memory and of eloquence (by way of analogy, we might call them literacy today), whose paragon was recognized to be Cicero, were in decline.  Thus Camillo came along at the end of this long tradition of eloquence to invent a box that would capture all that was best of the old world that was quickly disappearing.  He created, in effect, an artificial memory that anyone could use, simply by stepping before it, to invigorate himself with the accumulated eloquence of all previous generations.


And this is how I think of the personal computer.  It is a box, occult in nature, that provides us with an artificial memory to make us better than we are, better than nature made us.  Nature distributes her gifts randomly, while the personal computer corrects that inherent injustice.  The only limitation to the personal computer, as I see it, is that it can only be a repository for all the knowledge humanity has already acquired.  It cannot generate anything new, as such.  It is a library and not a university.


Which is where the internet comes in.  The personal computer, once it is attached to the World Wide Web, becomes invigorated by the chaos and serendipity that is the internet.  Not only do we have the dangerous chaos of viruses and Trojan horses, but also the positive chaos of online discussions, the putting on of masks and mixing with the online personas of others, the random following of links across the internet that ultimately leads us to make new connections between disparate concepts in ways that seem natural and inevitable.


This leads me to the final connection I want to make in my overburdened analogy.  Just as the personal computer is not merely a box, but also a doorway to the internet, so Giulio Camillo’s Theater of Memory was tied to a Neoplatonic worldview in which the idols of the theater, if arranged properly and fittingly, could draw down the influences of the intelligences, divine beings known variously as the planets (Mars, Venus, etc.), the Sephiroth, or the Archangels.  By standing before Camillo’s box, the spectator was immediately plugged into these forces, the consequences of which are difficult to assess.  There is danger, but also much wonder, to be found on the internet.