AWE Presentation

My colleague, Astrini Sie, and I delivered a talk called Porting HoloLens Apps to Other Platforms at AWE 2023. Astrini is an AI researcher, so we threaded AI and AR together to see what AI can teach AR.

Here is the synopsis and a link.

https://www.youtube.com/watch?v=BkE3y_uSfa8

“Although Microsoft has substantially withdrawn from its Mixed Reality and Metaverse ambitions, this left behind a sizable catalog of community built enterprise apps and games, as well as a toolset, the MRTK, on which they were developed. In this talk, we will walk you through the steps required to port a HoloLens app built on one of the MRTK versions to other platforms such as the Magic Leap 2 and the Meta Quest Pro. We hope to demonstrate that, due to clever engineering, whole ecosystems can be moved from one platform to other newer platforms, where they can continue to evolve and thrive.”

Simulations and Simulacra

In a 2010 piece for The New Yorker called Painkiller Deathstreak, the novelist Nicholson Baker reported on his efforts to enter the world of console video games with forays into triple-A titles such as Call of Duty: World at War, Halo 3: ODST, God of War III, Uncharted 2: Among Thieves, and Red Dead Redemption.


“[T]he games can be beautiful. The ‘maps’ or ‘levels’—that is, the three-dimensional physical spaces in which your character moves and acts—are sometimes wonders of explorable specificity. You’ll see an edge-shined, light-bloomed, magic-hour gilded glow on a row of half-wrecked buildings and you’ll want to stop for a few minutes just to take it in. But be careful—that’s when you can get shot by a sniper.”

In his journey through worlds rendered on what was considered high-end graphics hardware a decade ago, Baker discovered both the frustrations of playing war games against 13-year-olds (by now old enough to be stationed in Afghanistan) and the peace to be found in virtual environments like Red Dead Redemption’s Western simulator.


“But after an exhausting day of shooting and skinning and looting and dying comes the real greatness of this game: you stand outside, off the trail, near Hanging Rock, utterly alone, in the cool, insect-chirping enormity of the scrublands, feeling remorse for your many crimes, with a gigantic predawn moon silvering the cacti and a bounty of several hundred dollars on your head. A map says there’s treasure to be found nearby, and that will happen in time, but the best treasure of all is early sunrise. Red Dead Redemption has some of the finest dawns and dusks in all of moving pictures.”

I was reminded of this essay yesterday when YouTube’s algorithms served up a video of Red Dead Redemption 2 (the sequel to the game Baker wrote about) being rendered in 8K on an Nvidia RTX 3090 graphics card with ray tracing turned on.

The encroachment of simulations upon the real world, to the point that they not only look as good as the real world (real?) but in some respects even better, has interestingly driven the development of the sorts of AI algorithms that serve these videos up to us on our computers. Simulations require mathematical calculations that cannot be done nearly as fast on standard CPUs. This is why hardcore gamers pay upwards of a thousand dollars for bleeding-edge graphics cards specially designed to perform massively parallel floating-point calculations.


These types of calculations, interestingly, are also required for working with the large data sets used in machine learning. The algorithms that steer our online interests, after all, are themselves simulations, designed to replicate aspects of the real world in order to make predictions about what sorts of videos (based on a predictive model of human behavior honed to our particular tastes) are most likely to increase our dwell time on YouTube.


Simulations, models and algorithms at this point are all interchangeable terms. The best computer chess programs may or may not understand how chess players think (this is a question for the philosophers). What cannot be denied is that they adequately simulate a master chess player who can beat all the other chess players in the world. Other programs model the stock market: we run them against the past to see how accurate they are as simulations, then point them at the future to find out what will happen tomorrow, at which point we call them algorithms. Like memory, presence and anticipation for us meatware beings, simulation, model and algorithm make up the false consciousness of AIs.


Simulacra and Simulation, Jean Baudrillard’s 1981 treatise on virtual reality, opens with an analysis of the Jorge Luis Borges short story On Exactitude in Science, about imperial cartographers who strive after precision by creating ever larger and larger maps, until the maps eventually achieve a one-to-one scale, becoming exact even as they overtake their intended purpose.


“The territory no longer precedes the map, nor survives it. Henceforth, it is the map that precedes the territory – precession of simulacra – it is the map that engenders the territory and if we were to revive the fable today, it would be the territory whose shreds are slowly rotting across the map. It is the real, and not the map, whose vestiges subsist here and there, in the deserts which are no longer those of the Empire, but our own. The desert of the real itself.”

I was thinking of Baudrillard and Borges this morning when, by coincidence, YouTube served up a video of comparative map sizes in video games. Even as rendering verisimilitude has been one way to gauge the increasing realism of video games, the size of game worlds has been another. A large world provides depth and variety – a simulation of the depth and happenstance we expect in reality – that increases the immersiveness of the game.


Space exploration games like No Man’s Sky and Elite Dangerous attempt to simulate all of known space as your playing ground, while Microsoft’s Flight Simulator uses data from Bing Maps to allow you to fly over the entire earth. In each case, the increased size is achieved by sacrificing detail. But this setback is temporary, and over time we will be able to match the extent of these simulations with detail as well, until the difference between the real and the model of the real is negligible.


One of the key difficulties with VR adoption (and to some extent a source of AR’s superiority) is the anxiety about falling that everyone experiences as they move around in virtual reality. The suspicion that there are hidden objects in the world that the VR experience does not reveal to us prevents us from being fully immersed in the game – except in the case of the highly popular horror-genre VR games, in which inspiring anxiety is a mark of success. As these movements continue both to increase the detail of our simulations of the real world – to the point of simulating the living room sofa and the kitchen cabinet – and to expand the coverage of our simulations across the world, so that no surveillable surface can escape the increasing exactness of our model, we will eventually overcome VR anxiety. At that point, we will be able to walk around in our VR goggles without ever being afraid of tripping over objects, because there will be a one-to-one correspondence between what we see and what we feel. AR and VR will be indistinguishable at that exacting point, and we will at last be able to tread upon the sands of the desert of the real.

Thank you Techorama Netherlands


At the beginning of October I was invited to deliver two sessions at Techorama Netherlands: one on Cognitive Services Custom Vision and one about the HoloLens and the Magic Leap One. This is one of the best-organized conferences I’ve been to, and the hosts and attendees were amazing. I can’t say enough good things about it.

The lineup was also great, with Scott Guthrie, Laurent Bugnion, Giorgio Sardo, Shawn Wildermuth, Pete Brown, Jeff Prosise and others. It is what is known as a first-tier tech conference. What was especially impressive is that this was also the first time Techorama Netherlands had been convened.

I want to also thank my friend Dennis Vroegop for hosting me and showing me around on my first trip to the Netherlands. He and Jasper Brekelmans took a weekday off to give me the full Amsterdam experience. It was also great to have beers with Roland Smeenk, Alexander Meijers and Joost van Schaik. I’m not sure why there is so much mixed reality talent in the Netherlands but there you go.

Is Ethical AI Ethical?

Is ethical AI ethical? This is not meant to be a self-referential question as much as a question about semantics. Do we know what we mean when we talk about ethics? And, as a corollary, can we practice ethical AI if we aren’t sure what we mean when we talk about ethics? (Whether we speak correctly about artificial intelligence is a matter we can examine later.)

Is it possible that ethics is one of those concepts we all think we understand well, and agree it is important to understand in order to lead good lives, but don’t really have a clear grasp of?

If this is the case, how do we go about practicing ethical AI? What is the purpose of it? How do we judge whether our ethical standards regarding AI are sufficient or effective? What would effective AI ethics look like? And is the question of ethical AI one of those problems we need to develop AI in order to solve?

Toward an ethical AI, here are some passages I consider important:

“Everyone will readily agree that it is of the highest importance to know whether or not we are duped by morality.” — Emmanuel Levinas, Totality and Infinity

“What is happiness?

“To crush your enemies, to see them driven before you, and to hear the lamentations of their women!” – Conan the Barbarian

“Imagine that the natural sciences were to suffer the effects of a catastrophe. A series of environmental disasters are blamed by the general public on scientists. Widespread riots occur, laboratories are burnt down, physicists are lynched, books and instruments are destroyed. Finally a Know-Nothing political movement takes power and successfully abolishes science teaching in schools and universities, imprisoning and executing the remaining scientists. Later still there is a reaction against this destructive movement and enlightened people seek to revive science, although they have largely forgotten what it was….

“The hypothesis which I wish to advance is that in the actual world which we inhabit the language of morality is in the same state of grave disorder as the language of natural science in the imaginary world which I described. What we possess, if this view is true, are the fragments of a conceptual scheme, parts of which now lack those contexts from which their significance derived.”  — Alasdair MacIntyre, After Virtue

“A great-souled person, because he holds few things in high honor, is not someone who takes small risks or is passionately devoted to taking risks, but he is someone who takes great risks, and when he does take a risk he is without regard for his life, on the ground that it is not on just any terms that life is worth living.” – Aristotle, Nicomachean Ethics

“In the name of God, the Merciful, the Compassionate.

“Someone asked the eminent shaykh Abu ‘Ali b. Sina (may God the Exalted have mercy on him) the meaning of the Sufi saying, He who knows the secret of destiny is an atheist.  In reply he stated that this matter contains the utmost obscurity, and is one of those matters which may be set down only in enigmatic form and taught only in a hidden manner, on account of the corrupting effects its open declaration would have on the general public.  The basic principle concerning it is found in a Tradition of the Prophet (God bless and safeguard him): Destiny is the secret of God; do not declare the secret of God.  In another Tradition, when a man questioned the Prince of the Believers, ‘Ali (may God be pleased with him), he replied, Destiny is a deep sea; do not sail out on it.  Being asked again he replied, It is a stony path; do not walk on it.  Being asked once more he said, It is a hard ascent; do not undertake it.

“The shaykh said: Know that the secret of destiny is based upon certain premisses, such as 1) the world order, 2) the report that there is Reward and Punishment, and 3) the affirmation of the resurrection of souls.” — Avicenna, On the Secret of Destiny

“The greatest recent event—that “God is dead,” that the belief in the Christian God has ceased to be believable—is even now beginning to cast its first shadows over Europe. For the few, at least, whose eyes, whose suspicion in their eyes, is strong and sensitive enough for this spectacle, some sun seems to have set just now…” – F. Nietzsche, The Gay Science (1887)

The AI Ethics Challenge

A few years ago, convolutional neural networks (CNNs) were understood by only a handful of PhDs. Today, companies like Facebook, Google and Microsoft are snapping up AI majors from universities around the world and putting them toward efforts to consumerize AI for the masses. At the moment, tools like Microsoft’s Cognitive Services, Google Cloud Vision and WinML are placing this power in the hands of line-of-business software developers.

But with great power comes great responsibility. While being a developer even a few years ago really meant being a puzzle-solver who knew their way around a compiler (and occasionally wrote some documentation), today our new-found powers require that we also be ethicists (who occasionally write documentation). We must think through the purpose of our software and its potential misuses the way, once upon a time, we anticipated ways to test our software. In a better, future world we would have ethics harnesses for our software, methodologies for ethics-driven development, continuous automated ethics integration and so on.

Yet we don’t live in a perfect world, and we rarely think about ethics in AI beyond the specter of a robot revolution. In truth, the Singularity and the Skynet takeover (or the Cylon takeover) are straw robots that distract us from real problems. They are raised, dismissed as sci-fi fantasies, and we go on believing that AI is there to help us order pizzas and write faster Excel macros. Where’s the harm in that?

So let’s start a conversation about AI and ethics; and beyond that, ML and ethics, Mixed Reality and ethics, software consulting and ethics. Because, through a historical idiosyncrasy, it has fallen primarily on frontline software developers to start this dialog, and we should not shirk the responsibility. It is what we owe to future generations.

I propose to do this in two steps:

1. I will challenge other technical bloggers to address ethical issues in their field.  This will provide a groundwork for talking about ethics in technology, which as a rule we do not normally do on our blogs. They, in turn, will tag five additional bloggers, and so on.

2. For one week, I will add “and ethicist” to my LinkedIn profile description and challenge each of the people I tag to do the same. I understand that not everyone will be able to do this but it will serve to draw attention to the fact that “thinking ethically” today is not to be separated from our identity as “coders”, “developers” or even “hackers”. Ethics going forward is inherent in what we do.

Here are the first seven names in this ethics challenge:

I want to thank Rick Barraza and Joe Darko, in particular, for forcing me to think through the question of AI and ethics at the recent MVP Summit in Redmond. These are great times to be a software developer and these are also dark times to be a software developer. But many of us believe we have a role in making the world a better place and this starts with conversation, collegiality and a certain amount of audacity.

The Great AI Awakening


This is a crazy long but nicely comprehensive article by the New York Times on the current state of AI: The Great AI Awakening.

While lately I’ve been buried in 3D interfaces, I’m always faintly aware that 1D interfaces (Cortana Skills, Speech as a service, etc.) are another fruit of our recent machine learning breakthroughs (or, more accurately, refocus), and that the future success of holographic displays ultimately involves making them work with our 1D interfaces to create personal assistants. This article helps connect the dots between these, at first, apparently different technologies.

It also nicely complements Memo Akten’s Medium posts on Deep Learning and Art, which Microsoft resident genius Rick Barraza pointed me to a while back:

Part 1: The Dawn of Deep Learning

Part 2: Algorithmic Decision Making, Machine Bias, Creativity and Diversity

There’s also a nice throwaway reference in the Times article to the relationship between VR and machine learning, which is a little less obscure if you already know Baudrillard’s Simulacra and Simulation, which in turn depends on Jorge Luis Borges’s very short story On Exactitude in Science.

If you really haven’t the time though, which I suspect may be the case, here are some quick excerpts starting with Google’s AI efforts:

Google’s decision to reorganize itself around A.I. was the first major manifestation of what has become an industrywide machine-learning delirium. Over the past four years, six companies in particular — Google, Facebook, Apple, Amazon, Microsoft and the Chinese firm Baidu — have touched off an arms race for A.I. talent, particularly within universities. Corporate promises of resources and freedom have thinned out top academic departments. It has become widely known in Silicon Valley that Mark Zuckerberg, chief executive of Facebook, personally oversees, with phone calls and video-chat blandishments, his company’s overtures to the most desirable graduate students. Starting salaries of seven figures are not unheard-of. Attendance at the field’s most important academic conference has nearly quadrupled. What is at stake is not just one more piecemeal innovation but control over what very well could represent an entirely new computational platform: pervasive, ambient artificial intelligence.

 

When he has an opportunity to make careful distinctions, Pichai differentiates between the current applications of A.I. and the ultimate goal of “artificial general intelligence.” Artificial general intelligence will not involve dutiful adherence to explicit instructions, but instead will demonstrate a facility with the implicit, the interpretive. It will be a general tool, designed for general purposes in a general context. Pichai believes his company’s future depends on something like this. Imagine if you could tell Google Maps, “I’d like to go to the airport, but I need to stop off on the way to buy a present for my nephew.” A more generally intelligent version of that service — a ubiquitous assistant, of the sort that Scarlett Johansson memorably disembodied three years ago in the Spike Jonze film “Her”— would know all sorts of things that, say, a close friend or an earnest intern might know: your nephew’s age, and how much you ordinarily like to spend on gifts for children, and where to find an open store. But a truly intelligent Maps could also conceivably know all sorts of things a close friend wouldn’t, like what has only recently come into fashion among preschoolers in your nephew’s school — or more important, what its users actually want. If an intelligent machine were able to discern some intricate if murky regularity in data about what we have done in the past, it might be able to extrapolate about our subsequent desires, even if we don’t entirely know them ourselves.

 

The new wave of A.I.-enhanced assistants — Apple’s Siri, Facebook’s M, Amazon’s Echo — are all creatures of machine learning, built with similar intentions. The corporate dreams for machine learning, however, aren’t exhausted by the goal of consumer clairvoyance. A medical-imaging subsidiary of Samsung announced this year that its new ultrasound devices could detect breast cancer. Management consultants are falling all over themselves to prep executives for the widening industrial applications of computers that program themselves. DeepMind, a 2014 Google acquisition, defeated the reigning human grandmaster of the ancient board game Go, despite predictions that such an achievement would take another 10 years.

Jabberwocky

 

Download SAPISophiaDemo.zip – 2,867.5 KB

 

Following on the tail of the project I have been working on for the past month, a chatterbox (also called a chatbot) with speech recognition and text-to-speech functionality, I came across the following excerpted article in The Economist, available here if you happen to be a subscriber, and here if you are not:

 

Chatbots have already been used by some companies to provide customer support online via typed conversations. Their understanding of natural language is somewhat limited, but they can answer basic queries. Mr Carpenter wants to combine the flexibility of chatbots with the voice-driven “interactive voice-response” systems used in many call centres to create a chatbot that can hold spoken conversations with callers, at least within a limited field of expertise such as car insurance.

This is an ambitious goal, but Mr Carpenter has the right credentials: he is the winner of the two most recent Loebner prizes, awarded in an annual competition in which human judges try to distinguish between other humans and chatbots in a series of typed conversations. His chatbot, called Jabberwacky, has been trained by analysing over 10m typed conversations held online with visitors to its website (see jabberwacky.com). But for a chatbot to pass itself off as a human agent, more than ten times this number of conversations will be needed, says Mr Carpenter. And where better to get a large volume of conversations to analyse than from a call centre?

Mr Carpenter is now working with a large Japanese call-centre company to develop a chatbot operator. Initially he is using transcripts of conversations to train his software, but once it is able to handle queries reliably, he plans to add speech-recognition and speech-synthesis systems to handle the input and output. Since call-centre conversations tend to be about very specific subjects, this is a far less daunting task than creating a system able to hold arbitrary conversations.

 

Jabberwacky is a slightly different beast from the AIML infrastructure I used in my project.  Jabberwacky is a heuristics-based technology that learns from the conversations it has held, whereas AIML is a design-based one that requires somebody to anticipate user interactions and script them in advance.
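
The contrast can be sketched in a few lines of Python. This is only a toy illustration of the two approaches, not Jabberwacky’s or any AIML engine’s actual machinery, and all the phrases are invented:

```python
# Design-based (AIML-style): an author anticipated this input and
# scripted the reply ahead of time.
SCRIPTED_RULES = {
    "hello": "Hi there! What would you like to talk about?",
    "what is your name": "My name is Sophia.",
}

def scripted_reply(utterance):
    """Look the utterance up in the hand-written script."""
    return SCRIPTED_RULES.get(utterance.lower().strip("?!. "),
                              "I don't have a script for that.")

class HeuristicBot:
    """Heuristic (Jabberwacky-style): learn replies by remembering what
    people said after similar utterances in past conversations."""
    def __init__(self):
        self.memory = {}  # utterance -> list of replies observed after it

    def observe(self, utterance, reply):
        self.memory.setdefault(utterance.lower(), []).append(reply)

    def reply(self, utterance):
        # Pick the remembered prompt that shares the most words,
        # then echo back what a human once said in response to it.
        words = set(utterance.lower().split())
        best = max(self.memory, default=None,
                   key=lambda u: len(words & set(u.split())))
        return self.memory[best][0] if best else "Tell me more."
```

The scripted bot fails the moment a user strays from the script; the heuristic bot degrades more gracefully but needs a large corpus of conversations to sound sensible, which is why Mr Carpenter wants call-centre transcripts.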

All the same, it is a pleasant experience to find that one is serendipitously au courant, when one’s intent was to be merely affably retro.

SophiaBot: What I’ve been working on for the past month…

I have been busy in my basement constructing a robot with which I can have conversations and play games.  Except that the robot is more of a program, and I didn’t build the whole thing up from scratch, but instead cobbled together pieces that other people have created.  I took an Eliza-style interpreter written by Nicholas H. Tollervey (this is the conversation part) along with some scripted dialogs by Dr. Richard S. Wallace and threw it together with a Z-machine program written by Jason Follas, which allows my bot to play old Infocom games like Zork and The Hitchhiker’s Guide to the Galaxy.  I then wrapped these up in a simple workflow and added some new Vista\.NET 3.0 speech recognition and speech synthesis code so the robot can understand me.

I wrote an article about it for CodeProject, a very nice resource that allows developers from around the world to share their code and network.  The site requires registration to download code however, so if you want to play with the demo or look at the source code, you can also download them from this site.

Mr. Tollervey has a succinct article about the relationship between chatterboxes and John Searle’s Chinese Room problem, which spares me the responsibility of discussing it here.

Instead, I’ll just add some quick instructions:

 

The application is made up of a text output screen, a text entry field, and a default enter button. The initial look and feel is that of an IBM XT theme (the first computer I ever played on). This can be changed using voice commands, which I will cover later. There are three menus initially available. The File menu allows the user to save a log of the conversation as a text file. The Select Voice menu allows the user to select from any of the synthetic voices installed on her machine. Vista initially comes with “Anna”. Windows XP comes with “Sam”. Other XP voices are available depending on which versions of Office have been installed over the lifetime of that particular instance of the OS. If the user is running Vista, then the Speech menu will allow him to toggle speech synthesis, dictation, and the context-free grammars. By doing so, the user will have the ability to speak to the application, as well as have the application speak back to him. If the user is running XP, then only speech synthesis is available, since some of the features provided by .NET 3.0 and consumed by this application do not work on XP.

The appearance menu will let you change the look and feel of the text screen.  I’ve also added some pre-made themes at the bottom of the appearance menu.  If, after chatting with SophiaBot for a while, you want to play a game, just type or say “Play game.”  SophiaBot will present you with a list of the games available. You can add more, actually, simply by dropping additional game files you find on the internet into the Program Files\Imaginative Universal\SophiaBot\Game Data\DATA folder (Jason’s Z-Machine implementation plays games that use version 3 and below of the game engine; I’m looking, rather lazily, into how to support later versions).  You can go here to download more Zork-type games.  During a game, type or say “Quit” to end your session. “Save” and “Restore” keep track of your current position in the game, so you can come back later and pick up where you left off.

Speech recognition in Vista has two modes: dictation and context-free recognition. Dictation uses context, that is, an analysis of preceding words and words following a given target of speech recognition, in order to determine what word was intended by the speaker. Context-free speech recognition, by way of contrast, uses exact matches and some simple patterns in order to determine if certain words or phrases have been uttered. This makes context-free recognition particularly suited to command and control scenarios, while dictation is particularly suited to situations where we are simply attempting to translate the user’s utterances into text.
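
The distinction can be sketched roughly as follows. The scoring here is simple word overlap, standing in for the acoustic and language models a real recognition engine would use, and the grammar phrases and rejection threshold are invented for illustration:

```python
# Context-free mode: the recognizer only has to decide which of a small,
# closed set of expected phrases the user most plausibly said. This is
# why command grammars are far more robust than open dictation.
COMMAND_GRAMMAR = [
    "list colors",
    "list games",
    "list fonts",
    "play game",
]

def recognize_command(heard_words):
    """Snap a noisy word hypothesis to the best-matching grammar phrase,
    or reject the utterance if nothing matches well enough."""
    heard = set(w.lower() for w in heard_words)
    def score(phrase):
        words = phrase.split()
        return len(heard & set(words)) / len(words)
    best = max(COMMAND_GRAMMAR, key=score)
    return best if score(best) > 0.5 else None

def recognize_dictation(heard_words):
    """Dictation mode: no grammar to lean on; every hypothesis is
    accepted verbatim and passed downstream as free text."""
    return " ".join(heard_words)
```

Given the hypothesis ["list", "the", "games"], the command recognizer snaps to "list games", while dictation would pass the raw phrase through untouched.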

You should begin by trying to start up a conversation with Sophia using the textbox, just to see how it works, as well as her limitations as a conversationalist. Sophia uses certain tricks to appear more lifelike. She throws out random typos, for one thing. She also is a bit slower than a computer should really be. This is because one of the things that distinguish computers from people is the way they process information — computers do it quickly, and people do it at a more leisurely pace. By typing slowly, Sophia helps the user maintain his suspension of disbelief. Finally, if a text-to-speech engine is installed on your computer, Sophia reads along as she types out her responses. I’m not certain why this is effective, but it is how computer terminals are shown to communicate in the movies, and it seems to work well here, also. I will go over how this illusion is created below.
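
The typing illusion amounts to little more than a delay loop with occasional deliberate mistakes. Something like the following generator captures the trick; this is a sketch of the idea, not SophiaBot’s actual code, and the rates are invented:

```python
import random
import time

def typewriter(text, wpm=90, typo_rate=0.02, seed=None):
    """Emit a response one character at a time, pausing between
    keystrokes and occasionally 'mistyping' a letter and backspacing
    over it, so the bot types at a plausibly human, fallible pace."""
    rng = random.Random(seed)
    delay = 60.0 / (wpm * 5)  # roughly five characters per word
    for ch in text:
        if ch.isalpha() and rng.random() < typo_rate:
            yield chr(ord("a") + rng.randrange(26))  # the typo appears...
            yield "\b"                               # ...and is backspaced away
        yield ch
        time.sleep(delay)
```

Printing each yielded character immediately (flushing stdout as you go) reproduces the movie-terminal effect, and pairing it with the speech synthesizer gives the read-along behavior described above.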

In Command\AIML\Game Lexicon mode, the application generates several grammar rules that help direct speech recognition toward certain expected results. Be forewarned: initially loading the AIML grammars takes about two minutes, and occurs in the background. You can continue to touch type conversations with Sophia until the speech recognition engine has finished loading the grammars and speech recognition is available. Using the command grammar, the user can make the computer do the following things: LIST COLORS, LIST GAMES, LIST FONTS, CHANGE FONT TO…, CHANGE FONT COLOR TO…, CHANGE BACKGROUND COLOR TO…. Besides the IBM XT color scheme, a black papyrus font on a linen background also looks very nice. To see a complete list of keywords used by the text-adventure game you have chosen, say “LIST GAME KEYWORDS.” When the game is initially selected, a new set of rules is created based on different two word combinations of the keywords recognized by the game, in order to help speech recognition by narrowing down the total number of phrases it must look for.
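
Generating those two-word rules is a simple combinatorial step. In Python it might look like this; the keywords are invented, and the real application builds speech-engine grammar rules rather than plain strings:

```python
from itertools import permutations

def build_game_grammar(keywords):
    """Expand a game's keyword list into the two-word phrases
    ('take lamp', 'open mailbox', ...) the recognizer should listen
    for, narrowing its search space compared to open dictation."""
    return sorted(f"{a} {b}" for a, b in permutations(keywords, 2))
```

For n keywords this yields n*(n-1) candidate phrases, a tiny search space next to an unconstrained dictation vocabulary, which is exactly what makes recognition more reliable in game mode.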

In dictation mode, the underlying speech engine simply converts your speech into words and has the core SophiaBot code process it in the same manner that it processes text that is typed in. Dictation mode is sometimes better than context-free mode for non-game speech recognition, depending on how well the speech recognition engine installed on your OS has been trained to understand your speech patterns. Context-free mode is typically better for game mode. Command and control only works in context-free mode.

Do Computers Read Electric Books?

In the comments section of a blog I like to frequent, I have been pointed to an article in the International Herald Tribune about Pierre Bayard’s new book, How to Talk About Books You Haven’t Read.

Bayard recommends strategies such as abstractly praising the book, offering silent empathy regarding someone else’s love for the book, discussing other books related to the book in question, and finally simply talking about oneself.  Additionally, one can usually glean enough information from reviews, book jackets and gossip to sustain the discussion for quite a while.

Students, he noted from experience, are skilled at opining about books they have not read, building on elements he may have provided them in a lecture. This approach can also work in the more exposed arena of social gatherings: the book’s cover, reviews and other public reaction to it, gossip about the author and even the ongoing conversation can all provide food for sounding informed.

I’ve recently been looking through some AI experiments built on language scripts, based on the 1966 software program Eliza, which used a small script of canned questions to maintain a conversation with computer users.  You can play a web version of Eliza here, if you wish.  It should be pointed out that the principles behind Eliza are the same as those that underpin the famous Turing Test.  Turing proposed answering the question “Can machines think?” by staging an ongoing experiment to see if machines can imitate thinking.  The proposal was made in his 1950 paper Computing Machinery and Intelligence:
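
Eliza’s trick is small enough to sketch in a dozen lines of Python: match the user’s sentence against a script of patterns, reflect the pronouns, and fall back to a canned prompt. This is an illustration of the principle, not Weizenbaum’s original DOCTOR script, and the rules here are invented:

```python
import re

# A minimal Eliza-style script: a pronoun-reflection table plus a few
# pattern/template pairs, with a canned fallback when nothing matches.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
]
FALLBACK = "Please tell me more."

def reflect(fragment):
    """Swap first-person words for second-person ones ('my job' -> 'your job')."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def eliza(utterance):
    """Return the scripted response for the first matching pattern."""
    for pattern, template in RULES:
        m = pattern.match(utterance.strip())
        if m:
            return template.format(reflect(m.group(1)))
    return FALLBACK
```

Told “I feel sad about my job,” this script responds “Why do you feel sad about your job?”, which is the whole of the illusion: no understanding, just pattern capture and pronoun reflection.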

The new form of the problem can be described in terms of a game which we call the ‘imitation game.” It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either “X is A and Y is B” or “X is B and Y is A.” The interrogator is allowed to put questions to A and B thus:

C: Will X please tell me the length of his or her hair?

Now suppose X is actually A, then A must answer. It is A’s object in the game to try and cause C to make the wrong identification. His answer might therefore be:

“My hair is shingled, and the longest strands are about nine inches long.”

In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as “I am the woman, don’t listen to him!” to her answers, but it will avail nothing as the man can make similar remarks.

We now ask the question, “What will happen when a machine takes the part of A in this game?” Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?”

The standard form of the current Turing experiments is something called a chatterbox application.  Chatterboxes abstract the mechanism for generating dialog from the dialog scripts themselves by utilizing a set of rules written in a common format.  The most popular format happens to be an XML standard called AIML (Artificial Intelligence Markup Language).
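
A minimal AIML fragment, and the machinery needed to match against it, fits in a few lines of Python. Real AIML engines support srai recursion, context, and richer wildcards; this sketch handles only literal patterns and a single ‘*’, and the categories are invented:

```python
import re
import xml.etree.ElementTree as ET

# Two hand-written AIML categories: a pattern to match against the
# user's input and a template to reply with.
AIML = """
<aiml>
  <category>
    <pattern>HELLO</pattern>
    <template>Hi there!</template>
  </category>
  <category>
    <pattern>MY NAME IS *</pattern>
    <template>Nice to meet you.</template>
  </category>
</aiml>
"""

def load_rules(aiml_text):
    """Parse the AIML and compile each pattern, with '*' as a wildcard."""
    root = ET.fromstring(aiml_text)
    rules = []
    for cat in root.iter("category"):
        pattern = cat.findtext("pattern").strip()
        template = cat.findtext("template").strip()
        regex = re.compile("^" + re.escape(pattern).replace(r"\*", ".+") + "$")
        rules.append((regex, template))
    return rules

def respond(rules, utterance):
    """Normalize the input and return the first matching template."""
    text = utterance.upper().strip("?!. ")
    for regex, template in rules:
        if regex.match(text):
            return template
    return "I have no rule for that."
```

The abstraction is the point: the dialog lives entirely in the XML, so non-programmers can grow the script without touching the matching engine.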

What I’m interested in, at the moment, is not so much whether I can write a script that will fool people into thinking they are talking with a real person, but rather whether I can write a script that makes small talk by discussing the latest book.  If I can do this, it should validate Pierre Bayard’s proposal, if not Alan Turing’s.