In the comments section of a blog I like to frequent, I was pointed to an article in the International Herald Tribune about Pierre Bayard’s new book, How to Talk About Books You Haven’t Read.
Bayard recommends strategies such as abstractly praising the book, offering silent empathy regarding someone else’s love for the book, discussing other books related to the book in question, and finally simply talking about oneself. Additionally, one can usually glean enough information from reviews, book jackets and gossip to sustain the discussion for quite a while.
Students, he noted from experience, are skilled at opining about books they have not read, building on elements he may have provided them in a lecture. This approach can also work in the more exposed arena of social gatherings: the book’s cover, reviews and other public reaction to it, gossip about the author and even the ongoing conversation can all provide food for sounding informed.
I’ve recently been looking through some AI experiments built on language scripts, based on the 1966 software program Eliza, which used a small script of canned questions to maintain a conversation with computer users. You can play a web version of Eliza here, if you wish. It should be pointed out that the principles behind Eliza are the same as those that underpin the famous Turing Test. Turing proposed answering the question “Can machines think?” by staging an ongoing experiment to see if machines can imitate thinking. The proposal was made in his 1950 paper “Computing Machinery and Intelligence”:
The new form of the problem can be described in terms of a game which we call the “imitation game.” It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either “X is A and Y is B” or “X is B and Y is A.” The interrogator is allowed to put questions to A and B thus:
C: Will X please tell me the length of his or her hair?
Now suppose X is actually A, then A must answer. It is A’s object in the game to try and cause C to make the wrong identification. His answer might therefore be:
“My hair is shingled, and the longest strands are about nine inches long.”
In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as “I am the woman, don’t listen to him!” to her answers, but it will avail nothing as the man can make similar remarks.
We now ask the question, “What will happen when a machine takes the part of A in this game?” Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?”
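Eliza’s canned-script mechanism, mentioned above, is simple enough to sketch in a few lines. The following is a hypothetical miniature in Python, not Weizenbaum’s original script: each rule pairs a pattern with a response template, and matched fragments are echoed back with pronouns “reflected” to sustain the illusion of attention.

```python
import re

# Minimal word-level pronoun reflection, so "I am" echoes back as "you are".
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Each rule: a regex to match the user's input, and a canned response
# template into which the reflected captured fragment is substituted.
# These patterns and responses are illustrative, not Eliza's actual script.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*) mother(.*)", re.I), "Tell me more about your family."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(line: str) -> str:
    """Return the first matching rule's response, or a stock fallback."""
    for pattern, template in RULES:
        m = pattern.match(line)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."  # default when nothing in the script matches

print(respond("I need a vacation"))   # Why do you need a vacation?
print(respond("I am feeling tired"))  # How long have you been feeling tired?
```

The fallback line is the real trick: when the script has nothing, a neutral prompt keeps the user talking, which is also the trick Bayard describes.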
The standard form of the current Turing experiments is something called a chatterbot application. Chatterbots abstract the mechanism for generating dialog from the dialog scripts themselves by utilizing a set of rules written in a common format. The most popular format happens to be an XML-based standard called AIML (Artificial Intelligence Markup Language).
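An AIML rule, called a category, pairs a pattern against which user input is matched with a template that generates the reply. A minimal illustrative example (the book title here is my own placeholder) might look like this:

```xml
<aiml version="1.0.1">
  <!-- One category: match the pattern, emit the template. -->
  <category>
    <pattern>HAVE YOU READ *</pattern>
    <template>
      I have heard a great deal about <star/>.
      What did you make of it yourself?
    </template>
  </category>
</aiml>
```

Note that the template deflects the question back to the speaker, which is exactly one of Bayard’s recommended strategies.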
What I’m interested in, at the moment, is not so much whether I can write a script that will fool people into thinking they are talking with a real person, but rather whether I can write a script that makes small talk by discussing the latest book. If I can do this, it should validate Pierre Bayard’s proposal, if not Alan Turing’s.