Do computers think?


The online Stanford Encyclopedia of Philosophy has just published David Cole’s update to the entry on The Chinese Room Argument.

The thought problem was posed by John Searle almost 30 years ago and has been a lightning rod for discussions about theories of consciousness and AI ever since.

For those unfamiliar with it, the argument is not against the notion that machines in general can think – Searle believes that minds are built on biological machines, after all – but rather against certain projects in AI that try to explain consciousness with computational theories. Searle’s argument is that computational models are a dead end and that thinking machines must be investigated in a different (apparently “biological”) way.

Of course, if biology can be reduced to the computational model (for instance) then Searle’s argument may be applicable to all machines and we will have to search for consciousness elsewhere.

Here’s the crux of the argument, from the SEP entry:

“The heart of the argument is an imagined human simulation of a computer, similar to Turing’s Paper Machine. The human in the Chinese Room follows English instructions for manipulating Chinese symbols, where a computer “follows” a program written in a computing language. The human produces the appearance of understanding Chinese by following the symbol manipulating instructions, but does not thereby come to understand Chinese. Since a computer just does what the human does—manipulate symbols on the basis of their syntax alone—no computer, merely by following a program, comes to genuinely understand Chinese.”
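To make the phrase “manipulate symbols on the basis of their syntax alone” concrete, here is a minimal sketch (not from the post or the SEP entry) of a rule-following “room”: a lookup table pairs input symbol strings with output symbol strings, and the rules and responses are entirely made up for illustration. The program produces Chinese-looking replies without any representation of what the symbols mean, which is exactly the gap Searle’s argument turns on.

```python
# Toy illustration of purely syntactic symbol manipulation.
# The rule book below is hypothetical: it pairs input shapes with output
# shapes, and nothing in the program encodes what any symbol means.

RULE_BOOK = {
    "你好吗": "我很好",      # rule matches shapes to shapes, not meanings
    "你是谁": "我是一个房间",
}

def chinese_room(input_symbols: str) -> str:
    """Return whatever the rule book dictates; echo the input if no rule matches."""
    return RULE_BOOK.get(input_symbols, input_symbols)

if __name__ == "__main__":
    # Fluent-looking output, zero understanding on the part of the program.
    print(chinese_room("你好吗"))
```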

If this sort of problem excites you, as it does me, then you may want to examine some of the articles about and around consciousness collected on David Chalmers’ website: http://consc.net/online .

4 thoughts on “Do computers think?”

  1. I have long been a fan of the Chinese room argument. What I think is particularly interesting is the fact that Searle essentially presents a deconstruction of the epistemological foundation of cognitive science, even though he himself feels the deconstruction, and its founder, are a farce. Maybe old Derrida wasn’t so crazy after all, even if his books are a huge pain in the ass to read.

  2. I missed the class on Searle but managed to take an epistemology course during which we read Derrida. I somehow squeezed a five or ten page paper out of that read, but I didn’t take too kindly to feeling like everything I thought of as a foundation to my understanding of philosophy had been torn asunder.

    The Chinese Room argument is interesting, though. As long as I remain the one telling the computer what to do (rather than the computer telling me what to do), I think I’ll be fine :-).

  3. Derrida is indeed rather difficult to follow. It has always struck me as peculiar that he was much more popular (and taken rather more seriously) in the U.S. than he was in France.

    The Chinese Room argument has the virtue of being both succinct and easily comprehensible. Like Nagel’s ‘What Is It Like to Be a Bat?’ and Frank Jackson’s ‘What Mary Didn’t Know’, however, it is a bit like a setup for a joke to which we never get the punchline.

    Perhaps that’s Derrida’s role in philosophical discourse — to provide us with the punchlines.

    James

  4. I think computers do think.

    They are always wrong when they don’t perform and I always blame them when things go wrong… but I do know that they can’t really be wrong

    Sally J Small
