The Chinese Room thought experiment was posed by John Searle almost 30 years ago and has been a lightning rod for discussions about theories of consciousness and AI ever since.
For those unfamiliar with it, the argument is not against the notion that machines in general can think – Searle believes that minds are built on biological machines, after all – but rather against certain projects in AI that use computational theories to explain consciousness. Searle’s argument is that computational models are a dead end and that thinking machines must be investigated in a different (apparently “biological”) way.
Of course, if biology can itself be reduced to a computational model (for instance), then Searle’s argument may be applicable to all machines, and we will have to search for consciousness elsewhere.
Here’s the crux of the argument, from the SEP entry:
“The heart of the argument is an imagined human simulation of a computer, similar to Turing’s Paper Machine. The human in the Chinese Room follows English instructions for manipulating Chinese symbols, where a computer “follows” a program written in a computing language. The human produces the appearance of understanding Chinese by following the symbol manipulating instructions, but does not thereby come to understand Chinese. Since a computer just does what the human does—manipulate symbols on the basis of their syntax alone—no computer, merely by following a program, comes to genuinely understand Chinese.”
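The “symbol manipulation on the basis of syntax alone” that the quote describes can be made concrete with a toy sketch. The rule book and symbols below are invented for illustration – the point is only that the program maps input strings to output strings by lookup, with no representation of meaning anywhere:

```python
# A minimal sketch of purely syntactic symbol manipulation, in the spirit
# of the Chinese Room. The rules and symbols are hypothetical; a real
# conversational rule book would be vastly larger, but no less syntactic.

RULE_BOOK = {
    "你好": "你好吗",   # "if you receive this string, emit that one"
    "谢谢": "不客气",
}

def room(symbols: str) -> str:
    """Return whatever the rule book dictates; understanding never enters."""
    # dict.get with a default plays the role of the "otherwise" rule.
    return RULE_BOOK.get(symbols, "听不懂")

print(room("你好"))  # produces a fluent-looking reply by pure lookup
```

From the outside, the exchange can look like understanding; inside, there is only string matching – which is exactly the intuition Searle’s argument trades on.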
If this sort of problem excites you, as it does me, then you may want to examine some of the articles about and around consciousness collected on David Chalmers’ website: http://consc.net/online.