In 1950 Alan Turing, the famous WWII code-breaker and computer pioneer, proposed his test of whether computers can think: if you can't tell the difference between a computer responding to you and a person, the computer can think. Searle's "Chinese Room" thought experiment is a well-known response to the Turing Test.
I've always thought that the question "Can people think?" is a more relevant one. And what is thinking anyway? Much of what passes for human thought is emotion, application of old mental templates, and intuition.
As if to prove this point, there are now Artificial Intelligence gurus who take positions on AI based on ideology. Here is a quote from a piece by Mark Halpern:
In discussing the “system” argument against his Chinese Room thought experiment, Searle once said, “It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible.” The AI champions, in their desperate struggle to salvage the idea that computers can or will think, are indeed in the grip of an ideology: they are, as they see it, defending rationality itself. If it is denied that computers can, even in principle, think, then a claim is being tacitly made that humans have some special property that science will never understand—a “soul” or some similarly mystical entity. This is of course unacceptable to many scientists.
In the deepest sense, the AI champions see their critics as trying to reverse the triumph of the Enlightenment, with its promise that man’s mind can understand everything, and as retreating to an obscurantist, religious outlook on the world. They see humanity as having to choose, right now, between accepting the possibility, if not the actual existence, of thinking machines and sinking back into the Dark Ages.
Read the entire piece, a lengthy but solid update on AI, in The New Atlantis.