Turing Test/Talk

From Wikipedia


I think this bit is confusing: "The test for intelligence changes the question into whether the party answering the questions is a computer or a human." I had to read it several times to understand first that "the question" is meant to refer not to one question being asked of the computer but to a series of questions, and that they are now used to determine if the answerer is computer or human (instead of being used to determine gender). Is that what you meant or have I just misunderstood in a new way?

Also, how does this test distinguish true intelligence from recitation of part of a large list of predetermined answers? I can have someone read Magic 8-ball answers back and they sound human enough for a while. I think the test itself could use further description. :-) Sorry if this has been discussed elsewhere already. --KQ
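To make the Magic 8-ball point concrete, here is a minimal sketch (in Python, with hypothetical names) of a canned-answer responder. The replies sound superficially conversational, but because the output has no dependence on the question, any follow-up that refers back to an earlier exchange exposes the trick:

```python
import random

# Hypothetical sketch: a Magic 8-ball style responder.  It picks from a
# fixed list of canned answers and ignores the question entirely.
CANNED_ANSWERS = [
    "It is certain.",
    "Ask again later.",
    "My sources say no.",
    "Outlook good.",
]

def canned_reply(question: str) -> str:
    # The reply does not depend on the question, so the illusion of
    # understanding collapses under sustained interactive questioning.
    return random.choice(CANNED_ANSWERS)
```

This is why the interactive, open-ended nature of the imitation game matters: a sufficiently persistent interrogator can always steer the conversation somewhere a fixed answer list cannot follow.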


Much better, Axel, thanks! --KQ


The article describes and criticizes one interpretation of the Turing test. But that interpretation differs from what Turing actually said. The original paper by Turing didn't claim that this test is either necessary or sufficient for something to be "truly" intelligent. He didn't even consider what constitutes "true" intelligence to be an interesting question. What he actually claimed was:

  • Someday, machines will be able to pass the Turing test
  • Someday after that, most people will think of such machines as "intelligent". This is a prediction about human sociology and linguistics. It doesn't address whether the machines are "really" intelligent (whatever that means). It doesn't address whether people ought to consider those machines intelligent.

That first point is debatable. If the first prediction happens, then the second prediction may very well happen. Other interesting aspects of his paper include:

  • His test required the human to pretend to be the other gender
  • He defined "pass" as having a 5-minute conversation fooling a certain percentage of the population
  • He predicted both a date when this would be achieved, and the amount of memory required. The date was wrong, and the memory requirement now seems absurdly small.
  • He predicted that the best way to achieve this would be through machine learning. In the current AI community, both the symbolic and subsymbolic communities tend to agree that machine learning will be a very important part of large AI systems.

It might be useful to include some or all of this info in the article. --LC


You sound knowledgeable enough to do it. :-) I certainly am not. --KQ


Here is a quote from Turing's paper that might be of interest. It might even be worth quoting in the article: -LC

It will simplify matters for the reader if I explain first my own beliefs in the matter. Consider first the more accurate form of the question. I believe that in about fifty years' time it will be possible, to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

That's much better. Thanks. I did change half of one sentence to clarify: AI researchers generally agree machine learning is important. They don't all agree that a machine can be built to pass the Turing test. -LC


I don't have the benefit of the text in front of me, but I believe that the lack of strength in Turing's claim can be partially explained by the beginning of the paper. Turing believes that society would reject any notion of a thinking machine, and therefore proposes the test as a less emotional goal. People are still too hung up on the apparent biological components of cognition to attribute the possibility of thought to anything other than wetware.

That said, I think I'll work on a Chinese Room writeup. hello again kq--Eventi


Actually, he starts the article by explaining that the question "can machines think?" is meaningless. To mean something, you'd first have to define "think". There is no accepted definition, so you'd have to take a Gallup poll, which he says is absurd. Therefore, he creates a well-defined question instead: Can a machine fool 30% of the population in a 5-minute session of the imitation game? That's a meaningful question. The only other comment he makes on "can a machine think" is this:

The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

Turing, like many modern computer scientists, would have agreed with Dijkstra's famous quote: "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." Obviously, Searle would disagree, so I think it could be useful for you to write up the Chinese Room. -LC


I have to disagree that Turing would have agreed with Dijkstra's quote. After all, he did begin the paper with: I propose to consider the question, "Can machines think?" Since defining "think" is so hairy, he proposes another question to remove that obstacle. Even in your quote above, the implication is that humans are just not ready to accept the idea of a machine thinking.

To say that the original question is meaningless is to say that thought is meaningless. And it clearly isn't, though difficult to define. We know that we think, and can therefore to a lesser degree know that other people think. To an even lesser degree, we know that pigs, dolphins and dogs think. The degree of dissimilarity between humans and machines is so great that it's difficult (with our current sense of the word) to attribute thought to a machine.

Another quote (from the beginning of section 6)

We may now consider the ground to have been cleared and we are ready to proceed to the debate on our question, "Can machines think?" and the variant of it quoted at the end of the last section. We cannot altogether abandon the original form of the problem, for opinions will differ as to the appropriateness of the substitution and we must at least listen to what has to be said in this connexion.

I do agree that Turing made no claim about what it means if a machine were to pass the Turing Test, but the implications are clear enough.

[Deleted text moved to Chinese_Room/Talk ]

Getting back to the 30% comment – that number is subject to manipulation. One must assume Turing would have agreed to use random distributions of individuals so that there is variability in human experience, education, IQ, etc. After all, ELIZA fooled Weizenbaum's secretary – and other nontechnical staff. See [1]. <>< tbc

ELIZA brings up a topic related to the Turing Test – chatterbots such as http://www.alicebot.org/. <>< tbc
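For readers unfamiliar with how ELIZA-type chatterbots work, here is a minimal sketch (in Python, with assumed rules, not Weizenbaum's original script): match a keyword pattern, reflect pronouns back at the speaker, and fall back to a content-free prompt when nothing matches:

```python
import re

# Words to "reflect" so the bot's reply echoes the user's statement
# from the other point of view (a simplified version of ELIZA's trick).
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword rules: (pattern, response template).  Real ELIZA scripts had
# ranked keywords and many templates; two rules suffice for the sketch.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # content-free fallback when no rule matches
```

The program has no model of meaning at all, which is exactly why its success with nontechnical users (like Weizenbaum's secretary) says more about interrogator expectations than about machine intelligence.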

Good point. Also, whoever writes up the Loebner prize page ought to point out that so far, all the contestants have been ELIZA-type programs. To my knowledge, no one working on serious natural language recognition has felt ready to enter yet. -LC


Humans: God's attempt to pass the Turing test.