Acting Humanly: The Turing Test Approach in Artificial Intelligence

The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence. Turing defined intelligent behavior as the ability to achieve human-level performance in all cognitive tasks, sufficient to fool an interrogator. Roughly speaking, the test he proposed is that the computer should be interrogated by a human via a teletype, and it passes the test if the interrogator cannot tell whether there is a computer or a human at the other end. Chapter 26 discusses the details of the test, and whether or not a computer is really intelligent if it passes. For now, programming a computer to pass the test provides plenty to work on. The computer would need to possess the following capabilities:

- natural language processing to enable it to communicate successfully in English (or some other human language);
- knowledge representation to store information provided before or during the interrogation;
- automated reasoning to use the stored information to answer questions and to draw new conclusions;
- machine learning to adapt to new circumstances and to detect and extrapolate patterns.
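
A minimal sketch of how these four capabilities might be wired together in a Turing-Test-style chat loop is shown below. This is not code from the text: every class and method name here (KnowledgeBase, ConversationalAgent, tell, ask, parse, respond) is a hypothetical illustration, and each capability is reduced to a trivial stub.

```python
# Toy sketch only: each of the four capabilities above appears as a stubbed
# component. Names and interfaces are illustrative assumptions, not a real API.

class KnowledgeBase:
    """Knowledge representation: stores facts provided before or during the interrogation."""

    def __init__(self):
        self.facts = set()

    def tell(self, fact):
        # Store a fact supplied beforehand or during the dialogue.
        self.facts.add(fact)

    def ask(self, query):
        # Automated reasoning would go here; this stub only checks stored facts verbatim.
        return query in self.facts


class ConversationalAgent:
    """Ties the capabilities together for an imitation-game dialogue."""

    def __init__(self, kb):
        self.kb = kb        # knowledge representation plus (stubbed) reasoning
        self.history = []   # raw material a machine learning component could adapt from

    def parse(self, utterance):
        # Natural language processing, reduced here to trivial normalization.
        return utterance.strip().lower()

    def respond(self, utterance):
        query = self.parse(utterance)
        self.history.append(query)      # "learning" is just remembering in this stub
        if self.kb.ask(query):          # answer from stored knowledge
            return "Yes, I believe that is true."
        return "I'm not sure; tell me more."


if __name__ == "__main__":
    kb = KnowledgeBase()
    kb.tell("the sky is blue")
    agent = ConversationalAgent(kb)
    print(agent.respond("The sky is blue"))   # -> Yes, I believe that is true.
    print(agent.respond("Do you dream?"))     # -> I'm not sure; tell me more.
```

A real entrant would replace each stub with a far more capable component, but the decomposition mirrors the four capabilities listed above.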

Turing's test deliberately avoided direct physical interaction between the interrogator and the computer, because physical simulation of a person is unnecessary for intelligence. However, the so-called total Turing Test includes a video signal so that the interrogator can test the subject's perceptual abilities, as well as the opportunity for the interrogator to pass physical objects "through the hatch." To pass the total Turing Test, the computer will need:

- computer vision to perceive objects, and
- robotics to move them about.

Within AI, there has not been a big effort to try to pass the Turing Test. The issue of acting like a human comes up primarily when AI programs have to interact with people, as when an expert system explains how it came to its diagnosis, or a natural language processing system has a dialogue with a user. These programs must behave according to certain normal conventions of human interaction in order to make themselves understood. The underlying representation and reasoning in such a system may or may not be based on a human model.

