THE STRUCTURE OF INTELLIGENT AGENTS
So far we have talked about agents by describing their behavior—the action that is performed
after any given sequence of percepts. Now, we will have to bite the bullet and talk about how
the insides work. The job of AI is to design the agent program: a function that implements
the agent mapping from percepts to actions. We assume this program will run on some sort of
computing device, which we will call the architecture. Obviously, the program we choose has to be
one that the architecture will accept and run. The architecture might be a plain computer, or
it might include special-purpose hardware for certain tasks, such as processing camera images or
filtering audio input. It might also include software that provides a degree of insulation between
the raw computer and the agent program, so that we can program at a higher level. In general,
the architecture makes the percepts from the sensors available to the program, runs the program,
and feeds the program's action choices to the effectors as they are generated.
The relationship
among agents, architectures, and programs can be summed up as follows:
agent = architecture + program
Most of this book is about designing agent programs, although Chapters 24 and 25 deal directly
with the architecture.
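As a rough illustration of this equation, the following Python sketch shows one way an agent program and the architecture's main loop might be organized. The class and function names, and the callable sensors and effectors, are stand-ins chosen for this sketch rather than anything prescribed by the text.

# A minimal sketch of "agent = architecture + program" (all names illustrative).

class AgentProgram:
    """Maps each new percept, given the remembered percept history, to an action."""
    def __init__(self):
        self.memory = []                     # internal record of past percepts

    def __call__(self, percept):
        self.memory.append(percept)          # fold the new percept into memory
        return self.decide(self.memory)      # choose an action from what is known

    def decide(self, memory):
        raise NotImplementedError            # supplied by a concrete agent design


def run_architecture(program, sensors, effectors, steps=100):
    """The architecture: read the sensors, run the program, drive the effectors."""
    for _ in range(steps):
        percept = sensors()                  # make the percept available to the program
        action = program(percept)            # run the agent program on it
        effectors(action)                    # feed the chosen action to the effectors

The loop mirrors the description above: the architecture supplies percepts to the program, runs it, and passes the program's action choices on to the effectors.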
Before we design an agent program, we must have a pretty good idea of the possible
percepts and actions, what goals or performance measure the agent is supposed to achieve, and
what sort of environment it will operate in.
These come in a wide variety. Figure 2.3 shows the
basic elements for a selection of agent types.
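Before looking at particular designs, it can help to see how such a specification might be written down. The sketch below records the four elements for a medical diagnosis system in a small Python structure; the field names and the sample entries are illustrative rather than a transcription of Figure 2.3.

# Recording an agent type's basic elements before writing its program.
# The dataclass and the sample entries are illustrative, not copied from Figure 2.3.
from dataclasses import dataclass

@dataclass
class AgentType:
    percepts: list        # what the agent can sense
    actions: list         # what it can do
    goals: str            # the performance measure it should achieve
    environment: str      # where it will operate

medical_diagnosis = AgentType(
    percepts=["symptoms", "test findings", "patient's answers"],
    actions=["ask questions", "request tests", "suggest treatments"],
    goals="healthy patient, minimized costs",
    environment="patient and hospital",
)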
It may come as a surprise to some readers that we include in our list of agent types some
programs that seem to operate in the entirely artificial environment defined by keyboard input
and character output on a screen.
"Surely," one might say, "this is not a real environment, is
it?" In fact, what matters is not the distinction between "real" and "artificial" environments,
but the complexity of the relationship among the behavior of the agent, the percept sequence
generated by the environment, and the goals that the agent is supposed to achieve. Some "real"
environments are actually quite simple. For example, a robot designed to inspect parts as they
come by on a conveyer belt can make use of a number of simplifying assumptions: that the
lighting is always just so, that the only thing on the conveyer belt will be parts of a certain kind,
and that there are only two actions—accept the part or mark it as a reject.
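Under those assumptions the agent program itself can be almost trivial; the sketch below reduces the decision to a single threshold test on the image. The grayscale encoding, the dark-pixel rule, and the particular threshold are assumptions made for the sketch, not part of any real inspection system.

# Sketch of the conveyer-belt inspection agent described above.
# Assumes the percept is a grayscale image given as rows of pixel values (0-255)
# and that a defective part shows up as an unusually dark region.

DEFECT_THRESHOLD = 0.15     # fraction of dark pixels above which the part is rejected

def inspect(image):
    """Return 'accept' or 'reject' -- the only two actions available."""
    pixels = [p for row in image for p in row]
    dark_fraction = sum(1 for p in pixels if p < 50) / len(pixels)
    return "reject" if dark_fraction > DEFECT_THRESHOLD else "accept"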
In contrast, some software agents (or software robots or softbots) exist in rich, unlimited
domains.
Imagine a softbot designed to fly a flight simulator for a 747. The simulator is a
very detailed, complex environment, and the software agent must choose from a wide variety of
actions in real time. Or imagine a softbot designed to scan online news sources and show the
interesting items to its customers. To do well, it will need some natural language processing
abilities, it will need to learn what each customer is interested in, and it will need to dynamically
change its plans when, for example, the connection for one news source crashes or a new one
comes online.
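The outer loop of such a softbot might look something like the sketch below, where the natural language processing and the learned model of each customer's interests are hidden behind an interest_score function supplied from outside; the source objects, their fetch method, and the failure handling are all assumptions of the sketch.

# Sketch of the outer loop of a news-filtering softbot (all names illustrative).

def news_softbot(sources, interest_score, show, min_score=0.5):
    """Poll each source, skip the ones that are down, and show interesting items."""
    for source in sources:
        try:
            items = source.fetch()               # may fail if the connection crashes
        except ConnectionError:
            continue                             # replan: skip this source for now
        for item in items:
            if interest_score(item) >= min_score:  # learned, per-customer interest model
                show(item)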
Some environments blur the distinction between "real" and "artificial."
In the ALIVE
environment (Maes et al., 1994), software agents are given as percepts a digitized camera image
of a room where a human walks about. The agent processes the camera image and chooses an
action. The environment also displays the camera image on a large display screen that the human
can watch, and superimposes on the image a computer graphics rendering of the software agent.
One such agent appears as a cartoon dog, which has been programmed to move toward the human
(unless the human points to send the dog away) and to shake hands or jump up eagerly when the
human makes certain gestures.
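The dog agent's behavior, as described, amounts to a mapping from recognized gestures to a handful of actions. The sketch below is only a schematic rendering of that mapping, not the actual ALIVE implementation, and the gesture labels are invented for illustration.

# Illustrative sketch only -- not the ALIVE system's code.  The percept is assumed
# to have already been reduced, by vision machinery not shown, to a gesture label.

def dog_action(gesture, human_visible=True):
    """Map a recognized human gesture to one of the cartoon dog's behaviors."""
    if gesture == "point away":
        return "retreat"                 # the human sends the dog away
    if gesture in ("offer hand", "crouch down"):
        return "shake hands"
    if gesture == "wave":
        return "jump up eagerly"
    return "approach human" if human_visible else "wander"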