ARTIFICIAL INTELLIGENCE: GENETIC ALGORITHMS AND EVOLUTIONARY PROGRAMMING

Nature has a robust way of evolving successful organisms. The organisms that are ill-suited for an environment die off, whereas the ones that are fit live to reproduce. Offspring are similar to their parents, so each new generation has organisms that are similar to the fit members of the previous generation. If the environment changes slowly, the species can gradually evolve along with it, but a sudden change in the environment is likely to wipe out a species. Occasionally, random mutations occur, and although most of these mean a quick death for the mutated individual, some mutations lead to new successful species.

The publication of Darwin's On the Origin of Species by Means of Natural Selection was a major turning point in the history of science. It turns out that what is good for nature is also good for artificial systems. Figure 20.15 shows the GENETIC-ALGORITHM, which starts with a set of one or more individuals and applies selection and reproduction operators to "evolve" an individual that is successful, as measured by a fitness function. There are several choices for what the individuals are. They can be entire agent functions, in which case the fitness function is a performance measure or reward function, and the analogy to natural selection is greatest. They can be component functions of an agent, in which case the fitness function is the critic. Or they can be anything at all that can be framed as an optimization problem.

Since the evolutionary process learns an agent function based on occasional rewards (offspring) as supplied by the selection function, it can be seen as a form of reinforcement learning. Unlike the algorithms described in the previous sections, however, no attempt is made to learn the relationship between the rewards and the actions taken by the agent or the states of the environment. GENETIC-ALGORITHM simply searches directly in the space of individuals, with the goal of finding one that maximizes the fitness function. The search is parallel because each individual in the population can be seen as a separate search. It is hill climbing because we are making small genetic changes to the individuals and using the best resulting offspring. The key question is how to allocate the searching resources: clearly, we should spend most of our time on the most promising individuals, but if we ignore the low-scoring ones, we risk getting stuck on a local maximum. It can be shown that, under certain assumptions, the genetic algorithm allocates resources in an optimal way (see the discussion of n-armed bandits in, e.g., Goldberg (1989)).
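To make the overall structure concrete, here is a minimal Python sketch of the main loop. The function names, population handling, and generation count are illustrative assumptions, not the book's pseudocode; crossover and mutate are sketched later in this section.

import random

def genetic_algorithm(population, fitness_fn, generations=100):
    # Minimal sketch: evolve a population of bit-string individuals.
    # Assumes fitness_fn returns a non-negative number and that not
    # all individuals score zero.
    for _ in range(generations):
        # Selection with replacement, probability proportional to fitness.
        weights = [fitness_fn(ind) for ind in population]
        parents = random.choices(population, weights=weights,
                                 k=len(population))
        # Reproduction: pair up parents, then apply cross-over and mutation.
        next_gen = []
        for i in range(0, len(parents) - 1, 2):
            for child in crossover(parents[i], parents[i + 1]):
                next_gen.append(mutate(child))
        population = next_gen
    return max(population, key=fitness_fn)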

Before we can apply GENETIC-ALGORITHM to a problem, we need to answer the following four questions:

• What is the fitness function? 
• How is an individual represented? 
• How are individuals selected? 
• How do individuals reproduce? 

The fitness function depends on the problem, but in any case it is a function that takes an individual as input and returns a real number as output. In the "classic" genetic algorithm approach, an individual is represented as a string over a finite alphabet. Each element of the string is called a gene. In real DNA, the alphabet is AGTC (adenine, guanine, thymine, cytosine), but in genetic algorithms we usually use the binary alphabet {0, 1}. Some authors reserve the term "genetic algorithm" for cases where the representation is a bit string, and use the term evolutionary programming when the representation is more complicated. Other authors make no distinction, or make a slightly different one.

The selection strategy is usually randomized, with the probability of selection proportional to fitness. That is, if individual X scores twice as high as Y on the fitness function, then X is twice as likely to be selected for reproduction as Y. Usually, selection is done with replacement, so that a very fit individual will get to reproduce several times, as in the sketch below.
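As a sketch, fitness-proportionate selection with replacement takes only a few lines of Python (the assumption that fitness values are non-negative is mine, not the book's):

import random

def select(population, fitness_fn, k):
    # Each individual is chosen with probability proportional to its
    # fitness; sampling is with replacement, so a very fit individual
    # may be picked several times.
    weights = [fitness_fn(ind) for ind in population]
    return random.choices(population, weights=weights, k=k)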

Reproduction is accomplished by cross-over and mutation. First, all the individuals that have been selected for reproduction are randomly paired. Then, for each pair, a cross-over point is randomly chosen. Think of the genes of each parent as being numbered from 1 to N. The cross-over point is a number in that range; let us say it is 10. That means that one offspring will get genes 1 through 10 from the first parent and the rest from the second parent, while the second offspring will get genes 1 through 10 from the second parent and the rest from the first. In addition, each gene can be altered by random mutation to a different value, with a small independent probability. Figure 20.16 diagrams the process, and a sketch of both operators follows.
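In Python, assuming individuals are strings over the binary alphabet, the two operators might look like this (the mutation probability of 0.001 is an illustrative choice, not from the text):

import random

def crossover(x, y):
    # Pick a cross-over point c; each offspring takes genes 1..c from
    # one parent and the remaining genes from the other.
    c = random.randint(1, len(x) - 1)
    return x[:c] + y[c:], y[:c] + x[c:]

def mutate(genes, p=0.001):
    # Flip each binary gene independently with small probability p.
    return ''.join(str(1 - int(g)) if random.random() < p else g
                   for g in genes)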

For example, suppose we are trying to learn a decision list representation for the restaurant waiting problem (see page 556). The fitness function in this case is simply the number of examples that an individual is consistent with. The representation is the tricky part. There are ten attributes in the problem, but not all of them are binary. It turns out that we need 5 bits to represent each distinct attribute/value pair:

00000 : Alternate(x)
00001 : ¬Alternate(x)
  ...
10111 : WaitEstimate(x, 0-10)
11000 : WaitEstimate(x, 10-30)
11001 : WaitEstimate(x, 30-60)
11010 : WaitEstimate(x, >60)

We also need one bit for each test to say whether the outcome is Yes or No. Thus, if we want to represent a k-DL with up to t tests, we need a representation with t(5k + 1) bits. We can use the standard selection and reproduction approaches. Mutation can flip an outcome or change an attribute. Cross-over combines the head of one decision list with the tail of another.
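A small sketch of this encoding, using a hypothetical partial code table (only the entries quoted above), shows how the bit count works out and how one test would be decoded:

# Partial, illustrative code table; the full problem has more entries.
CODES = {
    '00000': 'Alternate(x)',
    '00001': '~Alternate(x)',
    '10111': 'WaitEstimate(x, 0-10)',
    '11000': 'WaitEstimate(x, 10-30)',
    '11001': 'WaitEstimate(x, 30-60)',
    '11010': 'WaitEstimate(x, >60)',
}

def dl_bits(t, k):
    # A k-DL of up to t tests: each test holds up to k attribute/value
    # pairs (5 bits each) plus 1 outcome bit, giving t(5k + 1) bits.
    return t * (5 * k + 1)

def decode_test(bits, k):
    # Split one test into its k 5-bit pair codes and the outcome bit.
    pairs = [CODES.get(bits[i:i + 5], '?') for i in range(0, 5 * k, 5)]
    outcome = 'Yes' if bits[5 * k] == '1' else 'No'
    return pairs, outcome

For instance, dl_bits(4, 2) is 44: four tests of two attribute/value pairs each, plus one outcome bit per test.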
