To understand Genetic Algorithms we need a little bit of background. One important part of the field of Artificial Intelligence is the study of methods for solving certain types of very difficult problems. It is not that these problems are difficult to describe, or even that we don't know what good solutions look like, but rather that there are so many possible solutions that it is difficult to find the best one (or even a pretty good one).

This doesn't at first seem like it's related to intelligence at all. The connection is that while traditional methods rely on complex math, AI techniques for problem solving rely on partially random but directed searches through the solution space. A solution space is the "space", or set, of possible solutions. In the case of a function of two variables, for example, it really would look like a space, as it could be visualized as a landscape with hills where there are good solutions and valleys of poor solutions (or the reverse, depending on whether we are looking for the smallest or largest value of the function). The way the search is directed is what makes it seem intelligent in a way- it should be designed to spend more time looking at solutions similar to good ones it has already seen, while still searching as much of the solution space as possible.

These methods often try to mimic a natural phenomenon that appears intelligent. This can be the swarming behavior of insects, the neurons of the brain, or, in our case, the force of natural selection. The important thing isn't actually to study intelligence (obviously selection is not really intelligent) but rather to study selection as a guiding force that produces complex structures adapted to their environment. We will start out much simpler than this at first, though.

The simplest method of this type would be to construct a random solution and see if it is any good. We could generate many random solutions and just remember which was the best. For example, imagine we wanted to find the input value for some function, f(x), for which it returns the largest result, and we are interested in values only between 0 and 10. We may know what the function f looks like, but we might not- it might be a "black box" that we put values into and get values out of. If we generated 100 random numbers between 0 and 10, perhaps one would be very good, perhaps not. Maybe one time we try we get a very good result, and the next time a much worse one. There are no guarantees and no consistency, but on the other hand this is a very simple method.

A slightly better approach is this- generate a random number as before, but then try increasing it by a little bit and see if that gets a better result. Then try decreasing it a little bit, and see if that is better than increasing. If we keep trying to increase or decrease, moving our current point depending on which direction (increasing or decreasing) gives a better result, then we will often get better values than if we just generated numbers randomly. The problem, however, is that we are just climbing to the closest good point- if we would need to go down a little bit in value to find a place where the result is better than the current one, then this method will not be able to solve the problem. This method is called hill climbing, as it is like climbing the nearest hill in the solution space (the closest maximum value).
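A minimal sketch of hill climbing, continuing in Python. The two-humped function f here is a hypothetical example chosen to show the trap: a climb that starts near the small hill stops at its top and never reaches the taller one:

```python
import math
import random

def hill_climb(f, lower, upper, step=0.01, max_steps=10_000):
    """From a random start, keep stepping in whichever direction
    (up or down) improves f, and stop when neither one helps."""
    x = random.uniform(lower, upper)
    for _ in range(max_steps):
        up = min(x + step, upper)
        down = max(x - step, lower)
        best = max((up, down), key=f)
        if f(best) <= f(x):
            break  # neither neighbor is better: we are on a local peak
        x = best
    return x, f(x)

# Two hills: a small one near x = 2 and a taller one near x = 8.
def f(x):
    return math.exp(-(x - 2) ** 2) + 2 * math.exp(-(x - 8) ** 2)

x, value = hill_climb(f, 0, 10)
# x ends up near 2 or near 8, depending only on where the climb started
```

Whether the climb finds the better hill at x = 8 depends entirely on the random starting point- exactly the weakness described above.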

There are many problems that have the right characteristics for this type of search- the solutions can be written down in some nice form (as a number, a list of numbers, a list of symbols, etc), we can measure how well a solution solves the problem, and there are many possible solutions (too many to try them all). Often solutions that are similar in their representation (in terms of numbers, solutions that are close to each other on the number line) perform similarly as solutions to the problem. Unfortunately this can mean that there are many groups of good solutions, and it is hard to find the best solutions among them. In general it is difficult to know anything at all about the solution space. What we want is a method for searching any type of solution space that makes no assumptions about how the solutions are spaced, or anything else about the problem.

In the next post we will look at encoding solutions as lists and introduce the biological metaphor that guides the field of Genetic Algorithms.