Genetic Algorithms (GAs)

Genetic Algorithms (GAs) were first investigated and developed by John Holland at the University of Michigan, starting in the 1960s. Roughly speaking, they organize a search process for the optimum of some function. In practice this mostly amounts to the goal of improvement, making things at least better than before (cf. [110]:7).

The main alternative methods to which GAs are usually compared are methods based on calculus, enumeration, or random search/walk.

While calculus-based methods suffer from their local scope and from the necessity of having derivatives at hand, enumeration-based methods break down when the data become too large or have too many dimensions. 'Pure' random search or walk is likewise qualified as 'inefficient' in the long run (cf. Goldberg [110]:3ff).
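To make the last point concrete, here is a minimal sketch of 'pure' random search in Python; the OneMax toy problem and all names below are illustrative assumptions, not taken from the cited sources. The search samples bitstrings blindly and, even after many trials, typically stays well below the optimum.

    import random

    def onemax(bits):
        # Toy objective to maximize: the number of 1-bits.
        return sum(bits)

    def random_search(n_bits=50, n_trials=10000, seed=0):
        # 'Pure' random search: sample independent candidates, keep the best.
        rng = random.Random(seed)
        best_fit = -1
        for _ in range(n_trials):
            cand = [rng.randint(0, 1) for _ in range(n_bits)]
            best_fit = max(best_fit, onemax(cand))
        return best_fit

    print(random_search())  # typically in the high 30s, well below the optimum of 50

Because each sample is drawn independently, nothing learned from earlier candidates guides later ones; this blindness is what makes the method 'inefficient' in the long run.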

But under which conditions can a GA be 'better' than another method? Rawlins [250]:8 states it as follows: if an algorithm is to be more effective than random search, then 'prior knowledge' must be included in the choice of encoding ... so too do GAs require some sort of knowledge to be built into the objective function. Similar is the final statement of Langdon and Poli at the end of their book [187]:220, where they sum up that the search space of genetic programming (GP) is in principle infinite and can therefore only be searched fruitfully with a bias. This is directly connected with the manner in which feedback is provided to the system (see below).
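One simple way to picture 'knowledge built into the objective function' is a constrained problem where the objective itself penalizes infeasible candidates. The following fragment is a hypothetical illustration; the 0/1-knapsack instance and the penalty weight are invented for this example, not taken from the cited sources.

    # Illustrative 0/1-knapsack data (invented for this example).
    WEIGHTS = [4, 7, 2, 9, 5]
    VALUES = [5, 8, 3, 10, 6]
    CAPACITY = 15

    def fitness(bits):
        # The encoding (one bit per item) and the penalty term both embed
        # prior knowledge: feasibility is made part of the objective itself.
        weight = sum(w for w, b in zip(WEIGHTS, bits) if b)
        value = sum(v for v, b in zip(VALUES, bits) if b)
        if weight > CAPACITY:
            return value - 10 * (weight - CAPACITY)  # bias toward feasible region
        return value

Without such a bias, the search would treat feasible and infeasible bitstrings alike; with it, the feedback given to the system already 'knows' which direction is promising.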

Thus, to understand the special properties that make a GA comparatively better than other methods, one has to dig into the machinery of a GA and apply some mathematical treatment to it.
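Before turning to the mathematics, the basic machinery can be stated in a few lines of code. The following is a minimal sketch of a canonical GA (tournament selection, one-point crossover, bitwise mutation) on the OneMax toy problem introduced above; all parameter values are illustrative assumptions, not prescriptions from the text.

    import random

    rng = random.Random(42)

    def onemax(bits):
        return sum(bits)  # toy objective to maximize

    def tournament(pop, fits, k=3):
        # Selection: pick k individuals at random, return the fittest.
        return pop[max(rng.sample(range(len(pop)), k), key=lambda i: fits[i])]

    def crossover(a, b):
        # One-point crossover: children recombine prefixes/suffixes of parents.
        cut = rng.randrange(1, len(a))
        return a[:cut] + b[cut:], b[:cut] + a[cut:]

    def mutate(bits, rate=0.01):
        # Mutation: flip each bit independently with small probability.
        return [1 - b if rng.random() < rate else b for b in bits]

    def ga(n_bits=50, pop_size=40, generations=60):
        pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for _ in range(generations):
            fits = [onemax(ind) for ind in pop]
            nxt = []
            while len(nxt) < pop_size:
                c1, c2 = crossover(tournament(pop, fits), tournament(pop, fits))
                nxt += [mutate(c1), mutate(c2)]
            pop = nxt
        return max(onemax(ind) for ind in pop)

    print(ga())  # usually at or near the optimum of 50

Unlike the random-search sketch above, fitness feedback from one generation biases the sampling of the next; it is exactly this bias that the mathematical treatment aims to make precise.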

Figure 3.1: Philosophy of Genetic Algorithms (image: evolution2.eps)

As Figure 3.1 shows, we have to distinguish at least three dimensions when talking about GAs.



Gerd Doeben-Henisch 2012-03-31