The empirical results so far suggest that pure chance is working much better than genetic operators without fitness feedback.
As a next step we will compare a GA with fitness feedback against pure chance. The problem with this comparison is that defining a fitness function presupposes the existence of a 'real' set of 'good values'. How to handle this problem on a theoretical meta level is an interesting question for later; here we need concrete examples.
We will start with the introductory example from above, where a simple fitness function had been used. This function defines a subset in the set of integers. Every time the evolving system produces an output, the fitness function responds with a fitness value for that output. The existence of this fitness function represents a certain kind of special knowledge: without this knowledge it would not be possible to define the fitness function at all.
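For illustration, the following Python sketch shows what such a fitness function could look like. The set GOOD_VALUES and the name fitness are hypothetical stand-ins for the concrete values of the introductory example, not the original definitions; a minimal sketch, assuming a simple membership test on integers.

    # Hypothetical set of 'good values'; fixing this set requires exactly the
    # special knowledge mentioned in the text.
    GOOD_VALUES = {3, 7, 11, 19}

    def fitness(x: int) -> int:
        """Respond with a fitness value for an output x of the evolving system:
        1 if x belongs to the set of good values, 0 otherwise."""
        return 1 if x in GOOD_VALUES else 0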
The interesting question now is whether the mechanisms of a GA, supported by a certain fitness function, can improve the search in such a way that it works at least with the same 'efficiency' and 'correctness' as the pure chance method. Here we make the following general assumption: a fitness function always belongs to an environment, and every acting system, with or without a GA, is interacting with the structure of this environment. Thus even a system without genetic operators has to deal with a fitness function, because the fitness function represents the behavior of the environment.
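As a rough sketch of this setup (all names, parameters, and the particular selection and mutation scheme are assumptions for illustration, not the concrete experiment prepared below), both a pure-chance searcher and a GA-like searcher can be run against the same environment, i.e. the same fitness function:

    import random

    def pure_chance_search(fitness, low, high, trials):
        """Draw integers at random; the environment answers each draw with a fitness value."""
        best = None
        for _ in range(trials):
            x = random.randint(low, high)
            if best is None or fitness(x) > fitness(best):
                best = x
        return best

    def ga_search(fitness, low, high, pop_size, generations):
        """Minimal GA sketch: keep the fitter half of the population, mutate it."""
        population = [random.randint(low, high) for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: keep the half of the population with the higher fitness.
            population.sort(key=fitness, reverse=True)
            parents = population[: pop_size // 2]
            # Mutation: offspring are parents shifted by a small random amount.
            offspring = [p + random.randint(-2, 2) for p in parents]
            population = parents + offspring
        return max(population, key=fitness)

Both searchers interact with the same environment through the same fitness function; comparing how often each ends up with a member of the set of 'good values' for a given number of evaluations gives a first impression of the 'efficiency' question raised above.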
We will prepare the following experiment: