Survival: Finding a Path For Success

In a discussion on Oct 27, 2010, Vassili Hiller and Marcus Pfaff introduced the idea of solving the feedback problem by treating a sequence of actions as a compound unit that leads from a 'start' to a 'goal'. In that case all actions between start and goal have to be 'weighted'. Translating this idea into the context of our model world world1 with some ANIMAT$ ^{n}$ (with $ n > 1$) (cf. figure 4.8), we first have to define the terms start and goal.

Figure 4.8: Case Study World1
\includegraphics[width=4.5in]{case_study_world1_4.5in.eps}

For this we have to assume that an agent system S represents at every point of time a state $ q_{i} \in Q$. Thus one can write the process of an agent system S as a sequence of states $ \pi = \langle q_{0}, q_{1}, \ldots\rangle$. A start is then every state $ q$ with a parameter $ \delta \in Real$, $ \delta > \theta$, $ \delta \in q$, whose 'predecessor' in $ \pi$ has a parameter $ \delta^{*} < \theta$. Similarly, a goal is a state $ q$ with a parameter $ \delta \in Real$, $ \delta > \theta$, $ \delta \in q$, whose 'successor' in $ \pi$ has a parameter $ \delta^{*} < \delta$. Thus while a rise of the drive triggers the 'start', a lowering of the drive marks only a minimal success. A start-state, followed by a finite sequence of states that are not goal-states, followed by a goal-state, is called a successful behavior. Every successful behavior is 'part' of the overall agent process $ \pi$. Every partial process with a start-state but no goal-state is an unsuccessful behavior.
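These definitions can be made concrete with a small sketch. It scans a drive trajectory for start-states (drive crosses above the threshold) and, from each start, looks for the first goal-state (a state above the threshold whose successor shows a lower drive). The function name, the list representation of $ \pi$ by drive values only, and the threshold value are illustrative assumptions, not part of the model itself.

```python
def segment_behaviors(deltas, theta):
    """Split a drive trajectory pi = <q0, q1, ...> (represented here only
    by each state's drive value delta) into successful and unsuccessful
    behaviors.

    start: state with delta > theta whose predecessor has delta* < theta
    goal:  state with delta > theta whose successor has delta* < delta
    """
    successful, unsuccessful = [], []
    i = 1
    while i < len(deltas):
        if deltas[i] > theta and deltas[i - 1] < theta:  # start-state found
            goal = None
            for j in range(i, len(deltas) - 1):
                # goal-state: drive still above theta, successor is lower
                if deltas[j] > theta and deltas[j + 1] < deltas[j]:
                    goal = j
                    break
            if goal is not None:
                successful.append((i, goal))
                i = goal + 1
            else:
                # start-state but no goal-state: unsuccessful behavior
                unsuccessful.append((i, len(deltas) - 1))
                break
        else:
            i += 1
    return successful, unsuccessful

# Example: drive rises above theta=0.5 at index 1, peaks at index 3,
# then falls, so indices 1..3 form one successful behavior.
print(segment_behaviors([0.2, 0.7, 0.8, 0.9, 0.6, 0.3], 0.5))
```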

In the agent system ANIMAT$ ^{2}$ we assume the following dependency between the drive $ \delta_{1}$ and the energy $ E$ of the whole system:


$\displaystyle IF\; ENERGY\; E < \theta_{e}\; THEN\; DRIVE\; \delta > \theta_{d}$ (4.109)
$\displaystyle ELSE\; DRIVE\; \delta \leq \theta_{d}$ (4.110)
$\displaystyle IF\; FOOD\; F > \theta_{f}\; THEN\; E = E + INCR$ (4.111)
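The dependency (4.109)-(4.111) can be sketched as one update step. The concrete threshold values, the increment, and the particular drive values chosen on each branch are assumptions for illustration only; the model merely constrains the drive relative to $ \theta_{d}$.

```python
# Assumed constants; the model fixes only the inequalities, not the values.
THETA_E = 10.0   # energy threshold theta_e
THETA_D = 0.5    # drive threshold theta_d
THETA_F = 1.0    # food threshold theta_f
INCR = 5.0       # energy increment per feeding

def update(energy, food):
    """One step of the assumed energy/drive dependency (4.109)-(4.111)."""
    # (4.109)/(4.110): low energy pushes the drive above theta_d,
    # otherwise the drive stays at or below theta_d.
    drive = THETA_D + 1.0 if energy < THETA_E else THETA_D
    # (4.111): sufficient food in the current state raises the energy.
    if food > THETA_F:
        energy = energy + INCR
    return energy, drive

# Example: energy below theta_e raises the drive; feeding restores energy.
print(update(5.0, 2.0))
```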

Gerd Doeben-Henisch 2012-03-31