Structure of Reactive Agent: A1_reactive_animat.sci

//**************************************************************
// File: A1_reactive_animat.sci
// Authors: Gerd Doeben-Henisch
// Version Start: May-18, 2010
//---------------------------------
// Last Change Nov-26, 2012, 19:30h
//**********************************************************
// Diary:
//******************************************************************
// CONTENT: Necessary code for an ANIMAT1-agent, the successor of the
// ANIMAT0-agent.
// The set of classifiers is 'static', but with the aid of the
// fitness function one can 'configure' the individual classifiers
// so that they become associated with certain 'solution paths'
// required by a task.
//***********************************
// BEHAVIOR FUNCTION WITH CLASSIFIERS
//
// A CLASSIFIER has the structure
// [PERC, DUMMY, ACT, REW]
//
// PERC := External Sensory Pattern, 8 x 2 bits for the 8 surrounding cells and their properties
// DUMMY := Not used any more
// ACT := Action
// REW := Collected Reward (measured in gained energy)
//
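
//--------------------------------------------------------------
// A minimal sketch (not part of the original listing) of how such
// a 16-character condition string could be matched against a percept:
// '#' acts as a wildcard, every other position must agree literally.
// The function name matchCondition is hypothetical.
function ok = matchCondition(cond, perc)
    ok = %t
    for i = 1:length(cond)
        c = part(cond, i)
        if c <> '#' & c <> part(perc, i) then
            ok = %f               // mismatch at a non-wildcard position
            return
        end
    end
endfunction
// Example: matchCondition('11##############', '1100000000000000') is %t
//--------------------------------------------------------------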

//CLASSIF0 with a Non-Move !

CLASSIF0 = [
'11##############' '0' '1' '000';
'##11############' '0' '2' '000';
'####11##########' '0' '3' '000';
'######11########' '0' '4' '000';
'########11######' '0' '5' '000';
'##########11####' '0' '6' '000';
'############11##' '0' '7' '000';
'##############11' '0' '8' '000';
'00##############' '0' '1' '000';
'##00############' '0' '2' '000';
'####00##########' '0' '3' '000';
'######00########' '0' '4' '000';
'########00######' '0' '5' '000';
'##########00####' '0' '6' '000';
'############00##' '0' '7' '000';
'##############00' '0' '8' '000';
]

//CLASSIF without a Non-Move !

CLASSIF = [
'11##############' '0' '1' '000';
'##11############' '0' '2' '000';
'####11##########' '0' '3' '000';
'######11########' '0' '4' '000';
'########11######' '0' '5' '000';
'##########11####' '0' '6' '000';
'############11##' '0' '7' '000';
'##############11' '0' '8' '000';
'00##############' '0' '1' '000';
'##00############' '0' '2' '000';
'####00##########' '0' '3' '000';
'######00########' '0' '4' '000';
'########00######' '0' '5' '000';
'##########00####' '0' '6' '000';
'############00##' '0' '7' '000';
'##############00' '0' '8' '000';
]
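
//--------------------------------------------------------------
// A minimal sketch (hypothetical helper, not in the original file)
// of how the MATCHSET could be filled: collect the row indices of
// all classifiers whose condition matches the current percept,
// reusing matchCondition() from the sketch above.
function M = buildMatchSet(C, perc)
    M = []
    for i = 1:size(C, 1)             // size(C,1) = number of classifiers
        if matchCondition(C(i, 1), perc) then
            M = [M, i]               // remember the matching row index
        end
    end
endfunction
//--------------------------------------------------------------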
//Maximal reward is limited to 1.

MAXREWARD=1
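
//--------------------------------------------------------------
// A minimal sketch (hypothetical helper) of a reward update that
// respects MAXREWARD: the REW field is stored as a string, so it
// is converted with evstr(), incremented by the gained energy,
// clipped at the maximal reward, and written back with string().
function C = updateReward(C, i, gain, maxrew)
    r = evstr(C(i, 4))               // current reward as a number
    r = min(r + gain, maxrew)        // limit to the maximal reward
    C(i, 4) = string(r)              // store back in string form
endfunction
// Example: CLASSIF = updateReward(CLASSIF, 1, 1, MAXREWARD)
//--------------------------------------------------------------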


//*************************************************
// ANIMAT STRUCTURE
// The ANIMAT has a POSITION = (Yanimat, Xanimat)
// and an ENERGY state with ENERGY TOTAL and ENERGY ACTUAL.
// The set of CLASSIFIERS is the PREWIRED KNOWLEDGE
// connecting perceived PROPERTIES and VITAL parameters
// with a proposed ACTION and accumulated REWARDS.
// The ENERGY level is controlled
// by TIME and by ACTIVITY:
// FOOD adds the amount given in FOODIDX,
// NONFOOD adds a certain negative amount.
// The threshold VTHRESHOLD controls the VITAL state:
// if ABOVE the threshold then VITAL=1, otherwise VITAL=0.
// ACTDEPTH controls how many PRECEDING actions are remembered
// for accumulating REW to actions.

FOODIDX = 300 //(10)
NONFOODIDX = -1 // (11)

Xanimat = 3 //(1)
Yanimat = 5 //(2)
ENERGYTotal = FOODIDX //(3)
ENERGYActual = 0 //(4)
ENERGYInput = FOODIDX //(5) Start value, could be different; part of the VITAL dimension of the agent
VITAL = 1 //(6) If '1' then not hungry, if '0' then hungry, depending on the threshold
OLDACTIONS=[] // (7) List of old actions, containing IDs of classifiers and fitness values
CLASSIF=CLASSIF0 // (8)
ACTDEPTH = 2 //(9) How many preceding actions can be memorized. ACTDEPTH should be at least as long as
//the shortest successful path to a goal in a task
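
//--------------------------------------------------------------
// A minimal sketch (hypothetical helper, not in the original file)
// of the OLDACTIONS memory: the ID of the classifier just used is
// appended, and the list is truncated so that at most ACTDEPTH
// preceding actions are remembered. (The original list also carries
// fitness values; they are omitted here for brevity.)
function OA = rememberAction(OA, classifierId, actdepth)
    OA = [OA, classifierId]          // append the newest action ID
    n = length(OA)
    if n > actdepth then
        OA = OA(n-actdepth+1:n)      // keep only the last ACTDEPTH entries
    end
endfunction
// Example: OLDACTIONS = rememberAction(OLDACTIONS, 3, ACTDEPTH)
//--------------------------------------------------------------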


VTHRESHOLD = FOODIDX/2 // (12)
MATCHSET = [] //(13) Set of classifiers matching the current percept
ACTIONSET = [] //(14) That action which will be executed
FITNESSFLAG = 0 //(15) If a fitness value >0 occurred, this will be set to '1'; after checking it is set back to '0'

ANIMAT = list(Xanimat, Yanimat, ENERGYTotal, ENERGYActual, ENERGYInput, VITAL, OLDACTIONS, CLASSIF, ACTDEPTH, FOODIDX, NONFOODIDX, VTHRESHOLD, MATCHSET, ACTIONSET, FITNESSFLAG)
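
//--------------------------------------------------------------
// A minimal sketch (hypothetical function, not part of the original
// file) of the energy bookkeeping described in the comments above:
// each cycle costs energy (NONFOODIDX is negative), finding food
// adds FOODIDX, and VITAL is derived from VTHRESHOLD.
function [e, v] = updateEnergy(e, foodFound, foodidx, nonfoodidx, vthreshold)
    e = e + nonfoodidx               // time/activity cost per cycle
    if foodFound then
        e = e + foodidx              // energy gained from eating food
    end
    if e > vthreshold then
        v = 1                        // above threshold: not hungry
    else
        v = 0                        // below threshold: hungry
    end
endfunction
// Example: [ENERGYActual, VITAL] = updateEnergy(ENERGYActual, %f, FOODIDX, NONFOODIDX, VTHRESHOLD)
//--------------------------------------------------------------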


