Specifying Behavior

In this chapter we describe the elaboration of a behavior model from a psychological point of view.

Figure 3.1: User - Task - Interface Paradigm
\includegraphics[width=4.0in]{UserTaskInterface.eps}

As shown in figure 3.1, we assume as the minimal configuration at least one user (or group of users) who plans to solve at least one task. A task consists of at least one start state, possibly some intermediate states, and at least one goal state. A realistic task additionally contains at least one minimal solution path. Thus a user who wants to solve the task optimally has to realize the minimal solution path; everything else is more 'expensive'. The usage of the new intended system interface (SI) should improve the solution of the task. On the side of the user we have to assume a minimal knowledge about the task, its essential properties, what has to be done, etc. Because the task as such is not a real object in the world, the user can only realize a solution of the task if the problem, the task, and its possible solution are sufficiently represented as knowledge within the user.
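To make these notions concrete, the following Python sketch restates the elements of figure 3.1 as a toy data structure: a task with a start state, a set of goal states, and transitions between states, together with a breadth-first search for one minimal solution path. The class and function names are illustrative assumptions, not part of the paradigm itself.

\begin{verbatim}
# Illustrative sketch only: a toy formalization of the user-task paradigm.
# The names Task and minimal_solution_path are hypothetical.
from collections import deque

class Task:
    def __init__(self, start, goals, transitions):
        self.start = start              # at least one start state
        self.goals = set(goals)         # at least one goal state
        self.transitions = transitions  # dict: state -> set of next states

    def minimal_solution_path(self):
        """Breadth-first search: one shortest path from start to a goal."""
        queue = deque([[self.start]])
        visited = {self.start}
        while queue:
            path = queue.popleft()
            state = path[-1]
            if state in self.goals:
                return path
            for nxt in self.transitions.get(state, ()):
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append(path + [nxt])
        return None  # no solution path: not a realistic task

# A realistic task must contain at least one such path:
task = Task(start="S", goals={"G"},
            transitions={"S": {"A", "B"}, "A": {"G"}, "B": {"A"}})
print(task.minimal_solution_path())   # ['S', 'A', 'G']
\end{verbatim}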

Figure 3.2: Learning Scenario with Maze, Learning System and Learning Curve
\includegraphics[width=4.0in]{LearningParadigm.eps}

Figure 3.2 shows in a nutshell the basic ingredients of task learning with knowledge. The example is taken from the EoIESS lecture from BaSys (see: http://www.uffmm.org/EoIESS-TH/gclt/index.html). It shows a simple maze (following Tolman's paper from 1948 [150]) in which a rat (shown as '*') can find food (shown as 'F'). To solve this task the rat, as a learning system, has to build up an internal representation of the problem, called knowledge (see figure 3.3). When the rat shows an improvement in its learning curve, with fewer and fewer errors over time and less and less effort to find the food, it is assumed that the rat has organized some sufficient knowledge internally.
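As a minimal sketch of such a learning scenario, the following Python fragment encodes an assumed grid maze (the layout is not the original one from the lecture) with the learning system '*' and the food 'F' as plain characters, which is all the environment has to provide for the learning task.

\begin{verbatim}
# Assumed toy maze in the spirit of figure 3.2: '*' marks the learning
# system, 'F' the food, '#' the walls. The layout is illustrative only.
MAZE = [
    "#######",
    "#*  # #",
    "# # # #",
    "# #   #",
    "# ### #",
    "#    F#",
    "#######",
]

def positions(maze):
    """Return the start position of the rat and the position of the food."""
    for r, row in enumerate(maze):
        for c, cell in enumerate(row):
            if cell == "*":
                start = (r, c)
            elif cell == "F":
                food = (r, c)
    return start, food

start, food = positions(MAZE)
print("rat at", start, "- food at", food)
\end{verbatim}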

Figure 3.3: A Minimally Learning Classifier System
\includegraphics[width=4.0in]{MinimallyLearningClassifierSystem.eps}

One model of such internal knowledge is shown in figure 3.3 and is here called a Minimally Learning Classifier System (MLCS) (again following the EoIESS lecture from BaSys, see: http://www.uffmm.org/EoIESS-TH/gclt/index.html). It is a variant of the learning classifier system which was introduced by John Holland (cf. [61], [62]) and further propagated by, among others, Wilson (cf. Wilson 1994 [157] and Sigaud/Wilson 2007 [123]). The key idea is that the knowledge is organized in a format where perceptions are combined with the actions that follow them, additionally 'weighted' by assumed fitness values. In an MLCS it is assumed that there is a kind of episodic store keeping the last n actions. Every time an action receives some fitness value $F$, this value is distributed over all the supporting actions in the store and the new fitness values are added to the old ones. This simple mechanism allows the identification of those perception-action rules which can be connected to enable a path in the task graph.
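A minimal sketch of this mechanism is given below, under assumptions made only for illustration: rules are (perception, action) pairs, the episodic store keeps the last n rules fired, and a received fitness value F is distributed in equal shares over the stored rules. The figure and the lecture may use different concrete choices; the class and method names are hypothetical.

\begin{verbatim}
# Minimal sketch of the MLCS mechanism described above (assumed details:
# equal shares, store of the last n firings, greedy rule selection).
import random
from collections import deque

class MLCS:
    def __init__(self, rules, store_size=5):
        # knowledge: perception-action rules weighted by fitness values
        self.fitness = {rule: 0.0 for rule in rules}  # rule = (perception, action)
        self.episode = deque(maxlen=store_size)       # episodic store of last n firings

    def act(self, perception):
        """Choose among matching rules, preferring higher fitness."""
        matching = [r for r in self.fitness if r[0] == perception]
        rule = max(matching, key=lambda r: (self.fitness[r], random.random()))
        self.episode.append(rule)
        return rule[1]

    def reward(self, F):
        """Distribute the fitness value F over all supporting rules in the
        episodic store and add the shares to their old fitness values."""
        if self.episode:
            share = F / len(self.episode)
            for rule in self.episode:
                self.fitness[rule] += share

# Usage: two competing rules for the same perception; the rewarded one
# accumulates fitness and will be preferred in later trials.
mlcs = MLCS(rules=[("at_junction", "left"), ("at_junction", "right")])
action = mlcs.act("at_junction")
if action == "left":          # assume 'left' leads to the food
    mlcs.reward(10.0)
print(mlcs.fitness)
\end{verbatim}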

Generally it holds that rule sets of the IF-THEN format are computationally equivalent to automata of any kind. Thus encoding computational processes in this way covers the maximum of what can be computed at all.
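The following fragment illustrates (but does not prove) one direction of this equivalence: the transition function of a finite automaton can be written directly as IF-THEN rules of the form IF (state, input) THEN next state. The automaton below, accepting binary strings with an even number of 1s, is an assumed toy example.

\begin{verbatim}
# Transition function of a finite automaton written as IF-THEN rules.
# The automaton (even number of 1s) is an assumed example.
RULES = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd",  "0"): "odd",
    ("odd",  "1"): "even",
}

def run(rules, word, start="even", accepting=("even",)):
    state = start
    for symbol in word:
        state = rules[(state, symbol)]   # apply the matching IF-THEN rule
    return state in accepting

print(run(RULES, "1101"))   # False: three 1s
print(run(RULES, "1001"))   # True: two 1s
\end{verbatim}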

Thus it is assumed here as a first working hypothesis (WH1) that the user's knowledge can be modeled with sets of IF-THEN rules.

The second working hypothesis (WH2) assumes that any kind of knowledge has to be learned. Therefore one has to ensure that the intended users have the intended knowledge at their disposal.

The availability and the quality of presupposed knowledge can be tested.
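One possible form of such a test, under WH1, is sketched below: if the user's knowledge is given as IF-THEN rules (here simplified to a mapping from states to successor states), one can check whether following these rules actually reaches a goal state of the task within a bounded number of steps. The rule format, the step bound, and the function name are assumptions made for illustration.

\begin{verbatim}
# Hedged sketch of a behavioral knowledge test under WH1. The rule format
# (state -> next state) and the step bound are illustrative assumptions.
def knowledge_sufficient(rules, start, goals, max_steps=20):
    state, steps = start, 0
    while state not in goals and steps < max_steps:
        if state not in rules:
            return False        # no applicable rule: knowledge is incomplete
        state = rules[state]    # apply the user's IF-THEN rule
        steps += 1
    return state in goals

user_rules = {"S": "A", "A": "G"}                      # assumed user knowledge
print(knowledge_sufficient(user_rules, "S", {"G"}))    # True
\end{verbatim}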


