The general outline of a learning system is as follows:
(9.1)-(9.7) [equations not recoverable from the source]
As we have already seen, there are several ways to define interesting subsets of learning systems. Here we will mention only three.
Depending on the operations allowed within level-learning systems, one can introduce many more distinctions.
The weakness of the classification introduced above is that its definitions depend on the architecture, i.e. on a certain 'structure' of the learning systems (having rules or neurons labeled with fitness values).
A more general classification would be based solely on characteristic task sets, like those used in psychological intelligence tests.
The judgment that a system 'can learn' is usually made from the 'outside' of the system, based on its observable behavior. The behavior is based on perceivable stimuli from the environment and the possible responses of the system into the environment. Relying on behavior makes the term 'learning' invariant with respect to the kind of observed system: plants, animals, robots, humans, or something else.
Such an approach to learning is rooted in the psychology of the beginning of the 20th century, connected to names like J. B. Watson (1878-1958) (cf. [408], [410]), Edwin R. Guthrie (1886-1959) (cf. [138], [140]), Clark L. Hull (1884-1952) (cf. [169], [170]), Edward C. Tolman (1886-1961) (cf. [380], [381]), and B. F. Skinner (1904-1990) (cf. [340], [345]). For a general overview see Bower and Hilgard (1981, German: 1983/4)[31].
If one has to rely on the observable behavior, then the behavior can be understood as a sequence of stimulus-response pairs bound to certain points of time, e.g. as triples (s, r, t) with stimulus s, response r, and time t. This observable set defines a finite, incremental (and partial) empirical behavior function from stimuli to responses.
The transition from a fixed or static behavior function to a learning behavior function happens at that moment when we can identify a time point after which we can observe at least one stimulus s which is now connected to a new response r' instead of the former response r. To classify a new stimulus-response pair as learned, one has to give a practical criterion which classifies this pair as 'beyond pure chance'. Thus the empirical behavior function can change and grow, and thereby extend the space of possible behavior. Some of these pairs will be used more often than others.
To keep the property of uniqueness for the function, we have to claim that the emergence of a new pair (s, r') after the pair (s, r) will 'eliminate' the future occurrence of (s, r), with r distinct from r'.
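The incremental and eliminating character of such an empirical behavior function can be sketched in code. This is a minimal illustration only; the class name `EmpiricalBehavior`, its methods, and the dictionary-based storage are assumptions of this sketch, not part of the original text.

```python
# Sketch: an empirical behavior function that grows incrementally and
# 'eliminates' a superseded stimulus-response pair when a new response
# appears for an already known stimulus. All names are illustrative.

class EmpiricalBehavior:
    def __init__(self):
        # Maps each stimulus to its currently valid response; uniqueness
        # (functionality) is preserved by overwriting the old pair.
        self.pairs = {}

    def observe(self, stimulus, response):
        """Record a stimulus-response pair observed at some point in time.

        Returns True when the stimulus was already paired with a different
        response, i.e. when the new pair eliminates the old one."""
        changed = stimulus in self.pairs and self.pairs[stimulus] != response
        self.pairs[stimulus] = response
        return changed

    def respond(self, stimulus):
        # Partial function: stimuli never observed yield no response.
        return self.pairs.get(stimulus)


beh = EmpiricalBehavior()
beh.observe("light", "approach")          # first pair, nothing replaced
changed = beh.observe("light", "avoid")   # new pair eliminates the old one
print(changed, beh.respond("light"))
```

A practical criterion for 'learned' (beyond pure chance) would additionally require the new pair to recur with sufficient frequency; that statistics is omitted here.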
The relationship between the empirical (incremental and eliminating) learning function and a possible theoretical learning function can be summarized as follows:
(9.8) | β_emp ⊆ φ, with the empirical behavior function β_emp and the theoretical behavior function φ in either the fixed format φ: S ⟶ R or the state-dependent format φ: IS × S ⟶ IS × R (IS = set of internal states)
If we did not allow the theoretical behavior function to modify its own internal states IS, that is, if it had the fixed format φ: S ⟶ R, then such a fixed theoretical behavior function could not support a flexible empirical behavior function. Only the alternative format φ: IS × S ⟶ IS × R can change sufficiently to allow the emergence of new stimulus-response pairs.
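The difference between the two formats can be made concrete with a small sketch. The function names and the toy 'learning' step are illustrative assumptions; the original text fixes no concrete mechanism.

```python
# Sketch: a fixed behavior function versus one that carries and modifies
# its own internal states. All names here are illustrative.

def phi_fixed(stimulus):
    """Format S -> R: the mapping is frozen and can never change."""
    table = {"light": "approach"}
    return table.get(stimulus)

def phi_learning(states, stimulus):
    """Format IS x S -> IS x R: the returned states may differ from the
    given ones, so the observable behavior can change over time."""
    response = states.get(stimulus, "explore")   # default behavior
    # Toy 'learning': after exploring, couple the stimulus to a response.
    new_states = dict(states)
    if stimulus not in new_states:
        new_states[stimulus] = "approach"
    return new_states, response


is0 = {}
is1, r1 = phi_learning(is0, "light")   # first encounter: default response
is2, r2 = phi_learning(is1, "light")   # changed states, changed response
print(r1, r2)
```

The fixed function answers the same stimulus identically forever; the state-dependent one returns "explore" first and "approach" afterwards, which is exactly the emergence of a new stimulus-response pair.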
The general format of the system function does not tell us much about the inner working of the function. In the following we will look at least at two kinds of implementations of learning system functions: rule-based systems called learning classifier systems, and artificial neural networks.
Historically it was John H. Holland who published this kind of learning mechanism in the years 1976, 1980 and 1986 (cf. Holland (1975/1992)[161]:pp.171ff). A first textbook treatment of learning classifier systems was Goldberg (1989)[122]:pp.21f. A very inspiring paper about learning classifier systems is Wilson (1994)[422]. Another interesting explication can be found in Holland (1995)[162]:pp.41ff. Meanwhile there are many hundreds of papers and books about this topic; thus this text can only be a first pointer into the subject. For the following discussion we will concentrate on the paper of Wilson (1994), because it is a very good summary of Holland's key ideas, refined through the many discussions which followed Holland's publications.
The general idea is that the internal states are rules called classifiers. This set of rules can grow and can be changed. Thus the theoretical behavior function will change depending on this set of rules.
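The idea that the internal states are a changeable set of fitness-labeled rules can be sketched roughly as follows. This is a strongly simplified toy, not Wilson's actual system; the class names, the covering step, and the fitness update are all assumptions of this sketch.

```python
import random

# Toy sketch of a classifier-like rule set. Each rule maps a condition
# (a stimulus pattern) to an action and carries a fitness value. This is
# a drastic simplification of Holland's and Wilson's systems.

class Rule:
    def __init__(self, condition, action, fitness=1.0):
        self.condition, self.action, self.fitness = condition, action, fitness

class ToyClassifierSystem:
    def __init__(self, rules):
        self.rules = list(rules)   # internal states = the set of rules

    def act(self, stimulus):
        # Match set: rules whose condition equals the stimulus.
        matches = [r for r in self.rules if r.condition == stimulus]
        if not matches:
            # 'Covering': invent a new rule, so the rule set can grow.
            new_rule = Rule(stimulus, random.choice(["approach", "avoid"]))
            self.rules.append(new_rule)
            return new_rule.action
        # Otherwise follow the matching rule with the highest fitness.
        return max(matches, key=lambda r: r.fitness).action

    def reward(self, stimulus, action, payoff):
        # Strengthen the rules that produced the rewarded action.
        for r in self.rules:
            if r.condition == stimulus and r.action == action:
                r.fitness += payoff
```

Because both the membership of the rule set and the fitness values can change, the behavior function realized by this system changes over time, which is the defining property discussed above.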
In the case of neurons, which we will discuss later, the theoretical behavior function is based on sets of neurons which can change, as well as on their possible connections and weights.
Gerd Doeben-Henisch 2013-01-14