As Bérard et al. (2001) [34, pp. 14ff., 62ff.] point out, there are many application scenarios where it is too complicated to model a problem as one big automaton. Instead, one models several smaller automata and afterwards combines these into one unit. A typical strategy to connect automata is the so-called synchronized product, which was introduced by Maurice Nivat in 1979 (cf. Arnold and Nivat (1982) [20] and Arnold (1992) [18]).
This idea is straightforward but has the drawback that the Cartesian product of all state sets can blow up the state space beyond practical limits. Another, more conservative approach is to restrict the interaction of the different automata strictly to their inputs and outputs. A good example is the biological brain, where billions of neurons are interconnected, forming one big automaton.
Let us consider a simple example consisting of four neurons {A, B, C, D} of the McCulloch-Pitts type (cf. McCulloch and Pitts (1943) [195] and figure 5.6).
These four neurons are interconnected. Let us look at each such neuron as a finite automaton of the type of a finite transducer (cf. figure 5.7).
A finite transducer has an input tape and an output tape. Every neuron of the McCulloch-Pitts type can be considered as such a finite transducer. All the input neurons together represent the actual input, and the output represents the outgoing axon. A neuron can then be considered as an automaton representing some function f: I → O, where the inputs I as well as the outputs O are considered as words from the sets Σ_in* (words over the input alphabet Σ_in) and Σ_out* (words over the output alphabet Σ_out). We make the assumption that the output is given at point in time t+1 when the input is given to the automaton at point in time t: output(t+1) = f(input(t)). Thus the output at point in time t+1 can be redirected as a new input at time t+1 (cf. figure 5.8).
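The timing convention output(t+1) = f(input(t)) and the redirection of the output back into the input can be sketched in a few lines of Python. The function name and the threshold value are illustrative assumptions, not taken from the text:

```python
def mp_neuron(inputs, threshold=1):
    """Fire (1) at time t+1 iff at least `threshold` of the inputs
    given at time t are active."""
    return 1 if sum(inputs) >= threshold else 0

# Feedback loop: the output at t+1 is redirected as a new input, so a
# single external pulse keeps the neuron firing afterwards.
state = 0
trace = []
for ext in [0, 1, 0, 0]:            # external input: one pulse at t = 1
    state = mp_neuron((ext, state))  # input at t, output at t+1
    trace.append(state)
# trace == [0, 1, 1, 1]
```

The feedback connection turns the single neuron into a latch: once activated by the external pulse, its own redirected output keeps it active.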
In the next step we consider the cooperation of two such automata. The output of automaton A can be redirected, by the environment E, to the input of automaton A itself, but also to the input of automaton B, and vice versa. One only has to assume that the input and the output tapes are arranged in sections, one section per transition, and that every section is subdivided into subsections such that every participating automaton has one subsection at point in time t for input as well as for output (see figure 5.7). Because the input is assumed to be a string, we can assume that the output of different automata can be inserted into one of these input strings, constituting its different parts.
To formalize this we take two automata A and B, each with the structure of a finite transducer as described above.
In the example with the four neurons A-D (cf. the example with the four McCulloch-Pitts neurons in figure 5.10) we have neuron A receiving an input S1 from the environment. Neuron B is receiving an input from the environment as well as an input from another neuron D. Neuron C is receiving inputs from the neurons A and B, and neuron D is receiving inputs from itself as well as from neuron B and some source S2.
Let us consider an automaton for a moment as a function f: I → O. The execution of connected automata can then be formalized as a sequence of configurations, where every configuration consists of two parts: a control part and a connection part.
The control part is the sequence of configuration elements of the form (automaton, actual input). This means that at this point in time every participating automaton has some actual input as argument for its function f. By the definition of the function one can compute the output of the function. New is the connection part: every output is here mapped onto the input of some destination. With k-many participating automata this could amount to k²-many input fields. But in most practical cases the mean number of output connections is much smaller than the number of participating automata. Let us look at the example of the four interconnected neurons A-D.
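One way to make the two parts of a configuration concrete is a small sketch in Python. The names `funcs` and `slots` and the source tags `'aut'`/`'ext'` are assumptions of this sketch: `slots` plays the role of the connection part, mapping each input position of an automaton to the output it is connected to, while computing the outputs corresponds to the control part.

```python
def step(funcs, slots, outputs, external):
    """One cycle: route last cycle's outputs (connection part) to the
    input slots, then compute every new output (control part)."""
    def value(src):
        kind, name = src
        return outputs[name] if kind == 'aut' else external[name]
    inputs = {a: tuple(value(s) for s in slots[a]) for a in funcs}
    return {a: funcs[a](inputs[a]) for a in funcs}

# Tiny example: A = OR of an external source S and B's output;
# B simply copies A's output with one cycle of delay.
funcs = {'A': lambda i: i[0] | i[1], 'B': lambda i: i[0]}
slots = {'A': [('ext', 'S'), ('aut', 'B')], 'B': [('aut', 'A')]}

out = {'A': 0, 'B': 0}
trace = []
for s in [1, 0, 0]:
    out = step(funcs, slots, out, {'S': s})
    trace.append((out['A'], out['B']))
# trace == [(1, 0), (0, 1), (1, 0)]: a single pulse on S keeps
# circulating through the A-B feedback loop
```

Note that the wiring table stays small: each input slot names exactly one source, so the storage grows with the number of connections actually used, not with k².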
The automata functions are defined as follows:
A(in):

   in | out
    1 |  1
    0 |  0

B(in1, in2, in3):

   in1 in2 in3 | out
    1   1   1  |  1
    1   1   0  |  1
    1   0   1  |  1
    0   1   1  |  1
    1   0   0  |  0
    0   1   0  |  0
    0   0   1  |  0
    0   0   0  |  0

C(in1, in2):

   in1 in2 | out
    1   1  |  1
    1   0  |  1
    0   1  |  1
    0   0  |  0

D(in1, in2, in3):

   in1 in2 in3 | out
    1   1   1  |  1
    1   1   0  |  1
    1   0   1  |  1
    0   1   1  |  1
    1   0   0  |  0
    0   1   0  |  0
    0   0   1  |  0
    0   0   0  |  0
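The four tables can be written compactly as Python functions, which also makes their structure visible: A passes its single input through, C is an OR of its two inputs, and B and D both fire when at least two of their three inputs are active. This is a sketch for checking the tables, not part of the original text:

```python
def A(a):
    return a                              # identity: out = in

def B(a, b, c):
    return 1 if a + b + c >= 2 else 0     # majority of three inputs

def C(a, b):
    return 1 if a + b >= 1 else 0         # OR of two inputs

def D(a, b, c):
    return 1 if a + b + c >= 2 else 0     # same table as B

# spot-check against some of the table rows
assert B(1, 1, 0) == 1 and B(1, 0, 0) == 0
assert C(0, 1) == 1 and C(0, 0) == 0
```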
With these functions one can construct a possible execution as follows:
This execution shows some interesting dynamics in the behavior of these four neurons. There is a change in the observable behavior of this small network, which happens after the occurrence of S1 and S2 synchronously being '1':
Before the occurrence of S1 and S2 together:

   S1 | S2 | out
    1 |  1 |  1
    1 |  0 |  1
    0 |  1 |  0
    0 |  0 |  0

After the occurrence of S1 and S2 together:

   S1 | S2 | out
    1 |  1 |  1
    1 |  0 |  1
    0 |  1 |  1
    0 |  0 |  0
This ability of a function to change its behavior during the course of time, triggered by some observable event, is in psychology and biology usually called learning. In the example we can observe the simplest known case of learning, called classical conditioning. From this example it follows that the execution protocol of connected automata can be used to document processes which show the property of learning behavior. With appropriate extensions of CTL specifications and model checking one can then try to explore the observable behavior of automata with regard to learning properties.
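The behavioral change in the protocol can be imitated by a small stateful function. This is a purely illustrative sketch: the class name, the `conditioned` flag, and the update rule are assumptions of this sketch, not taken from the text; it only reproduces the before/after response pattern shown above.

```python
class ConditionedUnit:
    """Responds to S1 unconditionally; after S1 and S2 have occurred
    together once, it also responds to S2 alone."""
    def __init__(self):
        self.conditioned = False

    def respond(self, s1, s2):
        out = s1 | (s2 if self.conditioned else 0)
        if s1 and s2:              # the triggering event: S1 and S2 together
            self.conditioned = True
        return out

u = ConditionedUnit()
before = u.respond(0, 1)   # S2 alone: no response yet -> 0
u.respond(1, 1)            # S1 and S2 together: the conditioning event
after = u.respond(0, 1)    # S2 alone now elicits the response -> 1
```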
So far we have used in the execution protocol the abstraction that we take automata as whole units and compute the output of the whole automaton function. One could raise the question whether this can lead to a conflict between different automata if these have different numbers of states and therefore possibly different execution times, because the execution time for all participating automata is limited to one cycle.
In the general case one can replace the automaton function by a finite list of states, where every state is represented by a finite set of lines, and every line is written as a tuple (state, input, next state, output).
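Such a state-table representation could look as follows in Python; the line format (state, input, next state, output) and all names in this sketch are assumptions:

```python
# Each line of the table reads: (state, input, next state, output).
table = [
    ('q0', 0, 'q0', 0),
    ('q0', 1, 'q1', 1),
    ('q1', 0, 'q1', 1),   # once fired, keep firing (a latch)
    ('q1', 1, 'q1', 1),
]

def run(table, state, word):
    """Run the transducer given by `table` on an input word,
    collecting the output word."""
    out = []
    for sym in word:
        # look up the unique line matching the current (state, input)
        state, o = next((ns, o) for s, i, ns, o in table
                        if s == state and i == sym)
        out.append(o)
    return out

# a single pulse latches the automaton into the firing state
assert run(table, 'q0', [0, 1, 0, 0]) == [0, 1, 1, 1]
```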
But assuming k-many participating automata working in parallel, one has to consider that the following can happen: (i) one automaton reaches a final state before all the other automata reach a final state; or (ii) at least one automaton will never reach a final state and will run infinitely; or (iii) no automaton will reach a final state and all will run infinitely.
In case (i), where an automaton reaches a final state, this will be documented in the execution protocol such that from that point in time where the automaton reaches its final state the changes will stop. All its values will stay fixed. We call this case monotonic finite.
In case (ii), where at least one automaton will never reach a final state, this will be documented in the execution protocol such that the execution will not stop and no final state will show up. We call this case nonmonotonic finite.
In case (iii), where no automaton will ever reach a final state, this will be documented in the execution protocol such that the execution will not stop in any trace and no final state will show up. We call this case monotonic infinite.
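Given a finite prefix of an execution protocol, one can at least detect case (i), where the values stop changing. The helper below is a hypothetical sketch: its name and its fixed-point criterion are assumptions, and cases (ii) and (iii) cannot be decided from a finite prefix at all.

```python
def stabilizes(trace):
    """True if the protocol reaches a fixed point within the trace:
    some configuration repeats unchanged on every following step."""
    for i in range(len(trace) - 1):
        if all(c == trace[i] for c in trace[i + 1:]):
            return True
    return False

assert stabilizes([1, 0, 1, 1, 1])         # values freeze: fixed point
assert not stabilizes([1, 0, 1, 0, 1, 0])  # oscillation: no fixed point
```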
If one wants to keep the information about all participating automata separate, rather than unified into one 'big' automaton, one can define the following structure:
Def.: A Network of Connected Automata (NCA) is given as a structure of the following kind:
Def.: An Execution of a Network of Connected Automata (NCA) is given as a sequence of configurations where every configuration consists of two parts: a control part and a connection part (see above).
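One possible rendering of these two definitions as data structures, keeping the participating automata separated; all field names are assumptions of this sketch:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

# A source is either another automaton's output ('aut', name)
# or an external input ('ext', name).
Source = Tuple[str, str]

@dataclass
class NCA:
    """Network of Connected Automata: the automata plus their wiring."""
    funcs: Dict[str, Callable]      # one automaton function per name
    slots: Dict[str, List[Source]]  # connection part: input wiring

@dataclass
class Execution:
    """An execution: a sequence of configurations, each holding a
    control part (actual inputs) and a connection part (routed outputs)."""
    net: NCA
    configurations: List[dict] = field(default_factory=list)

# minimal usage example
net = NCA(funcs={'A': lambda i: i[0]},
          slots={'A': [('ext', 'S')]})
run0 = Execution(net=net)
```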
Gerd Doeben-Henisch 2010-03-03