Necessary Memory

As a next step we have to assume a memory capable of storing the important aspects of the consciousness. A first simple assumption is a storage which stores the whole 'content' of the consciousness, i.e. a complete C-triplet, in the memory. This would result in a set of C-triplets. Such a set could be interpreted as a set of classifiers to which one could apply genetic operators. But applying the genetic operators directly would not resolve the problem of disseminating the feedback values. Instead one has to look for a formalism which allows one to realize the concept of successful behavior.
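To make this first assumption concrete, the following Python sketch (a hypothetical illustration, not part of the original proposal; the names CTriplet and memory are assumptions) represents a C-triplet as a tuple $(\sigma, \delta, \xi)$ and the memory simply as a set of such triplets:

\begin{verbatim}
from typing import NamedTuple, Set

class CTriplet(NamedTuple):
    """One stored 'content' of the consciousness: perception, drive, action."""
    sigma: str    # input (perception) string
    delta: float  # drive value
    xi: str       # output (action) string

# An 'ordinary' memory: an unconnected set of C-triplets, usable as classifiers.
memory: Set[CTriplet] = set()
memory.add(CTriplet(sigma="OOO", delta=0.8, xi="MOVE"))
memory.add(CTriplet(sigma="OXO", delta=0.2, xi="EAT"))
\end{verbatim}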

Discussing this topic Marcus Pfaff proposed a vector space of appropriate dimension where the states of the system are 'points' in this space and the vectors leading from a start state to a goal state represent an 'optimal' path which can have 'costs' as well as 'earnings'. This was accepted by the group as a good proposal. It was then pointed out that this concept of a vector space can be translated directly into an M-graph (short for memory graph), where a minimal M-graph has the format $\langle \sigma_{q}, \delta_{q}, \xi_{q} \rangle \longmapsto \langle \sigma_{q'}, \delta_{q'}, \xi_{q'} \rangle$ with $q'$ as the state which follows state $q$ in the overall agent process $\pi$. Therefore one can interpret a minimal M-graph as follows: while an agent in state $q$ perceives the input string $\sigma_{q}$ combined with the drive value $\delta_{q}$, it performs the action $\xi_{q}$. In the following state $q'$ the agent perceives the input string $\sigma_{q'}$ combined with the drive value $\delta_{q'}$ and performs the action $\xi_{q'}$. This encodes the additional information that the perception $\sigma_{q'}$ and the drive value $\delta_{q'}$ can be a consequence of the preceding action $\xi_{q}$. With this encoding one has a direct representation of all paths realized in the past, which allows for several additional labels and computations. Moreover it is possible to change such a graph dynamically by 'forgetting' a state or by 'extending' the M-graph with new states.
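A minimal M-graph can thus be read as a pair of consecutive states, and a complete M-graph collects all transitions observed so far. The following Python sketch is a hypothetical illustration of the 'extending' and 'forgetting' operations mentioned above; the class and method names are assumptions.

\begin{verbatim}
from typing import Dict, List, Tuple

State = Tuple[str, float, str]  # (sigma, delta, xi) of one state q

class MGraph:
    """Memory graph: edges record that state q' followed state q in the agent process."""
    def __init__(self) -> None:
        self.edges: Dict[State, List[State]] = {}

    def extend(self, q: State, q_next: State) -> None:
        """Extend the M-graph by a new minimal M-graph <q> |--> <q'>."""
        self.edges.setdefault(q, []).append(q_next)

    def forget(self, q: State) -> None:
        """Forget a state: remove it and all edges touching it."""
        self.edges.pop(q, None)
        for successors in self.edges.values():
            while q in successors:
                successors.remove(q)
\end{verbatim}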

Figure 4.10: Outline of Agent2 with Artificial Consciousness (in the picture 'E' is used instead of $ \delta $)
\includegraphics[width=3.0in]{agent2_consciousnes_4nov2010_3.0in.eps}

The group also discussed the idea of meta concepts which can be generated 'about' an M-graph. Given a path with recurring minimal M-graphs, for instance, it could be interesting to 'condense' these repetitions into a kind of 'M-graph pattern' which allows a kind of 'abbreviation' or 'abstraction'.
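One simple way to read this 'condensing' is a run-length compression of a path: consecutive repetitions of the same minimal M-graph are replaced by a single pattern plus a repetition count. The sketch below is a hypothetical illustration of this idea; the function name and the counting scheme are assumptions.

\begin{verbatim}
from typing import List, Tuple

State = Tuple[str, float, str]          # (sigma, delta, xi)
MinimalMGraph = Tuple[State, State]     # <q> |--> <q'>

def condense(path: List[MinimalMGraph]) -> List[Tuple[MinimalMGraph, int]]:
    """Collapse consecutive repetitions of a minimal M-graph into (pattern, count)."""
    patterns: List[Tuple[MinimalMGraph, int]] = []
    for edge in path:
        if patterns and patterns[-1][0] == edge:
            patterns[-1] = (edge, patterns[-1][1] + 1)
        else:
            patterns.append((edge, 1))
    return patterns
\end{verbatim}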

Thus we can outline the following hypothetical structure of the agent2 system with an artificial consciousness:


$\displaystyle \begin{array}{lcll}
AGENT2(x) & \textit{iff} & x = \langle IS, AC, \Sigma^{*}, \Xi^{*}, MEM\rangle & \quad(4.112)\\
IS & := & \text{Internal States} & \quad(4.113)\\
AC & \subseteq & IS \;\; \text{(Artificial Consciousness)} & \quad(4.114)\\
\Sigma^{*} & \subseteq & IS; \; \text{Set of input strings } \sigma & \quad(4.115)\\
\Xi^{*} & \subseteq & IS; \; \text{Set of output strings } \xi & \quad(4.116)\\
MEM & \in & IS; \; \text{Memory Structure} & \quad(4.117)
\end{array}$
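The formal structure (4.112)-(4.117) could be mirrored in code roughly as follows. This is a hypothetical sketch only; the class name, the field names, and the use of string sets are assumptions, and the memory slot is left open for a structure such as the M-graph sketched above.

\begin{verbatim}
from dataclasses import dataclass, field
from typing import Any, Set

@dataclass
class Agent2:
    """AGENT2(x) iff x = <IS, AC, Sigma*, Xi*, MEM>, cf. (4.112)-(4.117)."""
    IS: Set[str] = field(default_factory=set)     # internal states
    AC: Set[str] = field(default_factory=set)     # artificial consciousness, subset of IS
    Sigma: Set[str] = field(default_factory=set)  # input strings sigma, subset of IS
    Xi: Set[str] = field(default_factory=set)     # output strings xi, subset of IS
    MEM: Any = None                               # memory structure, e.g. an M-graph
\end{verbatim}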

Figure 4.11: Memory as knowledge optimization
\includegraphics[width=3.5in]{memory_simpleGA_complexGA_3.5in.eps}

But before diving into the details of a memory system it is helpful to elaborate a bit more on the implicit optimization structure of knowledge which shows up in combination with the 'driving part' of the agent (cf. figure 4.11).

As a point of reference we use a memory organized as an 'ordinary' classifier system ($MEM_{0}$). In such an ordinary memory the memory elements (M-elements) are not connected. Nevertheless they can receive a feedback value as 'isolated points' in the space, and they are selected by 'individual' similarity. Applying genetic operators is possible: crossover can generate new combinations of perceptions and actions on a local basis, and mutation can introduce new perceptions or new actions, even ones which do not make sense. Because these genetic operators do not establish useful connections between M-elements, they can improve the set of M-elements as a whole only very weakly (see the sketch below).
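The following hypothetical sketch illustrates $MEM_{0}$-style selection by individual similarity together with local crossover and mutation on unconnected M-elements. The positional similarity measure, the alphabet, and all names are assumptions, not taken from the text.

\begin{verbatim}
import random
from typing import List, Tuple

MElement = Tuple[str, str]  # (perception string, action string), unconnected

def similarity(a: str, b: str) -> int:
    """Count matching positions of two equally long strings."""
    return sum(x == y for x, y in zip(a, b))

def select(memory: List[MElement], perception: str) -> MElement:
    """MEM_0 selection: pick the M-element whose perception part is most similar."""
    return max(memory, key=lambda m: similarity(m[0], perception))

def crossover(a: MElement, b: MElement) -> MElement:
    """Local recombination of the perception and action parts of two M-elements."""
    return (a[0], b[1])

def mutate(m: MElement, alphabet: str = "OX") -> MElement:
    """Change one random position of the perception part; may produce nonsense."""
    i = random.randrange(len(m[0]))
    return (m[0][:i] + random.choice(alphabet) + m[0][i + 1:], m[1])
\end{verbatim}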

The new kind of memory ($MEM_{1}$) establishes connections between the M-elements, connections which encode useful spatial information. Additionally, the difference between the drive states is also encoded here. When an M-element has to be selected in $MEM_{1}$, one can use similarity as in the case of $MEM_{0}$, but additionally one can exploit possible connections leading from high drive values to lower drive values. Along such connections one can also take the possible costs of the different paths into account. This improves the selection operation considerably. Another extension is the possibility to establish new connections by 'playing around' while not in a drive state. Playing around explores new possibilities and functions like mutation in a classical genetic algorithm. Yet another extension could be, analogous to crossover in a classical GA, to recombine existing paths into new connections. But this includes the possibility of generating a path which does not work in the real environment. To prevent a possible confusion of the memory one has to require that the memory has, on its meta level, a kind of virtual model in which one can generate new combinations, but only as hypothetical constructs. One can use these hypothetical constructs for actions, but one has to be aware that they could be non-real. If a hypothetical construct is confirmed, then its real counterpart can be accepted as a 'real extension'.
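A minimal way to exploit the connections of $MEM_{1}$ is a cheapest-path search from the current (high-drive) state towards any state with a sufficiently low drive value, using the costs attached to the connections. The sketch below uses a Dijkstra-style search and is a hypothetical illustration; the cost model, the drive threshold, and all names are assumptions.

\begin{verbatim}
import heapq
import itertools
from typing import Dict, List, Tuple

State = Tuple[str, float, str]                   # (sigma, delta, xi)
Edges = Dict[State, List[Tuple[State, float]]]   # successor states with edge costs

def cheapest_path(edges: Edges, start: State, low_drive: float = 0.1) -> List[State]:
    """Follow connections from a high-drive start state towards any state whose
    drive value delta lies below 'low_drive', minimizing the summed edge costs."""
    tie = itertools.count()          # tie-breaker so the heap never compares paths
    frontier = [(0.0, next(tie), [start])]
    visited = set()
    while frontier:
        cost, _, path = heapq.heappop(frontier)
        state = path[-1]
        if state[1] <= low_drive:    # drive reduced far enough: goal reached
            return path
        if state in visited:
            continue
        visited.add(state)
        for succ, edge_cost in edges.get(state, []):
            heapq.heappush(frontier, (cost + edge_cost, next(tie), path + [succ]))
    return []                        # no known path; 'playing around' may create one
\end{verbatim}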

From these considerations one can get a first idea of the implicit power of a memory system, provided it has the 'right' coding.

Gerd Doeben-Henisch 2012-03-31