Semiotic Agents as Memetic Systems

-- This chapter has to be rewritten because the context has changed! --

As stated in the introduction, the primary goal of this lecture is to describe how to build intelligent systems which can support humans in a sufficiently human-like manner. This includes the ability to communicate with humans in a natural language and to demonstrate the usual way of understanding such languages. Besides the frameworks for genetic evolution, for engineering, for computation, and for the experimental setup presented in the introduction, we have to add some requirements for sign usage and memory.

Some of the ideas for semiotic agents can already be found in Doeben-Henisch 2007 [67], Doeben-Henisch et al. 2009 [71], and Doeben-Henisch et al. 2011 [76]. What is new compared to these papers is, e.g., the combination of genetic evolution based on genetic information written as 'grammars' with the usage of classifier systems within such grammars as a starting point for a memory structure.
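To make the intended combination a bit more concrete, the following sketch shows one possible reading of such a 'genetic grammar' whose productions are classifiers, i.e. condition-action rules over ternary strings; the bit-string encoding and all identifiers are illustrative assumptions, not the final formalism developed in this lecture.

\begin{verbatim}
# Minimal sketch (hypothetical names): a classifier is a condition-action
# rule over ternary strings, and a 'grammar' of such classifiers serves as
# genetic information from which a memory structure can be grown.
import random

class Classifier:
    def __init__(self, condition, action, strength=1.0):
        self.condition = condition  # string over {'0','1','#'}; '#' matches anything
        self.action = action        # proposed action/message string
        self.strength = strength    # fitness-like value updated by learning

    def matches(self, message):
        return all(c == '#' or c == m for c, m in zip(self.condition, message))

def random_classifier(n_bits=6):
    """Generate one classifier as a production of a simple 'genetic grammar'."""
    condition = ''.join(random.choice('01#') for _ in range(n_bits))
    action = ''.join(random.choice('01') for _ in range(n_bits))
    return Classifier(condition, action)

# A genome written as a list of such productions, i.e. a starting point
# for the agent's memory structure.
genome = [random_classifier() for _ in range(20)]
matching = [c for c in genome if c.matches('010011')]
\end{verbatim}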

Again a concrete application scenario will be used to study this kind of system. As scenario we will assume maze learning as it has been used by E.C. Tolman in his famous experiments demonstrating the existence of cognitive maps in rats (cf. Tolman 1948 [322]). We will enhance this scenario stepwise by associating the different possible experiences of these mazes with expressions of a certain language $L_{i} \in L$. Thus a learning system is exposed in its world not only to 'ordinary objects' but additionally to special objects working as 'sign material', available for possible usage as signs having some meaning. For the definition of 'sign', 'meaning', 'sign usage', etc. the concept of C.S. Peirce (1839-1914), one of the founders of modern semiotics, will be used as it has been formally described in Doeben-Henisch 2007 [67].
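As a first approximation of this scenario, the following sketch models a T-maze in which cells can contain 'ordinary objects' as well as 'sign material', i.e. objects carrying expressions $L_{i} \in L$; the cell layout and all identifiers are illustrative assumptions and not Tolman's original setup.

\begin{verbatim}
# A minimal sketch of the Tolman-style maze scenario (all names are
# assumptions): cells can hold 'ordinary objects' (food, walls) or
# 'sign material', i.e. objects associated with expressions of a
# language L that the agent may learn to use as signs.
from dataclasses import dataclass, field

@dataclass
class Cell:
    objects: list = field(default_factory=list)        # ordinary objects, e.g. 'FOOD'
    sign_material: list = field(default_factory=list)  # expressions L_i in L, e.g. 'LEFT'

@dataclass
class Maze:
    grid: dict  # (x, y) -> Cell

    def percept(self, pos):
        """What the learning system is exposed to at a given position."""
        cell = self.grid.get(pos, Cell())
        return {'objects': cell.objects, 'signs': cell.sign_material}

# A T-maze fragment: the junction carries sign material whose meaning
# (food is to the left) the agent has to learn from experience.
maze = Maze(grid={
    (0, 0): Cell(),                        # start arm
    (0, 1): Cell(sign_material=['LEFT']),  # junction with sign material
    (-1, 1): Cell(objects=['FOOD']),       # left arm: goal
    (1, 1): Cell(),                        # right arm: empty
})
print(maze.percept((0, 1)))
\end{verbatim}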

Learning the right usage of language expressions serving as 'signs' is today also subsumed under the heading of language games. Famous are the talking heads experiments by Steels (cf. Steels 1995 [305], Steels 1990 [303]), which afterwards have triggered lots of new experiments (cf. Steels 2001 [306], Steels 2003 [307], Steels 2006 [308], Loula et al. 2007 [196], Belpaeme et al. 2009 [23], Baronchelli 2010 [20], Nolfi et al. 2010 [231]).
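To indicate how such language games can be operationalized, the following sketch implements the basic naming game in the spirit of the cited literature ('adopt on failure, align on success'); it is a minimal stand-in under these assumptions, not a reconstruction of the original talking heads implementation.

\begin{verbatim}
# Minimal naming-game sketch: agents negotiate a shared word for an object.
import random

def naming_game(n_agents=10, objects=('obj',), rounds=2000):
    inventories = [dict() for _ in range(n_agents)]  # agent -> {object: set of words}
    for _ in range(rounds):
        speaker, hearer = random.sample(range(n_agents), 2)
        obj = random.choice(objects)
        words = inventories[speaker].setdefault(obj, set())
        if not words:
            words.add('w%d' % random.randrange(10**6))  # speaker invents a new word
        word = random.choice(sorted(words))
        if word in inventories[hearer].get(obj, set()):
            # success: both agents align on the winning word
            inventories[speaker][obj] = {word}
            inventories[hearer][obj] = {word}
        else:
            # failure: the hearer adopts the word
            inventories[hearer].setdefault(obj, set()).add(word)
    return inventories

# After enough rounds the population typically converges on one word per object.
print(naming_game())
\end{verbatim}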

Closely associated with the Peircean concept of sign is the concept of consciousness, which is understood here as a kind of interface between the world of experience and the complexity of the brain and the body. Although the topic of 'consciousness' has a long tradition in philosophy, the discussion gained strong new momentum when Chalmers introduced in 1995 the labels 'hard' and 'easy' to classify scientific problems with regard to the phenomenon of 'consciousness' (cf. Chalmers 1995 [45], Chalmers 1997 [46]). For Chalmers the hard problem exists because we can note the brute fact that we are conscious of something. This kind of data is not reducible. To construct some mapping from third-person measurements to phenomenal data (first-person data) one has to elaborate these phenomenal data explicitly as a structure. The best-known methodology today for exploiting phenomenal data is demonstrated by phenomenology (cf. Chalmers 1997 [46]:413f, Varela 1996 [330]). To construct functions based on neuronal data - like e.g. the global workspace theory of Baars and others (cf. Baars 1993 [12], Baars 1996 [13], Baars et al. 2007 [15]) - will not suffice to explain the hard problem according to Chalmers.

Knowing this background and accepting the behavioral sciences, Doeben-Henisch has worked out a conceptual framework which is intended to include the hard problem. In Doeben-Henisch 1995 [66] and Doeben-Henisch 2007 [67] one can distinguish at least three fundamental kinds of data which are not reducible to each other: behavioral data (SR), neurophysiological data (NN), as well as phenomenological data (PH). Each of these data sets allows for formal model (theory) building: $T_{SR}$, $T_{NN}$, and $T_{PH}$. Done the right way, one can then map the structures of these models onto each other. This is not to be misunderstood as a kind of reductionism; rather it can show whether there could be some structural coupling between structures. Thus if one assumes that events within the phenomenal consciousness$_{PH}$ are 'caused' by certain neural events, then it would be important to identify within a formal neural theory $T_{NN}$ those parts which can be 'correlated' with phenomenal events and which thereby could be labeled neural correlates of consciousness, consciousness$_{NN}$.
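The following toy fragment only illustrates what such a structural coupling amounts to formally: the theories $T_{PH}$ and $T_{NN}$ are reduced to their event sets, and a correlation relation maps each phenomenal event to the (possibly many) neural events accompanying it; all event names are invented for the example, and no reduction is claimed, only a mapping between structures.

\begin{verbatim}
# Illustrative sketch (all event names are assumptions) of a correlation
# between a phenomenal theory T_PH and a neural theory T_NN.
T_PH = {'perceive_red', 'feel_hunger'}                          # phenomenal events
T_NN = {'v4_pattern_17', 'hypothalamus_state_3', 'pfc_sync_a'}  # neural events

# Neural correlates of consciousness as a (possibly one-to-many) relation.
correlates = {
    'perceive_red': {'v4_pattern_17', 'pfc_sync_a'},
    'feel_hunger': {'hypothalamus_state_3'},
}

def consciousness_NN(phenomenal_event):
    """Return the neural events correlated with a phenomenal event, if any."""
    return correlates.get(phenomenal_event, set())

assert consciousness_NN('perceive_red') <= T_NN
\end{verbatim}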

There is still a very broad ongoing discussion about the relationship between consciousness and neural events (cf. e.g. Journal of Consciousness Studies 1993 []). While e.g. Revonsuo describes the relationship between consciousness and the brain in a way which supports a reduction of consciousness to physiological states ('naturalizing phenomenology') (Revonsuo 2000 [251]), MacLennan argues against reductionism. But even MacLennan thinks he has to assume 'below' the level of phenomena some 'protophenomena' which cannot be perceived 'consciously' but which can be 'postulated' to allow a mapping onto elementary neural events. From the theoretical point of view of a formal mapping this is not necessary. It is even implausible, because it could be - and in most cases this seems to be highly probable - that a phenomenon which appears as a phenomenologically 'simple' phenomenon corresponds to highly complex neural processes distributed over many areas of the brain, showing complex timing patterns too. Thus if we assume that evolutionary progress has to be identified with this kind of 'abstraction', mapping complex neural processes onto 'simple' phenomenological phenomena, then the assumption of a reduction would be misleading; it is not a mere one-to-one mapping.

Furthermore, if we distinguish formally between a phenomenological theory $T_{PH}$ and a neural theory $T_{NN}$, then we can replace the neural theory $T_{NN}$ by any other theory $T_{X}$ which 'does the job'. Thus if we can construct a formal model $T_{SW}$ of a software agent using, e.g., classifiers which are different from neurons, and this model generates the 'same' phenomena as a neural theory $T_{NN}$, then this will not change our theory of consciousness as part of a learning semiotic system. It only changes the concrete machinery 'behind' consciousness$_{PH}$ which is responsible for the computation. Within the paradigm of conscious learning semiotic systems (CLSS) we will therefore distinguish only two parts: the conscious part and the non-conscious part. The latter is responsible for the needed computations, and one main structure will be a memory structure. These structures will only be defined by the observable behavior, as in psychology, which distinguishes e.g. between a short-term and a long-term memory, and within the long-term memory e.g. between an episodic as well as a procedural memory. Although these existing models will be discussed, we will not use these structures but new ones which can be understood as a further development of classifier systems using multiple levels of abstraction.
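The following skeleton illustrates, under these assumptions, the two-part CLSS architecture sketched above: a non-conscious part holding a classifier-based memory organized in several levels of abstraction, and a conscious part containing only what surfaces from the computation. All class and method names are hypothetical; the classifiers are of the kind sketched earlier in this section.

\begin{verbatim}
# Minimal CLSS skeleton (hypothetical interfaces): conscious part plus
# non-conscious part with a multi-level classifier memory.
class NonConsciousPart:
    def __init__(self, n_levels=3):
        # level 0: raw sensorimotor classifiers; higher levels abstract over lower ones
        self.memory_levels = [[] for _ in range(n_levels)]

    def compute(self, percept):
        """Match the percept against all levels and return what becomes conscious."""
        matched = [c for level in self.memory_levels
                   for c in level if c.matches(percept)]
        return matched[:1]  # only a small part of the processing surfaces

class ConsciousPart:
    def __init__(self):
        self.phenomena = []  # the current contents of consciousness_PH

    def update(self, surfaced):
        self.phenomena = surfaced

class CLSSAgent:
    def __init__(self):
        self.non_conscious = NonConsciousPart()
        self.conscious = ConsciousPart()

    def step(self, percept):
        self.conscious.update(self.non_conscious.compute(percept))
        return self.conscious.phenomena

agent = CLSSAgent()
print(agent.step('010011'))  # empty memory -> nothing becomes conscious yet
\end{verbatim}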

Gerd Doeben-Henisch 2012-03-31