Consciousness

(A first sketch)

According to the papers [67] and [70], and especially [76], we need agents with consciousness.

Figure 6.1 shows a selection of scientific disciplines that are involved in the subject of consciousness: the Neurosciences for the neural structures of the brain, Physiology and Anatomy for the body, evolutionary Biology for the population, Physics and Chemistry for the environment, Psychology and Sociology for observable behavior, and Philosophy for the structure of subjective experience, to mention only a few. All of these are important and necessary, but their views are not unified. Attempts within the Philosophy of Science during the 20th century to elaborate a unifying theoretical framework did not succeed.

Figure 6.1: A Subset of Relevant Disciplines with Regard to Consciousness
\includegraphics[width=4.5in]{consciousness_factors_disciplins_3-5inch.eps}

In this chapter the philosophical view shall be investigated with regard to the possibilities of embedding it in an artificial learning agent. For this we start with the general framework given in the introduction and then define a formal model of consciousness within this framework.

As basic elements of the framework we have assumed an environment [E], understood as a collection of tasks which can be solved in this environment; some agents [A], which can belong to different types; an interface [$ \iota$], which realizes a mapping from environmental states into the agent as well as a mapping from agent states back to the environment; and finally some experiments [$ \mu$], which define procedures for initializing agents to solve certain tasks and for logging all important parameters during the behavior sequence.

The agents themselves have been further divided into an Agent Base [AB] and an Agent-System Function [$ \gamma$]. Formally, with some simplifications, this has been written as a definition of an experimental framework [EF]:


\begin{align}
EF(x) &\iff x = \langle E, A, \iota, \mu\rangle \tag{6.1}\\
\iota &: E \leftrightarrow A \tag{6.2}\\
\mu &: E \times \iota \times A \longmapsto LOG \tag{6.3}\\
A(x) &\iff x = \langle AB, \gamma\rangle \tag{6.4}
\end{align}

with $ LOG$ as a set of values indicating certain performances. Within the interface one can assume, as a general principle, a mapping from the environment to the agent, $ ainp$, and a mapping from the agent back to the environment, $ aout$. Furthermore, one can distinguish the usual input $ ainp$ from a more abstract input $ fit$, called the 'fitness function'.


\begin{align}
\iota &= (ainp \cup fit) \oplus aout \tag{6.5}\\
ainp &: E \times A \longmapsto \Sigma^{*} \tag{6.6}\\
fit &: E \times A \longmapsto F \tag{6.7}\\
aout &: A \rightarrow \Xi^{*} \tag{6.8}
\end{align}

$ \Sigma$ and $ \Xi$ are special alphabets used to encode input and output messages, and $ F$ is a set of fitness values. What has not yet been described here is the internal structure of the agent(s).
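To make the formal definitions more tangible, they can be sketched in code. The following Python fragment is only a minimal, illustrative model under assumptions introduced here: the names `Environment`, `Agent`, and the concrete bodies of `ainp`, `fit`, `aout`, and `mu` are hypothetical placeholders, not part of the formal framework; only the signatures mirror equations (6.1)-(6.8).

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# Sketch of EF = <E, A, iota, mu> (Eq. 6.1) with illustrative names.
# Strings over Sigma and Xi stand in for the input/output alphabets.

@dataclass
class Environment:
    """E: a collection of tasks solvable in this environment."""
    tasks: List[str]
    state: str = ""

@dataclass
class Agent:
    """A = <AB, gamma> (Eq. 6.4): an agent base plus a system function."""
    base: Dict                                        # AB: internal structures
    gamma: Callable[[Dict, str], Tuple[Dict, str]]    # gamma: (AB, input) -> (AB', output)

def ainp(env: Environment, agent: Agent) -> str:
    """ainp: E x A -> Sigma* (Eq. 6.6); here simply the visible state."""
    return env.state

def fit(env: Environment, agent: Agent) -> float:
    """fit: E x A -> F (Eq. 6.7); a placeholder fitness value."""
    return float(len(agent.base.get("history", [])))

def aout(agent: Agent, response: str, env: Environment) -> None:
    """aout: A -> Xi* (Eq. 6.8); the agent's output acts on E."""
    env.state = response

def mu(env: Environment, agent: Agent, steps: int) -> List[Tuple[str, str, float]]:
    """mu: E x iota x A -> LOG (Eq. 6.3); run an experiment, log performances."""
    log = []
    for _ in range(steps):
        x = ainp(env, agent)                      # environment -> agent
        agent.base, y = agent.gamma(agent.base, x)  # apply system function
        aout(agent, y, env)                       # agent -> environment
        log.append((x, y, fit(env, agent)))       # one LOG entry per step
    return log
```

As a usage example, a trivial "echo" agent whose $ \gamma$ appends a marker to its input can be run for a few steps through `mu`, yielding a LOG of (input, output, fitness) triples; any non-trivial Agent-System Function can be substituted for `gamma` without changing the framework code.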



Gerd Doeben-Henisch 2012-03-31