The Term 'Consciousness'

Figure 2.6: Brain functional architecture and consciousness, Working Hypothesis

As stated in the preface (cf. 1), the term 'consciousness' is not one of our key terms. Nevertheless the term 'consciousness' is widely used in many contexts. Therefore we give here a brief sketch of how a 'technical version' of the term 'consciousness' could be established within our theory of intelligent evolutionary semiotic systems. The following working hypothesis seems to have some plausibility with regard to the known facts [2.7]:

  1. The most general functional architecture of the brain seems to be a flow of information from the 'borders' of the brain into more and more complex levels - organized in areas - as well as laterally 'between' different areas and 'back' from more advanced levels to lower ones (cf. figure 2.6). The 'upward' flow realizes abstractions and generalizations, the flow 'between' areas realizes associations, and the 'down' flow realizes feedback, allowing control loops.

  2. Consciousness [C] is then a coordinated set of areas which 'represent' some subset of the active states. The 'content' of consciousness can change over time. Those states of the system which can become conscious but are currently not in the set of conscious states constitute the sub-consciousness [SC]. Those states of the system which can never become 'conscious' constitute the un-consciousness [UC].

  3. It seems that the psychological counterpart of the philosophical concept of consciousness - as well as of these assumed special sets of conscious states - is the so-called working memory (or 'short-term memory' (STM)), which coordinates the sensory input with the available knowledge in the long-term memory (LTM).

  4. We know today that the working memory is very limited, and that the long-term memory can help to overcome these limits by creating ever more advanced encodings: one 'unit' of the working memory is then not a 'simple' piece of information but a 'link' to a complex structure representing the 'real' information. We therefore assume here that the general function of the long-term memory is the dynamic construction of increasingly complex structures ('abstractions', 'generalizations', 'categories', 'associated structures', 'symbolic structures', etc.), so that the limited scope of the working memory can 'encode' the stream of sensory data into implicit structures.

  5. In a certain sense one can compare the brain with a state machine whose 'working memory' is the current state, which can be triggered by several sources to change according to structural and probabilistic information. The whole machinery works like a simulator, updating the current state regularly.
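
The partition described in point 2 - conscious [C], sub-conscious [SC], and un-conscious [UC] states - can be illustrated by a minimal Python sketch. All names here are hypothetical illustrations, not constructs from the hypothesis itself:

```python
from enum import Enum

class Access(Enum):
    """Whether a state can, in principle, become conscious."""
    REPORTABLE = 1   # may enter the conscious set [C] or rest in [SC]
    NEVER = 2        # belongs to the un-consciousness [UC]

class System:
    def __init__(self, states):
        self.states = states      # dict: state name -> Access
        self.conscious = set()    # currently conscious subset [C]

    def bring_to_consciousness(self, name):
        if self.states[name] is Access.NEVER:
            raise ValueError(f"{name} can never become conscious [UC]")
        self.conscious.add(name)

    def sub_conscious(self):
        # [SC]: states that could become conscious but currently are not
        return {n for n, a in self.states.items()
                if a is Access.REPORTABLE and n not in self.conscious}

    def un_conscious(self):
        # [UC]: states that can never become conscious
        return {n for n, a in self.states.items() if a is Access.NEVER}
```

For example, a system with the states 'word', 'tune', and 'heartbeat' - where only 'word' is currently conscious - yields [SC] = {'tune'} and [UC] = {'heartbeat'}, and the 'content' of [C] can change over time simply by moving reportable states in and out of the conscious set.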

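Points 4 and 5 together suggest a simple state-machine sketch: the working memory is the current state, with a small fixed capacity, and a single unit may be a 'link' into a complex long-term-memory structure ('chunking'). The following Python toy illustrates this; the capacity value and all names are assumptions for illustration only:

```python
WM_CAPACITY = 4  # assumed toy value: the working memory is very limited

class Brain:
    """Toy state machine: the working memory is the current state."""
    def __init__(self):
        self.ltm = {}   # long-term memory: chunk name -> complex structure
        self.wm = []    # working memory: at most WM_CAPACITY units

    def encode_chunk(self, name, structure):
        # LTM dynamically builds a complex structure; WM later stores only the link
        self.ltm[name] = structure

    def step(self, sensory_input):
        """One simulator update of the current state."""
        unit = sensory_input
        if sensory_input in self.ltm:
            unit = ("link", sensory_input)  # one WM unit = link to the 'real' information
        self.wm.append(unit)
        while len(self.wm) > WM_CAPACITY:   # limited scope: the oldest unit is displaced
            self.wm.pop(0)
        return list(self.wm)
```

A usage sketch: after encoding a chunk 'phone_number' in the LTM and feeding the inputs 'a', 'b', 'c', 'd', 'phone_number' into the simulator, the working memory holds only four units, the earliest input has been displaced, and the whole phone number occupies a single slot via its link.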
Remark: The main question is whether there exists a problem we want to solve for which we need the concept of 'consciousness'. One possible problem I see so far is the interface between language models and the brain. To map a language model onto the brain requires a model of the brain which can serve as a counterpart for the language. In the language I have, e.g., word tokens, word types, and the meanings of word types. Whatever the machinery of the brain turns out to be, a brain theory must offer brain constructs which can serve as counterparts. At present it cannot be seen what these counterparts could be, because nobody has anything which we could call a brain theory in the proper sense. If we had such a brain theory, then - I would guess - at least all those counterparts of language which we need for language-based communication would be candidates for being constituents of consciousness. Consciousness would then function as an 'interface' for interactions with the brain without dealing with the details of the machinery.

Gerd Doeben-Henisch 2013-01-14