!!! Draft Version !!! Please do not cite !!! Only for internal communication !!!


Reconstructing Human Intelligence within Computational Sciences -

A Theory of Science Point of View

Gerd Doeben-Henisch

{gerd_doeben-henisch@ieee.org}



1. Bootstrapping


The term computational science is fairly new and broad; it focuses on the aspect of computation and is largely unspecific with regard to possible domains of application. Thus, speaking of "Reconstructing Human Intelligence" introduces a specific topic that, in combination with computation, opens up a wide spectrum of exciting options. Dealing with the term human intelligence in connection with computation is therefore an important strategic decision with far-reaching consequences.


Engineers tend to focus mainly on technical solutions without bothering too much about whether these correspond to certain pre-scientific phenomena called intelligence. They exploit the given theoretical and technological possibilities and try to solve a problem by constructing technical solutions; intelligence as such is typically not a problem for them, it is a pattern of phenomena occurring in our daily experience. Engineers can construct artificial devices which show a behaviour that arguably looks like intelligent behaviour.


Opposed to this, scientists are mostly interested in reconstructing phenomena, in understanding them, and in setting up models and theories for explanations. Whether these theories can solve some problems in the future is not the primary motivation for doing science. Elaborated scientific theories can eventually describe sets of properties and regularities that are claimed to be part of what is pre-scientifically called intelligence.


This essay deals with the scientific reconstruction of the pre-scientific phenomenon of intelligence and compares this view with the engineering perspective. In the context of elaborating scientific models of human intelligence, the additional postulate of computability introduces a strong heuristic component. It is not at all a trivial question whether computable models can model the empirical phenomenon of intelligence. If computable models were in principle too weak, one would have to consider the question: which properties of the phenomenon of intelligence do computable models not cover?


Furthermore, the focus on the subject of human intelligence does not necessarily exclude non-human intelligence. Non-human intelligence shares properties with human intelligence and can contribute important insights into the nature of intelligence as such.


Within this general reconstruction of human intelligence, special attention is also given to the computational semiotics approach within computational science, which seems to be at the centre of intelligence.


These investigations and reflections are carried out from the viewpoint of the theory of science, which is different from the philosophy of science. Theory of science provides a framework which is broad enough to cover all the relevant empirical disciplines, but at the same time specific enough to be usable for engineering.


An in-depth development of this topic is beyond the scope of this chapter. Therefore, this chapter should be seen as an essay, trying to give a first outline of the idea of an integrated approach to scientifically motivated computational intelligence.


This essay starts with a historical overview of the multitude of disciplines which have all contributed some aspects to the overall phenomenon of intelligence. As a result of this overview the author proposes an enlarged view of the subject of intelligence, including the whole evolutionary perspective known today. In this proposal he presents two problems which appear as really deep challenges for the upcoming generations. Asking for the appropriate framework for a systematic treatment of these challenges, he examines the methods of modern software engineering and of theory of science. His diagnosis is that software engineering in its present state has severe deficiencies and is not yet capable of a full treatment of the problem under investigation. Within the scope of theory of science he depicts a possible pathway to follow and shows that this pathway is computationally tractable. Coming back to the problem under investigation, the author explains why the phenomenon of communication is a genuine and highly important part of an adaptation-based intelligence. He then introduces the discipline of computational semiotics as a main discipline for dealing with the phenomenon of communication as an inner part of evolution. He concludes the essay with reflections about the challenge of symbolic knowledge for human intelligence as well as for computational intelligence, especially if computational intelligence shall assist human intelligence in the course of life.


2. A Historical Account showing the Fuzziness of Intelligence


At the time of this writing, the term intelligence is highly loaded with different associated meanings, which are deeply rooted in history and spread over a multitude of disciplines. Before considering a more systematic approach to the fuzzy phenomenon of intelligence, the current main aspects of the meaning of intelligence are investigated. In his excellent book The Evolution of Communication (1996), the ethologist Marc D. Hauser gives a strong warning about the scientific status of the term intelligence, stating: "... no one has managed to delineate the key symptoms of intelligence in humans, let alone operationalize the concept for cross-species comparisons ..." (p.112). A similar warning is given by the authors of the German book Die Entdeckung der Intelligenz – oder Können Ameisen denken? (The Discovery of Intelligence – or Can Ants Think?) (1998), Holk Cruse, Jeffrey Dean, and Helge Ritter. They stress the point that all the factors involved in the multifaceted phenomenon of intelligence are still not really known, and that one should therefore not start with a too narrow definition (cf. p.21). Very similar are the statements of Rolf Pfeifer and Christian Scheier in Understanding Intelligence (1999) (p.6 ff). But how can we start then?


Perhaps a good starting point is a map of the main disciplines that are directly involved in the investigation of the phenomenon of intelligence to date. Figure 1 gives a coarse outline, revealing that there are many layers of complexity involved in the phenomenon of intelligence.


Historically, philosophy was the first discipline to try to understand intelligence as part of human behaviour and self-experience. This was mainly a search for structures while digging in the realm of intuitive self-perception. It continues until today, only slightly improved by a sloppy usage of formal models to structure subjective experience. When Wundt founded his famous psychological laboratory in Leipzig (Germany) in 1879, this marked the beginning of a more experimentally based psychology, which led to the movement of behaviouristic psychology during the first half of the 20th century, mainly in the USA (e.g. J.B. Watson, E.L. Thorndike, I.P. Pavlov, E.R. Guthrie, C.L. Hull, B.F. Skinner; for an extensive overview see [Bower/Hilgard 1981, vol.1]). By restricting the allowed facts to observable behaviour and observable properties of the environment, behaviourism tried to reconstruct biological systems through the interplay of stimuli and responses. As more results became available, the limits of this approach were exposed: explaining behaviour on the basis of stimuli and responses alone was, on account of the inherent complexity, not really feasible (cf. e.g. Chomsky's review (1959) of Skinner's Verbal Behavior). At the same time it became increasingly clear that overtly observable behaviour is rooted in the physiological and especially neurological machinery.




Figure 1: Network of disciplines involved in the research of intelligence


When the movement of behaviourism was slowly declining, a new, more differentiated movement began to rise from the middle of the twentieth century: modern ethology (from Greek ethos: custom, character). N. Tinbergen and K. Lorenz are known as the founders of this movement, which is based on the work of several forerunners. Like the behaviourists they used behaviour, but they tried to look at the whole picture of an organism: the observable behaviour; the underlying neuronal mechanisms; the growth process; endogenous and learned behaviour; structural similarities between individuals and species; etc. (for an overview of this period see Irenäus Eibl-Eibesfeldt [1980, 6th rev. ed.] and Klaus Immelmann et al. [1982]). Human ethology was part of general ethology, thus demonstrating both the continuity of human behaviour with non-human behaviour and the individual characteristics of human behaviour that distinguish it from all other species. It seems to be especially this marriage between the behavioural sciences and neurobiology which gives the first real insights into the nature of intelligence, in the sense that those causal mechanisms became known which are responsible for the behaviour of living systems exposed to specific environments.


This whole behavioural approach is framed by a general developmental view stemming from sciences like geology, palaeontology, physics, cosmology, chemistry, and biology. In this developmental framework (cf. Peter J. Bowler (1989), Evolution), bodies with their organs and neuronal systems have to be understood as phenotypes, which are the products of genotypes; and genotypes together with phenotypes have undergone dramatic changes in the course of time. Thus, a certain kind of behaviour revealing a certain type of intelligence has to be seen as a dynamic phenomenon, bound to some window of time within a larger change process influenced by certain types of environments. Knowing this implies that asking for intelligence can perhaps mean asking for that structure which is the implicit working principle within all these changes and behind all kinds of genotypes and phenotypes.


At the top of the complexity hierarchy, coupled with the phenomenon of intelligence, are the most complex phenomena like languages, communication, populations, institutions, arts, economic behaviour, cultures, and science itself. The social-oriented sciences are numerous as well: Sociology, anthropology, linguistics, history, political sciences, economics, etc.


Most of the disciplines mentioned above would not have been possible without at least logic and mathematics. The dynamic character of many of the phenomena under investigation, as well as their inherent complexity, can only be handled by a representation mechanism elaborate enough for the task; this is clearly modern logic and mathematics. Since the middle of the 20th century these disciplines have been extended by modern computer science.


Rooted for centuries in Aristotle's logic, the rise of modern logic began only at the end of the 19th century with Boole, Frege, Whitehead and Russell, and was closely related to the fundamental discussions in metamathematics about the foundations and limits of mathematics as such (Cantor, Hilbert, Brouwer, Fraenkel, Gödel et al.) (for a good overview cf. W. Kneale/M. Kneale, 1986, The Development of Logic). And it was this lengthy and heated discussion about the decidability of formal theories which led not only to the limiting results of Gödel in 1931, but also to the birth of the modern concept of computation by Turing in 1936/7, with his concept of what others came to call the Turing machine. Since then, all the different attempts to find alternative concepts have led to only one result: every known formalism claiming to be an example of a computable formalism could be proved to be either a subcase of the Turing machine or equivalent to it (for an overview see, besides W. Kneale/M. Kneale: M. Davis (ed.), The Undecidable. Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions (1965), and Robin Gandy, "The Confluence of Ideas in 1936" (1988)).


Although Turing himself was connected to the development of the earliest computing machines in Great Britain (cf. Andrew Hodges [1983, 2nd ed. 1994], Alan Turing: The Enigma), it was only with the advent of the transistor in 1947 and then of the first integrated microprocessors in 1971 that the practical usage of affordable universal programmable machines started.


With the availability of a commonly accepted concept of computation it became possible to define other important concepts like decidability and complexity on the basis of computing processes, and to relate further concepts like formal language and formal grammar to computation.


One special contribution of computer science to the field of intelligence was the rise of the artificial intelligence movement within computer science. Starting in the 1960s with research in automated reasoning leading to computational logic (cf. Jörg Siekmann, Graham Wrightson (eds.) [1983], Automation of Reasoning), the paradigm of expert systems gained attention during the 1980s, mostly based on concepts of computational logic (cf. Randall Davis/Douglas B. Lenat [1982], Knowledge-Based Systems in Artificial Intelligence; Elaine Rich [1983], Artificial Intelligence; Nils J. Nilsson [1982], Principles of Artificial Intelligence); that kind of artificial intelligence (AI) also became known as the so-called symbolic approach to intelligence. The inherent problems of the symbolic approach at that time (e.g. difficulties with fuzzy concepts, adaptation, and robustness) led to an alternative paradigm called parallel distributed processing (cf. David E. Rumelhart/James L. McClelland et al. [1986], Parallel Distributed Processing. Explorations in the Microstructure of Cognition; Stephen Grossberg [1982], Studies of Mind and Brain). This approach tried to set up models which copied to some extent the structure of biological neuronal networks. Analogously to the concepts of automaton and formal language, which are formally equivalent subcases of the general concept of computation, the paradigms of symbolic and non-symbolic artificial intelligence are formally equivalent but allow practically different kinds of applications. During the 1990s the field of artificial intelligence was already exploding with applications, with the mixing of methods and paradigms, and with the generation of new paradigms, so that it is becoming more and more difficult to give a systematic account of all the developments now in action (cf. the good reader edited by Patrick Henry Winston and Sarah Alexandra Shellard [1990], Artificial Intelligence at MIT: Expanding Frontiers, Vols. 1+2).


Already in the 1950s, when the idea of an electronic computer was still in its infancy, Booth, Weaver and Bar-Hillel launched the usage of computers for the automatic translation of texts (cf. the historical introduction by Herbert A. Bruderer (ed.) [1982], Automatische Sprachübersetzung). Although this first approach was not very successful, because the early protagonists did not have enough knowledge in the realm of linguistics and psycholinguistics, it led afterwards to the new discipline of computational linguistics. Broadening the scope from machine translation to the automatic indexing of documents as well as dialogue systems, the discipline moved away from a purely descriptive paradigm to an approach which included more and more aspects of the simulation of the understanding and production of language (cf. Istvan S. Batori, Winfried Lenders, Wolfgang Putschke (eds.) [1989], chap.1). The main lesson from this development was the insight that true success in the field of computational linguistics was not possible within the limits of computation alone; specific knowledge from the area of linguistics was inevitably needed.



The advent of practically usable computing machines also introduced a strong new spirit into the field of psychology, especially cognitive psychology. Captured for half a century within the methodological boundaries of the stimulus-response paradigm of behaviouristic psychology, psychology received a new momentum from the fields of mathematical communication theory (Shannon, Weaver, Broadbent, Hovland, Miller) as well as from linguistics. During the 1950s it became apparent, for example, that a string of words acting as stimuli has to be distinguished from the content which can be communicated by such words; although a person can usually remember the content, he or she cannot remember the individual words used within the communication. To deal with such facts one needs a new conceptual framework, and it was the computer-simulation model which gained the status of a new technique for theory construction within psychology (cf. W.K. Estes (ed.) [1975], Handbook of Learning and Cognitive Processes, Vol.1, chapter 1). Computer-based models allowed the modelling of causal mechanisms explaining why a learner undergoes several changes in his behaviour. Complex behaviour like verbal learning, concept formation and problem solving now became possible subjects of theory building in cognitive psychology. But this strong extension of the conceptual power of cognitive psychology by the use of algorithms raised a lot of methodological questions: How can the paradigm of an organism as a computer-based information-processing system be a valuable basis for an empirically sound theory about human cognition? How can this approach account for the fact that human cognition is embedded in a body which is part of a biological process generating biological structures of a certain shape, which is --at first glance-- completely different from such a computational structure? Isn't it fallacious to use psychological terms taken from everyday human experience as synonymous with theoretical terms of formal cognitive theories that are part of information-processing structures different from the biological structures of human bodies?


A first reaction to the classical AI approach as well as to the symbol-oriented cognitive psychology approach can be identified with the so-called new AI approach, or embodied cognitive science approach, which started at the end of the 1980s (cf. Rolf Pfeifer and Christian Scheier, Understanding Intelligence (1999)). Faced with the shortcomings of the information-based paradigm of cognitive psychology, this movement identified the root of intelligence in the interaction of the body with the real world. The ability to adapt in a changing environment so as to benefit best from the living conditions is understood as the manifestation of this kind of intelligence (cf. [Pfeifer/Scheier 1999:20f]). Here the autonomous agent with a body interacting with the environment has to be the main target of research, of experiments, of artefacts and of simulations.


Can this now be our final answer? Are all the questions raised by the multidisciplinary complexity of the phenomenon now convincingly framed by the embodied paradigm? Building some formal computational structures and labelling them with terms like intelligence, learning, concept formation, etc. is still methodologically questionable. The meaning of a term is constituted during the kind of discourse in which the term is used (everyday discourse, scientific discourse...). And if a term like intelligence is used within everyday discourse, within psychology, biology, ethology, neurobiology and artificial intelligence --to name only a few-- then it cannot be presupposed that the meaning of the term intelligence is always the same in all these different contexts; rather, as experience teaches us, it can 'mean' quite different things.


On the other hand, the conclusion that, confronted with such difficult methodological problems, we should withdraw from the disturbing multidisciplinarity is no longer an option. As Estes stated clearly: "The basic reason why we not only should but must be multidisciplinary is that the evolution of the human organism has presented us with a fantastically multilayered system with old and new portions of the brain, more primitive and more sophisticated learning and behavioural mechanisms all existing and functioning side by side - sometimes in almost total independence, sometimes in intricate interactions." (Estes 1975, Vol.1, p.21).


The vital question remains how we can manage this multidisciplinary situation in a systematic way that allows the connection of terms and models from quite different application domains. We are especially interested in the term intelligence as it functions in different disciplines and is also used within the computational paradigm.


3. Updating the Domain for Computational Intelligence

Backed by these considerations about the fuzziness of the term intelligence, distributed as it is over a multitude of disciplines, I want to update the definition of the domain for the discipline of computational intelligence. This re-definition is in itself a complex matter; in particular, it can only be done within a discourse taking place in the community of researchers and engineers. Thus the following reflections should be seen as a starting point for possible discussions in the future.


Seeing all these varying aspects and terminological networks, the question arises where to start.

As the example of the embodied intelligence movement shows, one can try to remedy this fuzziness by freely picking some aspect from the broad spectrum of possibilities: they declared the adaptation of bodies to be a key factor of intelligence. This is possible because everything is possible in science as long as people are convinced that there is some rationality behind it (which does not imply that there really is one). The question is whether one should accept this proposal or whether there is some interesting alternative to it.


Following the arguments of the embodied intelligence approach one can ask: why limit the domain to bodies? Individual bodies are produced during a process called ontogenesis, leading to a phenotype based on a genotype. But genes make sense only if they are part of a population. It is not the individual body that has to survive in the long run; it is the population as such! And talking about the adaptation of populations leads to biological life as such. The only kind of biological life known today is DNA-based life here on earth. Thus, if intelligence is to be linked with the adaptation of bodies in the world, why not generalize this idea by asking for the overall patterns of adaptation on all levels of complexity (something [Pfeifer/Scheier 1999] would perhaps accept, because they state that "the capacity to adapt is independent of levels" (p.21))?

A tentative hierarchy of intelligence as manifested in adaptation could be the following one (using here for brevity the abbreviation biological structure := a DNA-based structure which can reproduce itself and which can produce a phenotype based on a genotype):


  1. [Biological Adaptation] The ability of an environment to enable the emergence of a biological structure

  2. [Structural Adaptation] The ability of a biological structure to change the DNA-based information by mutation and to have a partially indeterministic growth process.

  3. [Strong Behavioral Adaptation] The ability of a biological structure to adapt to locally changing conditions of the given environment by changing some of its behavioral functions

  4. [Weak Behavioral Adaptation] The ability of a biological structure to adapt to locally changing conditions of the given environment by changing some of its internal states governing behavioral functions

  5. [Sign Based Adaptation] The ability within a biological structure to have at least a partial communication based on fixed signs between biological structures to enable coordination

  6. [Language Based Adaptation] The ability within a biological structure to have a communication based on free determinable symbols between structures to enable coordination


To explain these terms in full detail is beyond the scope of this section. We will give only a short outline to make visible that the time window we are living in is faced with a real planetary challenge.


Adaptation is seen here as bound to the overall phenomenon of biological life, understood as the structure of DNA-based life. The first major event leading to biological structures was surely the advent of the first DNA-based cells on this planet. Until now there is no complete theory available which explains the transition from the level of ingredient molecules to the finally working structures of DNA-based cells. The author proposes to label the environment which enabled the origin of the first DNA-based cells as a kind of bootstrapping form of adaptation or --as proposed in the list above-- as biological adaptation. After their first advent on the surface of the planet earth, biological structures have undergone incredible changes. It took about 3.5 billion years to enable the higher life forms we know today, especially the human species. The inherent properties of biological structures which made this success story possible are identified as the ability to change the genetic code during heredity by mutation, as well as to change the phenotype based on the genotype during a partially indeterministic growth process. During its existence a biological system can learn in at least a twofold way: (i) in a strong form a biological system can change the functions which underlie its behaviour as such; (ii) in a weak form it cannot change the functions as such, but only those states which are mapped by the behavioural functions. Within behavioural learning one can further distinguish at least two important subcases: (i) the availability of behaviour which can be identified as (partial or complete) sign-based communication serving the coordination of different systems; (ii) the availability of behaviour which can be identified as (partial or complete) language-based communication serving the coordination of different systems. The period of those forms of life which include language-based communication is extremely short compared to the whole history. But, as we meanwhile know, the conditions necessary for biological structures on earth will be destroyed within the comparably short time of no more than 0.1 – 1 million years from now, because by then the moon will have departed so far from the earth that its stabilizing effect will be lost (if some other catastrophe does not damage the conditions for biological structures much earlier) (cf. Peter D. Ward, Donald Brownlee [2000], Rare Earth: Why Complex Life is Uncommon in the Universe). Thus there objectively exists a planetary challenge for a planetary adaptation: either to become able to manage the movements of the moon, or to manage the transfer of biological structures to some other acceptable place, or something else we do not know today.




Figure 2: Planetary adaptation - numbers are approximations


If one accepts this planetary view of adaptation, then every other kind of adaptation becomes bound to this overall task. The fact, for instance, that the human form of biological structures has meanwhile learned to manipulate DNA-structures and surrounding biological structures directly and "by will" has freed the slow process of evolution, so that this process can --at least in principle-- be accelerated by some factor. Seen from the perspective of the planetary challenge, this can perhaps be decisive in enabling the survival of biological structures as such in the near future.


Besides this continuous departure of the moon there is another factor introducing an evolutionary pressure: cognitive complexity compared to cognitive capacity.


If one uses the term computational complexity as it is used in theoretical computer science (the number of processing steps needed to reach a solution) as a kind of measure to classify the complexity of the daily environments [complexity_env] of human systems, in combination with the term cognitive capacity understood as the amount of complexity_env which can be processed per time unit, then one can imagine that the biological steadiness of the individual cognitive capacity, compared to the exponential growth of the complexity of daily environments, deteriorates the position of the human population. Although today there exist no empirical investigations directly related to these concepts, there are enough data on correlated factors to induce the conclusion that the complexity of daily environments in industrialized countries has meanwhile passed the point where the biologically given cognitive capacity can cope with this cognitive complexity (cf. figure 3). This results inevitably in something which I will call here a complexity breakdown. The complexity breakdown is a worst-case scenario for every social system based on language communication. If a society based on language communication reaches such a state, it will inevitably lose all of its cognitively relevant productions beyond the fixed cognitive capacities. And indeed, as daily experience shows us, the negative feedback effects can be much more complex; empirical research on this topic is pending.
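
The quantitative core of this argument can be made tangible with a minimal sketch in Python. All numbers below (the fixed capacity, the base complexity, the growth rate) are purely illustrative assumptions, not empirical data; the sketch only shows that a constant capacity confronted with exponentially growing complexity must be overtaken after finitely many steps:

    # Minimal sketch of the complexity-breakdown argument. The concrete
    # numbers are illustrative assumptions, not empirical data.

    CAPACITY = 100.0      # biologically fixed cognitive capacity (assumed constant)
    GROWTH_RATE = 0.03    # assumed yearly growth rate of environmental complexity
    BASE = 10.0           # assumed initial complexity of the daily environment

    def environment_complexity(year: int) -> float:
        """complexity_env: assumed to grow exponentially over the years."""
        return BASE * (1.0 + GROWTH_RATE) ** year

    def breakdown_year() -> int:
        """First year in which complexity_env exceeds the fixed capacity."""
        year = 0
        while environment_complexity(year) <= CAPACITY:
            year += 1
        return year

    print("complexity breakdown after year", breakdown_year())   # -> 78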

Therefore theoretical considerations and daily experience tell us that the massive improvement of the cognitive capacities of the human population is an urgent task from an evolutionary point of view. We need a new kind of adaptation to this kind of challenge. One possibility is to trust the implicit creative mind of ordinary evolution without using the human mind as part of this evolution. Then we would probably have to wait several million years until something can happen; but there is no guarantee that something useful will indeed happen! And because we have a hard deadline imposed on us by the constant departure of the moon, we have only some 100,000 years from now to find a solution. Thus, we can --as an alternative strategy-- accept the responsibility which has fallen to the human mind as a product of this evolution and use this mind as a kind of accelerating factor, trying out new ways of developing the biological structures of humans by genetic engineering as well as by improving the technologies assisting humans in language communication and understanding. The minimal goal of this kind of adaptation should be to keep an equilibrium between cognitive complexity and the cognitive capacities of a human population; but a future development of assistive cognitive technology for the reduction of cognitive complexity induces in itself an increase in complexity. Thus we are somehow in a cage of cognitive complexity, which will harden the more we work to break out. This fact is related to Gödel's famous proof of 1931: if a system S investigates a system S' where S is part of S', then it is in principle not possible to be both correct and complete. If nature (as represented by human intelligence) investigates nature (with human intelligence as part of it), there can be no complete solution. This holds even in the strongly reduced case where one brain investigates another brain.




Figure 3: Complexity Problem


Our preliminary conclusion from these considerations is, then, to investigate the structure and mechanisms of language-based communication and intelligence as part of a planetary evolutionary process, in order to become --hopefully-- able to develop new supporting technologies assisting human populations in approaching, as far as possible, at least an equilibrium between cognitive complexity and capacity. And to apply evolutionary strategies does not (!) imply copying the evolutionary strategies from before the advent of a brain capable of language communication, but rather including all evolutionary strategies after the advent of such an incredible device, i.e. including high-level language communication and language-based intelligence as part of evolution.


If we accept that the human brain is only part of a bigger system, the body, and the body again part of a bigger system, and so on, then it is clear that the goal of improving the human cognitive capacity, combined with some additional external artificial cognitive devices, will not be a simple project. Somehow it has the flavour of a blind date with the yet unknown.



4. Asking for the Right Methodological Framework

The survey of some of the aspects involved in the question of computation and intelligence has shown a multitude of features, dynamics, and different levels of abstraction and complexity. This poses the question whether there exists some canonical framework which enables a coherent representation of all these topics. The author sees such a canonical framework in the paradigm of modern science. But before we deal with this paradigm directly, it has to be discussed how the paradigm of modern software engineering is related to this framework of science. This is necessary because the paradigm of software engineering seems to play the role of a scientific framework within the computational sciences without taking explicit notice of the historical facts and the methodological implications of the paradigm of modern science as discussed within theory of science.


4.1 The Case of Software Engineering


In the field of software engineering, which is central to the question of how to transform real-world problems into functioning software following explicit rules, one can observe over the last 50-60 years a clear tendency to organize such an engineering process through the introduction of a hierarchy of levels of abstraction, where the topmost levels are represented by models (cf. figure 4).


The encoding of ideas started with the language of binary numbers. In the course of time software engineers have introduced more and more kinds of abstraction to minimize the "cognitive distance" between the common concepts of human thinking on the one hand and the encoding language on the other. After the invention of assembler languages there followed a large set of so-called higher-level languages following different paradigms (functional, rule-based, logical, neuronal, algebraic, ...), leading recently to the advent of diagram languages, with UML as one of the more recent and popular exemplars. This process was accompanied by a continuous elaboration of the typical steps within the process of software engineering as such. The basic distinction between requirements engineering, design and implementation led very recently to a kind of new standardization within the Model Driven Architecture (MDA) of the Object Management Group (OMG), which explicitly makes use of high-level models as primary concepts to model reality. The Computation Independent Model (CIM) is that part of the process which collects all available data about a problem and builds a description of the important actors, relations and objects as a natural-language description enhanced with different kinds of helpful explanations. The computation independent model is then translated into a mostly abstract Platform Independent Model (PIM), which later on has to be translated/transformed into a Platform Specific Model (PSM). Ideally one can automatically generate a PSM from a PIM. A platform-specific model can be tied to a language like C/C++, to a certain operating system like LINUX, or to a middleware like CORBA. This short description reveals several layers of abstraction, each of which is quite complex in its own right today.




Figure 4: The Software Engineering Environment



The question to be discussed here is whether a software engineering paradigm like the one manifested in the MDA approach can serve as a canonical framework for the transformation of biological systems into running software.


Within the social sciences a discussion began a few years ago about the possibility of using the UML language as a medium to model the phenomena of the social sciences (cf. [Thomas Kron 2002]), and here especially also the theory of Niklas Luhmann (cf. [Luhmann 1994, 1996, 1997, 2000]). Besides many general aspects there is a sufficiently detailed argumentation by Marco Schmitt in this book (cf. [Marco Schmitt 2002:27-53]) showing that the UML language has deficiencies for this task. The main point of his argumentation is that modeling with UML presupposes that the structures will stay fixed after their modeling. But in the social sciences nearly all identified structures are dynamic and subject to manifold changes.


With regard to biological systems as we have characterized them before, we can make this more precise:


  1. Biological systems are typically input-output systems relating environmental states to inner states and vice versa. In UML Vers. 1.5 it is not possible to model input and output explicitly.

  2. Biological systems can typically be characterized along a time axis by a starting point (being born) and an ending point (dying). Between these extreme points we have the phenomenon of ontogenesis, which involves growth processes. In UML Vers. 1.5 you cannot model classes which are "growing".

  3. Biological systems can adapt to environmental changes by changing their behavioral functions. In UML Vers. 1.5 you cannot change the number and kind of operations during runtime.

  4. Biological systems can reproduce their genotype by copying it, thereby changing it by mutation, and these copies can then build up new phenotypes. In UML Vers. 1.5 a class can neither generate its own copies nor can these copies be mutated.


These are the main points. Even if we accept that the intention of software engineering never was and is not to describe reality adequately, these reported deficiencies have to be taken seriously. Software engineering --as its name tells us-- is engineering; that means its primary goal is to build a restricted model based on a computation-independent domain model which serves as a "snapshot" of some part of reality and which becomes fixed in contracts between a client and the employer of the software engineers. The produced system should be reliable, perhaps safe, and correct as well as complete, but correctness and completeness are here restricted to the contracted domain model. When reality changes, or at least the knowledge about reality expands, but the contract for the domain model is still valid, then the system will not be changed; it is domain-correct and domain-complete. Truth is not a question here.


This short review of the software engineering approach brings to the forefront that the way a group of people looks at reality is decisive for the kind of models which can grow out of this view, and for the relationship of these models to the overall knowable features of reality.


The software engineering approach as depicted in the MDA approach translates reality into classes which are fixed structures without clear input-output specifications. This is induced by the language which is recommended to the software engineers, the UML language.


The question is whether there exists an alternative to this paradigm. The author sees such an alternative in the historically older and methodologically more general approach labeled theory of science.


4.2 The Theory of Science Point of View


Science has a long history, and we will look at the phenomenon of science through that discipline which officially deals with this subject, namely theory of science.


Theory of science is different from philosophy of science. Philosophy of science is part of philosophy and therefore inherits all the habits of the philosophers. A minimalistic understanding of philosophy sees philosophy as only raising questions, not elaborating systematic answers. An ingenious prototype of such a kind of philosophy (besides others) is surely Ludwig Wittgenstein (1889 – 1951) in his reflections following his famous Tractatus Logico-Philosophicus (1922). But there are other great philosophers --and probably the majority of them-- who understand philosophy not as restricted to raising questions but as also --and probably mainly-- trying to find the general conditions of the human mind and to elaborate at least a systematic framework within which all kinds of human knowledge can be reflected. Examples of this kind of understanding --besides many others-- are Aristotle (384 – 322 BC), Kant (1724 – 1804), Hegel (1770 – 1831), and Husserl (1859 – 1938). They all raised "deep" questions, but they also tried to systematize the results in different kinds of frameworks.


Theory of science is different. Although there are some prominent forerunners of a theory of science, e.g. F. Bacon (1561 – 1626), Whewell (1794 – 1866), Duhem (1861 – 1916), and Mach (1838 – 1916), it is common opinion that modern theory of science starts with the so-called Wiener Kreis (Vienna Circle) of the 1920s (most prominently: Carnap (1891 – 1970), Neurath (1882 – 1945), Schlick (1882 – 1936)). These ideas have since been discussed, enriched and modified by several scholars (see: Hempel, Bridgman, Dingler, Popper, Hanson, Kuhn, van Fraassen, Giere, P. Suppes, Salmon, Kitcher, Lakatos, Glymour, de Finetti, Howson, and Urbach). What makes the great distinction from classical philosophy is the focus on explicating the nature of the knowledge involved in the explanation of empirical phenomena using experiments, measurement, formal languages, formal logic and formal mathematics, as used in communities of researchers. Theory of science also investigates the meta-communications which enable the participants to establish ("bootstrapping") their scientific communications (for an overview of these developments see [MITTELSTRASS Vol.4], Frederick Suppe (1979), The Structure of Scientific Theories, as well as Balzer (1982) and Balzer et al. (1987)).


From theory of science I take the following general view, which I will call here the Scientific Framework (cf. Fig. 5).




Figure 5: The Scientific Framework



The root of every scientific explanation is a community of scientists (here including the engineers, for completeness). The community of scientists has to clarify what it will understand as its domain of investigation; there is no domain of investigation independent of a community! To come to a common understanding of the domain of investigation the community has to communicate. This communication inevitably includes some portion of meta-communication, i.e. those kinds of communication which are necessary to establish ordinary communication. Only if the communication is successful does there exist a domain of investigation for the community. For the scientific investigation of some domain of interest one needs some method of measurement. Together with such a method of measurement the community has to accept some language for the representation of measured items, a language for data [LDATA]. Using measurement and a data representation language the community can produce data. Because data are by nature individual/concrete/local, the community needs more general structures for general statements. To establish such general structures the community has to agree on some language for theoretical statements [LTHEORY], which can include the data language as a subset. Aided by a theory language the community can establish general structures or models to express general laws. In the realm of theoretical models one can distinguish between the potential models (P-Models) and the axiomatized models (A-Models): P-Models contain only sets and relations, while A-Models differ from the P-Models only by some axioms which are included. To operate with general structures and single statements the community also has to agree on some kind of logic which allows deductions leading to proofs. Equipped with theory language, data language, general structures and a logic, the community can prove statements and theorems, which can be compared with the measured data. If the set of provable statements is not in direct contradiction to the sets of measured data (including error measures), the general model has some validity; if the contradictions become "severe" (when is the point reached at which the general structure is no longer valid?), then the general structure has to be modified.
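
The distinction between P-Models and A-Models can be written down schematically in the same structure notation which will be used below for biological systems; the symbols D1,...,Dk, R1,...,Rn and A1,...,Am are mere placeholders:

    P-Models:  M_p = { x | x = <<D1,...,Dk>,<R1,...,Rn>> }
               (only sets D_i and relations R_j, no axioms)

    A-Models:  M_a = { x in M_p | x satisfies the axioms A1,...,Am }

    Example (a sketch): x = <<I,IS,O>,<f>> below is a potential model of an
    IO-system; adding the axiom "f is a total function f: I x IS x O --> O"
    singles out the axiomatized models.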


Based on available general structures, the engineers can try to establish new structures, using additional data, in order to transform these general structures into concrete instances within the real world. This happens within a process of engineering which today is highly regulated. The engineering and the usage of those concrete instances produce a lot of knowledge which to a great extent will usually not be explicitly included in the founding models. This is a kind of experience which is important and lives with certain human persons; it is bound to them. Thus the official publications will represent only a certain fraction of all the knowledge which is influential in the whole process of science and engineering.


After this short outline of the theory of science point of view, we have to pose the question whether we can set up the case of biological systems within this framework more adequately than within the present software engineering point of view.


The discussion of the UML language showed that the limits of the language suppressed some important features of the domain of investigation. Therefore we have to look at the main properties of the domain under investigation and see whether and how these can be expressed in some theory language LX.


The simplest case is the talk about input-output systems (IO-systems). In that case we could assume a structure like <<I,IS,O>,<f>>, where f is the system function f: I x IS x O --> O mapping the input states I, the internal states IS and the output states O into output states. This would also include systems with weak learning (WL): organizing some memory-like structure within the internal states IS would enable such systems to change their responses according to past experience.
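
As a minimal illustration, the following Python sketch realizes such a structure; the concrete state spaces and the arithmetic of f are illustrative assumptions. The memory list inside IS realizes weak learning: f itself never changes, but past inputs change future outputs:

    from dataclasses import dataclass, field

    @dataclass
    class IOSystem:
        """Sketch of <<I,IS,O>,<f>> with weak learning (WL)."""
        memory: list = field(default_factory=list)   # memory-like part of IS
        last_output: int = 0                         # part of O

        def f(self, i: int) -> int:
            """System function f: I x IS x O --> O (assumed rule)."""
            o = (i + sum(self.memory) + self.last_output) % 10
            self.memory.append(i)    # weak learning: only IS changes, f does not
            self.last_output = o
            return o

    s = IOSystem()
    print([s.f(i) for i in (1, 2, 1)])   # -> [1, 4, 8]: the same input 1 now
                                         #    yields a different output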


To realize strong learning (SL) one has to make additional assumptions. First, one has to assume that there is a set of possible system functions F as part of the structure, together with a special learning function sl which can select different actual system functions f: <<I,IS,O,F>,<sl, f>>, where f is the actual system function f: I x IS x O --> O and sl is the strong learning function mapping potential functions into the real system function: sl: F x {f} x I x IS x O --> F. This implies that part of the internal states IS are states V which can be used for a kind of evaluation of whether some other system function f' could be better than the actual system function f. One has to consider that the strong learning function sl operates on another language level than f!
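
A sketch of strong learning under the same illustrative assumptions: a set F of candidate system functions, evaluation states V inside IS, and a meta-function sl which, operating on a different level than f, swaps the actual system function itself:

    # Sketch of <<I,IS,O,F>,<sl,f>>. The candidate functions in F, the
    # evaluation rule and the threshold are illustrative assumptions.

    F = [lambda i, m: (i + m) % 10,        # set of possible system functions
         lambda i, m: (i * 2 + m) % 10,
         lambda i, m: (i + 2 * m) % 10]

    class SLSystem:
        def __init__(self):
            self.f = F[0]        # actual system function, drawn from F
            self.memory = 0      # part of IS
            self.v = 0.0         # evaluation states V inside IS

        def step(self, i: int) -> int:
            o = self.f(i, self.memory)
            self.memory = o
            return o

        def sl(self, reward: float) -> None:
            """sl: F x {f} x I x IS x O --> F (sketch). Operates on the
            meta-level: it replaces f itself, not just internal states."""
            self.v += reward
            if self.v < -1.0:                        # assumed threshold
                self.f = F[(F.index(self.f) + 1) % len(F)]
                self.v = 0.0

    agent = SLSystem()
    agent.step(3); agent.sl(-2.0)   # bad evaluation: sl selects a new f from F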


The process of ontogenesis, which includes growing, extends the description still further. Here we need systems as objects and a growth function operating upon these system objects. One has to assume a minimal structure like <<PROP, PSYS, SYS, GEO>,<pos, fenv, birth, death, stim, resp, growth>>, which could be interpreted as a structure having a set of environmental properties PROP, a set of potential systems PSYS, a set of actual systems SYS, and some kind of geometry GEO. The position function pos maps properties and systems into the geometry. The environment function fenv maps the environment onto itself. The birth function birth creates new actual systems out of potential systems; the death function death deletes actual systems. The stimulus function stim maps environmental properties onto systems, and the response function resp maps system properties onto the environment. Finally, the growth function growth maps potential and actual systems into actual systems, thereby eventually also changing the geometry.


The process of heredity, including mutation, is a minimal extension of the foregoing structure. Such a minimal structure could look like <<PROP, PSYS, SYS, GEO, GENOM>,<pos, fenv, birth, death, stim, resp, growth, heredity>>, where GENOM is a set of possible genomes which can be attached to a system, and the function heredity maps from the set GENOM into the set GENOM, thereby inducing some kinds of mutations. The function growth will translate a genome into a new system.
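
A joint sketch of the growth and heredity functions under the same illustrative assumptions: a genome reduced to a list of integers, heredity as copying with point mutations, growth as the translation of a genome into a concrete system function (the phenotype):

    import random

    def heredity(genom: list[int], mutation_rate: float = 0.1) -> list[int]:
        """heredity: GENOM --> GENOM; copying with random point mutations
        (encoding and mutation rate are assumptions of the sketch)."""
        return [g + random.choice([-1, 1]) if random.random() < mutation_rate
                else g
                for g in genom]

    def growth(genom: list[int]):
        """growth: GENOM --> SYS (sketch); translates a genome into an
        actual system, here simply a parameterized system function."""
        def f(i: int) -> int:
            return (i + sum(genom)) % 10
        return f

    parent_genom = [1, 2, 3]
    child_genom = heredity(parent_genom)   # possibly mutated copy
    child_f = growth(child_genom)          # new phenotype built from the genome
    print(child_genom, child_f(4))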


Thus we can see that in principle it seems possible to find some language to represent all the properties discussed so far as necessary for biological systems, although the technical details of doing this job with regard to the different language levels could be a bit laborious.


But this is not the complete answer to the triggering question. The statement that, from the general point of view of theory of science, it seems possible to represent all phenomena in question in some kind of formal language has to be narrowed down to the question whether this representation also holds if one has to transform the represented phenomena into computable models, because this is what one needs within the computational sciences.





4.3 Computable Models of Biological Systems?

One of the most popular criteria for the property of being computable is the possibility of mapping a problem onto a universal Turing machine (UTM). Thus the above question, whether the represented properties of biological systems can be mapped into a computable representation, can be answered by analyzing whether these properties can be mapped onto the concept of a UTM. Below it will be shown how one can do such a mapping (cf. figure 6).





Figure 6: A universal Turing machine modeling a biological system



The basic idea of this mapping is to distinguish between descriptions and executions. Descriptions are structures representing some domain, written in a certain language. Executions are parts of the machine table of a UTM which can translate a certain description into a sequence of Turing machine operations. Thus executions can be understood as a kind of operational semantics for the descriptions.


In the diagram one can see a typical setting for the modeling of biological systems as discussed before. On the tape of the UTM one finds all the necessary descriptions, and in the machine table one finds all the executions.

One part of the tape is dedicated to the input-output data of a system, which represent the environment ENV for the system. Then, in the case of input-output systems (IO-systems) with the assumed structure <<I,IS,O>,<f>>, one finds the description of the system as a Turing machine TM, including the system function f. Possible additional internal states IS are written on the tape close to the machine description. The execution block labeled execTM,IS will interpret these descriptions and will simulate the behavior of the system "as it is".
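
The distinction between descriptions and executions can be mimicked in a few lines of Python: the "tape" carries a passive description of a machine TM (its transition table), while the execution block exec_tm interprets that description step by step. The concrete machine, a unary incrementer, is an illustrative assumption:

    # Description on the tape: (state, symbol) -> (new state, new symbol, move).
    # This is passive data; only the execution block below gives it meaning.
    TM_DESCRIPTION = {
        ("q0", "1"): ("q0", "1", +1),    # walk right over the block of 1s
        ("q0", "_"): ("halt", "1", 0),   # append one 1, then halt
    }

    def exec_tm(description, tape, pos=0, state="q0"):
        """Execution block: interprets a TM description 'as it is'
        (a kind of operational semantics for the description)."""
        cells = dict(enumerate(tape))
        while state != "halt":
            symbol = cells.get(pos, "_")
            state, new_symbol, move = description[(state, symbol)]
            cells[pos] = new_symbol
            pos += move
        return "".join(cells[k] for k in sorted(cells))

    print(exec_tm(TM_DESCRIPTION, "111"))   # -> "1111"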


In the case of IO-systems with strong learning <<I,IS,O,F>,<sl, f>> there is the additional meta-function sl: F x {f} x I x IS x O --> F. One has to extend the description on the tape with a description of the function sl, combined with a description of the set of possible functions F. The execution of the additional strong learning function sl needs a special execution block called execSL,F, which can read the descriptions of sl and F and which can change the descriptions of TM and f.


In a next step one can extend the description by the growth function growth, which is combined with the genetic information in the genome GENOM: growth: TM x IS x {f} x {sl} x F x GEO --> TM x IS x F x GEO. If one assumes that the strong learning function as such can also change during the growth process --which is somehow likely-- then one should assume an additional set of possible strong learning functions SL. Analogously to the case of the strong learning function sl, one has to assume a special execution block execgrowth,GENOM which can read the growth description and can transform the descriptions of TM, IS, f, SL, and F accordingly.


Finally one can add the description of the heredity function heredity: GENOM x growth --> GENOM x growth. The related execution block execheredity can read this description and can make a copy of the GENOM description together with the growth description. This copy can include mutations. When the copy has been finished, the growth function can start to build up the descriptions of TM, f, SL, and F out of the GENOM.


As one can see from these considerations, one needs at least nine different levels of language, interacting in a complex manner.


If one compares these findings with the difficulties reported before for the UML language, one can see that the difficulties of the UML language are caused by (i) a very limited language as such, (ii) the restriction to descriptions only, and (iii) the restriction to only one level of language.


Despite the difficulties with the UML language, one has to conclude from the considerations above that the modeling of biological systems as computable systems can be done within a formal framework which is formally equivalent to a UTM possessing all the necessary levels of descriptions and executions.



5. Communication as Part of Adaptation


In the wonderful book of [Pfeifer/Scheier 1999] you will not find the topic of communication as such! Within 660 pages of interesting material about embodied intelligence there is no chapter, not even a paragraph, about communication as such (although some aspects involved in communication are discussed). This is striking. It demonstrates that communication as such is not yet as common a topic within adaptation-based intelligence research as it --perhaps-- should be. Historically this is understandable, because the embodied intelligence paradigm stood somehow in opposition to the kind of cognitive psychology, computational linguistics and artificial intelligence which dealt with symbolic communication independently of the frame of overall adaptation. But systematically this exclusion of communication from the subject of embodied intelligence seems to be an overreaction, not only attacking some questionable research paradigms but also destroying the phenomenon as such, the empirical phenomenon of intelligence which manifests itself in communicative processes.


Despite this silence about communication in the book of [Pfeifer/Scheier 1999], one has to accept that, besides the many marvellous structures which research during the last 100 years has revealed as being part of the game of life, there is one phenomenon which seems to be at the heart of intelligence: communication! As Marc D. Hauser describes in his book The Evolution of Communication, it seems that communication is a key factor for all kinds of biological structures, and it has undergone an evolution like all other kinds of biological structures. And to the extent that communication is identified as a key element of adaptation, and therefore of intelligence, it is highly interesting to understand the levels of communication and the different potentialities of communication systems. There are good reasons to state that the change from sign-bound communication to language-based communication demarcates a far-reaching breakthrough in the process of evolution. Language-based communication is bound to highly complex neurological and physiological structures accompanied by complex behavioural and social skills. Thus the understanding of language-based communication is connected to many other disciplines. But language-based communication manifests the highest form of intelligence known today. The advent of language processing machines in connection with language-based biological life prepares a new, exciting scenario for the manifestation of intelligence through processes of adaptation.


This case of an adaptation-based intelligence including language-based communication has clearly to be distinguished from the previously mentioned and criticized position of so-called symbolic AI, or of a cognitive psychology following the paradigm of an information-processing organism. In the adaptation-based view, language communication is part of a population of biological systems which have evolved to be able to survive in a complex dynamic environment. And what we have learned from the historical process of evolution is that the mastering of the complex dynamic environment depends on social, political, economical and technological structures which are all massively based on knowledge and communication. To keep the planet earth balanced will need much more advanced structures than we have available today. Thus we are not in need of more symbolic AI --although this can be useful for special tasks-- but of a substantially extended embodied intelligence incorporating the planetary dimension as well as language communication.


What has to be said within this framework of adaptation-based biological systems? Let us consider a basic input-output system with rudimentary communication (BIOC) (cf. figure 6).


To speak about a BIOC we assume at least some input states I resulting from a stimulation of the system by environmental states (stim: S --> I), as well as some output states O which can be manifested to the environment as responses R (resp: O --> R). Assuming that the input states I and the output states O are subsets of the internal states IS (I ∪ O ⊆ IS), the system function f maps the internal states IS into the output states O (f: IS --> O). If we further assume that I ∪ O is a proper subset of IS (I ∪ O ⊂ IS), then there are some internal states which are neither input nor output states. In this case there exist states in the BIOC which are hidden from the outside. There are several biologically relevant situations where it is advantageous for members of a population to be able to coordinate their actual inner states with those of the other members (mating, predators, feeding, ...). Thus an adaptation in the direction that some of the input states could represent certain relevant features to be mapped into certain output states functioning as signs could improve fitness. Therefore a specialization of S into stimuli which serve communication (SS) and those which do not (~SS) (S = SS ∪ ~SS) would be evolutionarily "good". And this specialization has to happen simultaneously throughout the whole system, including the input states (I = SI ∪ ~SI), the output states (O = SO ∪ ~SO) as well as the responses (R = SR ∪ ~SR). The system function f also has to be specialized in order to be able to establish specialized meaning relations of the kind fS: IS --> SO, naturally including fS: SI --> SO.
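To make this structure concrete, the following is a minimal runnable sketch (in Python) of a BIOC. The concrete states (e.g. 'mating-display') and the tagging scheme are illustrative assumptions of mine; only the names stim, f and resp and their signatures follow the text.

    # Minimal sketch of a BIOC: the chain stim: S --> I, f: IS --> O, resp: O --> R,
    # with a specialization into sign-relevant (SS/SI/SO/SR) and ordinary states.
    # All concrete values are invented for illustration.

    SS = {"mating-display"}            # stimuli which serve communication

    def stim(s):                       # stim: S --> I
        return ("SI", s) if s in SS else ("I", s)

    def f(i):                          # f: IS --> O, with fS: SI --> SO as special case
        kind, content = i
        return ("SO", content) if kind == "SI" else ("O", content)

    def resp(o):                       # resp: O --> R
        kind, content = o
        return ("SR" if kind == "SO" else "R", content)

    def bioc(s):                       # the whole chain S --> R
        return resp(f(stim(s)))

    print(bioc("mating-display"))      # ('SR', 'mating-display'): a sign-like response
    print(bioc("food"))                # ('R', 'food'): ordinary behaviour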




Figure 6: Basic I/O-system with communication structure



Clearly, communication only makes sense when there is a coupling of systems such that at least the output RA of one system A can serve as a processable input SB for another system B (cf. figure 7).
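As a toy illustration of this coupling (with entirely invented states and behaviours), the following sketch shows the minimal requirement: the output of A must lie in the set of inputs which B can process as a signal.

    # Standalone sketch of figure 7: the response R_A of system A must be a
    # processable stimulus S_B for system B. All concrete values are assumptions.

    def system_a(inner_state):
        # A manifests a sign-relevant inner state as an output
        return "display" if inner_state == "ready-to-mate" else None

    def system_b(stimulus):
        # B can process exactly this class of outputs as specific signals
        return "approach" if stimulus == "display" else "ignore"

    r_a = system_a("ready-to-mate")    # output R_A of system A ...
    print(system_b(r_a))               # ... processed as input by B: 'approach'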




Figure 7: Partially communicating systems; coordination of output and input


Udo Figge points out in his article Semiotic principles and systems: Biological foundations of semiotics [Winfried Nöth 1994:25-36] that half-communicating systems are the forerunners of fully communicating systems. As in cases of mating coordination, there exist systems where the inner state of mating motivation can be manifested in certain intraspecific outputs, thus making the inner state perceptible on the surface. This output only makes sense if there are other systems which can receive these outputs as specific signals and process them accordingly. This is a minimal form of semiotic coupling. Originally, such a semiotic coupling can be processed with general mechanisms of stimulation and response. Later on, specialization can take over and produce, step by step, more and more complex structures for the production of sign-specific output as well as sign-specific inner processing.


The whole --and very exciting-- story of the evolutionary mechanisms involved in the shaping of communicative structures for the coupling of inner states between the members of a population, as well as between predators and prey, cannot be told here (see e.g. Marc Hauser's excellent book The Evolution of Communication). Here it should only be stressed that communication is a substantial part of biological adaptation and thereby an essential part of intelligence. Therefore we will analyze the point of communication a bit more under the heading of Semiotics and Computational Semiotics.


6. Computational Semiotics as part of Computational Intelligence


One discipline which tries to combine the embodied computational intelligence approach with communication is computational semiotics, bringing together semiotics and computation.


Semiotics refers primarily to some concept of a symbol or sign combined with methods on how to work with this concept. But up until now (cf., e.g., the excellent handbooks of [Nöth 1985; 2000] as well as [Boussiac 1998]), we have a great wealth of ideas and concepts related to semiotics but no clear-cut and commonly accepted theory. Thus speaking about semiotics implies some ambiguity, and the position an author ultimately assumes will inevitably be determined by his own preferences.


This author strongly prefers to employ modern prerequisites for scientific theory as a conceptual framework for the discussion. And to support this 'bias', he prefers using the writings of Charles Morris (1901-1979) as a main point of reference for discussion of the connection between embodied computational intelligence and semiotics. No exclusion of other 'great fathers' of modern semiotics is intended (Peirce 1839-1914, but also de Saussure 1857-1913, von Uexküll 1864-1944, Cassirer 1874-1945, Bühler 1879-1963, and Hjelmslev 1899-1965, to mention only the most prominent). But Morris seems to be the best starting point for the discussion because his proximity to the modern concept of science makes him a sort of bridge between modern science and the field of semiotics.

At the time of the Foundations in 1938, his first major work after his dissertation of 1925, Morris was already strongly linked to the new movement of a 'science of the sciences' (cf. the theory of science topic mentioned before), which was the focus of several groups connected to the Vienna Circle, to the Society of the History of the Sciences, to several journals and conferences and congresses on the theme, and especially to the project of an Encyclopedia of the Unified Sciences (cf., e.g., Morris, Charles W. [1936]).

In the Foundations he states clearly that semiotics should be a science, distinguishable as pure and descriptive semiotics (cf. Morris 1977: 17, 23), and that semiotics could be presented as a deductive system (cf. Morris 1977: 23). The same statements appear in his other major book about semiotics (Morris 1946), in which he specifically declares that the main purpose of the book is to establish semiotics as a scientific theory (Morris 1946:28). He makes many other statements in the same vein.

To reconstruct the contribution of Charles Morris within the theory concept is not as straightforward as one might think. Despite his very active involvement in the new science-of-the-sciences movement, and despite his repeated claims to handle semiotics scientifically, Morris did not provide any formal account of his semiotic theory. He never left the level of ordinary English as the language of representation. Moreover, he published several versions of a theory of signs which overlap extensively but which are not, in fact, entirely consistent with each other.

Thus to speak about 'the' Morris theory would require an exhaustive process of reconstruction, the outcome of which might be a theory that would claim to represent the 'essentials' of Morris's position. Such a reconstruction is beyond our scope here. Instead, I will rely on my reconstruction of Morris's Foundations of 1938 (cf. Döben-Henisch 1998) and on his basic methodological considerations in the first chapter of his 'Signs, Language, and Behavior' of 1946. (cf. figure 8).



Figure 8: Morris theory - sign user as an interpreter

As the group of researchers, we assume Morris and the people he is communicating with. As the domain of investigation, Morris names all those processes in which 'signs' are involved. And in his pre-scientific view of what must be understood as a sign, he introduces several basic terms simultaneously. The primary objects are distinguishable organisms which can act as interpreters [I]. An organism can act as an interpreter if it has internal states called dispositions [IS] which can be changed in reaction to certain stimuli. A stimulus [S] is any kind of physical energy which can influence the inner states of an organism. A preparatory-stimulus [PS] influences a response to some other stimulus. The source of a stimulus is the stimulus-object [SO]. The response [R] of an organism is any kind of observable muscular or glandular action. Responses can form a response-sequence [<r_1, ..., r_n>], whereby every single intervening response r_i is triggered by its specific supporting stimulus. The stimulus-object of the first response in a chain of responses is the start object, and the stimulus-object of the last response in a chain of responses is the goal object. All response-sequences with similar start objects and similar goal objects constitute a behavior-family [SR-FAM]. Based on these preliminary terms, he then defines the characteristics of a sign [SGN] as follows: 'If anything, A, is a preparatory-stimulus which in the absence of stimulus-objects initiating response-sequences of a certain behavior-family causes a disposition in some organism to respond under certain conditions by response-sequences of this behavior-family, then A is a sign.' (Morris 1946:10,17). Morris stresses that this characterization describes only the necessary conditions for the classification of something as a sign (Morris 1946:12).
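To fix ideas, here is a small runnable sketch of this sign test under strong simplifications. The 'buzzer'/'feeding' example and the dictionary layout are assumptions of mine; only the two-part test (absence of the initiating stimulus-objects, plus a caused disposition to respond with sequences of the behavior-family) follows the characterization just quoted.

    # Sketch of Morris's necessary condition for a sign. All concrete content
    # (buzzer, feeding) is invented for illustration.

    feeding_family = {"start_objects": {"food"}}     # a behavior-family [SR-FAM]

    dispositions_caused = {"buzzer": {"feeding"}}    # dispositions a preparatory-
                                                     # stimulus causes in an organism

    def is_sign(a, family_name, family, present_objects):
        objects_absent = not (family["start_objects"] & present_objects)
        disposes = family_name in dispositions_caused.get(a, set())
        return objects_absent and disposes

    # The buzzer, in the absence of food, disposes the organism to feeding behaviour:
    print(is_sign("buzzer", "feeding", feeding_family, present_objects=set()))     # True
    print(is_sign("buzzer", "feeding", feeding_family, present_objects={"food"}))  # False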

This entire group of terms constitutes the subject matter of the intended science of signs (= semiotics) as viewed by Morris (Morris 1946:17). And based on this, he introduces certain additional terms for discussing this subject.


Already at this fundamental stage in the formation of the new science of signs, Morris has chosen 'behavioristics', as he calls it in line with Neurath, as the point of view that he wishes to adopt in the case of semiotics. In the Foundations of 1938, he stresses that this decision is not necessary (cf. Morris 1971: 21), and also in his 'Signs, Language, and Behavior' of 1946 he explicitly discusses several methodological alternatives ('mentalistic', 'phenomenological' [Morris 1946:30 and Appendix]), but he considers a behavioristic approach more promising with regard to the intended scientific character of semiotics.

From today's point of view, it would no longer be necessary to oppose these different approaches to one another, but as this was the method used by Morris at that time, we will follow his lead for a moment.

Morris did not mention the problem of measurement explicitly. Thus the modes of measurement are restricted to normal perception, i.e., the subjective (= phenomenological) experience of an intersubjective situation restricted to observable stimuli and responses; and the symbolic representation was done in ordinary English (= L1) without any attempt at formalization.


Clearly, Morris did not limit himself to characterizing in basic terms the subject matter of his 'science of signs' but introduced a number of additional terms. Strictly speaking, these terms establish a structure which is intended to shed some theoretical light on 'chaotic reality'. In a 'real' theory, Morris would have 'transformed' his basic characterizations into a formal representation, which could then be formally expanded by means of additional terms if necessary. But he didn't. Thus we can render only some of these additional terms in ordinary English to get a rough impression of the structure that Morris considered important.

Morris used the term interpretant [INT] for all interpreter dispositions (= inner states) causing some response-sequence due to a 'sign = preparatory-stimulus'. The goal-object of a response-sequence which 'fulfils' and in that sense completes the sequence Morris termed the denotatum of the sign causing this sequence. In this sense one can also say that a sign denotes something. Morris assumes further that the 'properties' of a denotatum which are connected to a certain interpretant can be 'formulated' as a set of conditions which must be 'fulfilled' to reach a denotatum. This set of conditions constitutes the significatum of a denotatum. A sign can trigger a significatum, and these conditions control a response-sequence which can lead to a denotatum but does not necessarily do so: a denotatum is not necessary. In this sense a sign signifies the conditions which are necessary for a denotatum, but not sufficient (cf. Morris 1946:17ff). A formulated significatum is then to be understood as a formulation of conditions in terms of other signs (Morris 1946:20). A formulated significatum can be designative if it describes the significatum of an existing sign, and prescriptive otherwise. A sign-vehicle [SV] can be any particular physical event which is a sign (Morris 1946:20). A set of similar sign-vehicles with the same significata for a given interpreter is called a sign-family.
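The relation between significatum and denotatum can be sketched as follows. The concrete conditions ('edible', 'within-reach') are invented for illustration; only the logic --the conditions control the sequence, and a denotatum may or may not be reached-- follows Morris.

    # Sketch: a significatum as a set of conditions controlling a response-sequence.
    # Reaching a denotatum is not guaranteed; the sign signifies in either case.

    significatum = {"edible", "within-reach"}   # conditions necessary for a denotatum

    def run_sequence(object_properties):
        # the response-sequence is 'fulfilled' only if all conditions are met
        if significatum <= object_properties:
            return "denotatum reached"
        return "sequence runs, no denotatum"

    print(run_sequence({"edible", "within-reach", "red"}))   # denotatum reached
    print(run_sequence({"edible"}))                          # no denotatum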

We will restrict our discussion of Morris to the terms introduced so far.

The fact that Morris did not translate these assumptions into a formal representation is a real drawback, because even these few terms contain a certain amount of ambiguity and vagueness.

Morris did not work out a computational model for his theory. At that time, this would have been nearly impossible for practical reasons. Besides, his theory was formally too weak to be used as a basis for such a model.

In what follows we will formalize the concept of Morris to such an extent that we can map it onto the concept of a Turing machine, and vice versa. This will be done with the intention of showing that a computational form of semiotics is not only possible but need not be a reduction.

First, one must decide how to handle the 'dispositions' or 'inner states' (IS) of the interpreter I. From a radical behavioristic point of view, the interpreter has no inner states, but only stimuli and responses (cf. my discussion in Döben-Henisch 1998). Any assumptions about the system's possible internal states would be related to 'theoretical terms' within the theory which have no direct counterpart in reality. If one were to enhance behavioral psychology with physiology (including neurology) in the sense of neuropsychology, then one could identify internal states of the system with (neuro-)physiological states (whatever this would mean in detail). In the following, we shall assume that Morris would accept this latter approach. We shall label such an approach an S-N-R-theory or SNR-approach.


Within an SNR-approach, it is in principle possible to correlate an 'external' stimulus event S with a physiological ('internal') event S' as Morris intended: a stimulus S can exert an influence on some disposition D' of the interpreter I, or, conversely, a disposition D' of the interpreter can cause some external event R.

To work this out, one must assume that the interpreter is a structure with at least the following elements: I(x) iff x = <IS, <f1, ..., fn>, Ax>; i.e., an x is an Interpreter I if it has some internal states IS as objects (whatever these might be), some functions fi operating on these internal states like fi: pow(IS) ---> pow(IS), and some axioms stating certain general dynamic features. These functions we will call 'type I functions' and they are represented by the symbol 'fI'.


By the same token, one must assume a structure for the whole environment E in which those interpreters may occur: E(x) iff x = <<I,S,R,O>, <p1, ..., pm>, Ax>. An environment E has as objects O, at least some interpreters I ⊆ O, and something which can be identified as stimuli S or responses R (with S ∪ R ⊆ O), without any further assumptions about the 'nature' of these different sets of objects. Furthermore, there must be different kinds of functions pi, e.g.:


  1. (Type E functions [fE]:) pi: pow(I ∪ S ∪ R ∪ O) ---> pow(I ∪ S ∪ R ∪ O), stating that there are functions which operate on the 'level' of the assumed environmental objects. A special instance of these functions would be functions of the type pi*: pow(S) ---> pow(R);

  2. (Type E∪I functions [fE∪I]:) pj: pow(S) ---> pow(IS_I), stating that there are functions which map environmental stimulus events into internal states of interpreters (a kind of 'stimulation' function); and

  3. (Type I∪E functions [fI∪E]:) pk: pow(IS_I) ---> pow(R), stating that there are functions which map the internal states of interpreters into environmental response events (a kind of 'activation' or 'response' function).

The overall constraint for all of these different functions is depicted in the diagram 'SNR-MORPHISMS' (figure 9). It shows the basic equation fE = fE∪I o fI o fI∪E (with 'o' read here as left-to-right concatenation), i.e. the mapping fE of environmental stimuli S into environmental responses R should yield the same result as the concatenation of fE∪I (mapping environmental stimuli S into internal states IS of an interpreter I), followed by fI (mapping internal states IS of the interpreter I onto themselves), followed by fI∪E (mapping internal states IS of the interpreter I into environmental responses R).
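The morphism condition can be checked mechanically on a toy example. In the sketch below all concrete encodings are my assumptions; the point is only that the direct function fE and the three-step concatenation agree on every stimulus set.

    # Toy check of the SNR-morphism condition. The string encodings are invented;
    # only the typing of the three functions follows the list above.

    f_EI = lambda s:  {"is:" + x for x in s}     # pow(S)  --> pow(IS): stimulation
    f_I  = lambda st: {x.upper() for x in st}    # pow(IS) --> pow(IS): internal dynamics
    f_IE = lambda st: {"r:" + x for x in st}     # pow(IS) --> pow(R):  response

    def f_E(s):                                  # pow(S) --> pow(R), defined directly
        return {"r:" + ("is:" + x).upper() for x in s}

    stimuli = {"tone"}
    assert f_E(stimuli) == f_IE(f_I(f_EI(stimuli)))   # fE = fE∪I o fI o fI∪E holds
    print(f_E(stimuli))                               # {'r:IS:TONE'}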

Even these very rough assumptions make the functioning of the sign somewhat more precise. A sign as a preparatory stimulus S2 'stands for' some other stimulus S1, and this shall work especially in the absence of S1. This means that if S2 occurs, then the interpreter takes S2 'as if' S1 had occurred. How can this work? We make the assumption that S2 can only work because S2 has some 'additional property' which encodes this aspect. We assume that the introduction of S2 for S1 occurs in a situation in which S1 and S2 occur 'at the same time'. This 'correlation by time' yields some representation '(S1'', S2'')' in the system which can be 'reactivated' each time one of the triggering components S1' or S2' occurs again. If S2 occurs again and triggers the internal state S2', this will then trigger the component S2'', which yields the activation of S1'', which in turn yields the internal event S1'. Thus S2 -> S2' -> (S2'', S1'') -> S1' has the same effect as S1 -> S1', and vice versa. The encoding property is assumed here, then, to be a representational mechanism which can somehow be reactivated.
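This reactivation mechanism is simple enough to be written down directly. The following sketch rests on my own assumptions about the data layout; only the chain S2 -> S2' -> (S2'', S1'') -> S1' follows the text.

    # Sketch of the encoding mechanism: co-occurrence of S1 and S2 stores the
    # representation (S1'', S2''); later, S2 alone reactivates S1'.

    associations = {}                      # the stored pairs (S1'', S2'')

    def perceive_together(s1, s2):
        # 'correlation by time' during the introduction of S2 for S1
        associations[s1] = s2
        associations[s2] = s1

    def internal_event(s):
        # S -> S', plus reactivation of the associated partner state, if any
        partner = associations.get(s)
        return {s + "'"} | ({partner + "'"} if partner else set())

    perceive_together("S1", "S2")          # the training situation
    print(internal_event("S2"))            # {"S2'", "S1'"}: as if S1 had occurred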

After this rough reconstruction of Morris's possible theory of a sign agent as an interpreter, we shall compare Morris's interpreter with Turing's computing device, the Turing machine, to show that a direct mapping can be established between this concept of a sign agent and the concept of a universal computing machine.

In the case of the Turing machine we have a tape with symbols [SYMB] which are arguments for the machine table [MT] of a Turing machine as well as values resulting from computations; i.e., we have MT: pow(SYMB) ---> pow(SYMB).

In the case of the interpreter [I], we have an environment with stimuli [S] which are 'arguments' for the interpreter function fE that yields responses [R] as values, i.e., fE: pow(S) ---> pow(R). The interpreter function fE can be 'refined' by replacing it by three other functions: fEI: pow(S) ---> pow(IS_I), fI: pow(IS_I) ---> pow(IS_I), and fIE: pow(IS_I) ---> pow(R), so that

fE = fEI o fI o fIE.

Now, because one can subsume stimuli S and responses R under the common class 'environmental events EV', and one can represent any kind of environmental event with appropriate symbols SYMBev, with SYMBev as a subset of SYMB, one can then establish a mapping of the kind

SYMB <--- SYMBev <---> EV <--- S ∪ R.



Figure 9: Morris theory - SNR morphism


What then remains is the task of relating the machine table MT to the interpreter function fE. If the interpreter function fE is taken in the 'direct mode' (the case of pure behaviorism), without the specializing functions fE∪I etc., we can directly establish a mapping

MT <---> fE.



Figure 10: Morris theory - Turing Machine and Sign Agent as Interpreter

The argument for this mapping is straightforward: any version of fE can be directly mapped into the possible machine table MT of a Turing machine, and vice versa. In the case of an interpreted theory, the set of 'interpreted interpreter functions' will very probably be a proper subset of the set of possible functions.


If one replaces fE by fE = fE∪I o fI o fI∪E, then one must establish a mapping of the kind

MT <---> fE = fE∪I o fI o fI∪E

The compound function fE = fE∪I o fI o fI∪E operates on environmental states EV --in the case of the TM, the symbols on the tape-- and on the internal states IS of the interpreter. In the case of the TM, the internal states are those symbols on the tape which the TM can "write" on its own, independently of externally provided symbols.

But what about the function fE∪I o fI o fI∪E as such? The machine table MT of the Turing machine is 'open' to any interpretation of what 'kind of states' can be used to 'interpret' the general formula. The same holds true for Morris. He explicitly left open which concrete states should be subsumed under the concept of an internal state. The 'normal' attitude (at least today) would be to subsume 'physiological states'. But Morris pointed out (Morris 1946:30) that this need not be so; for him it was imaginable also to subsume 'mentalistic' states. And as has been pointed out above, it would not be difficult formally to integrate 'phenomenological' states. Thus, what all of these possible different interpretations have in common is the positing that states exist which can be identified as such and can therefore be symbolically represented. From this it follows that we can establish a mapping of the kind

SYMB <--- IS.

Because the tape --but not the initial argument and the computed value-- of a Turing machine can be infinite, one must assume that the number of distinguishable internal states IS of the interpreter that are not functions can also be 'infinite' (although in the case of the interpretation of the IS as physiological states the cardinality of IS would be only finite!). This assumption makes sense: though it cannot be formally proved, it cannot be disproved either.

What remains is the mapping between MT and the compound interpreter function fE∪I o fI o fI∪E. The only constraint on a possible interpretation of the states of a machine table MT is the postulate that the number of machine table states must be finite. In the case of the compound interpreter function fE∪I o fI o fI∪E (which can be considered as a synthesis of many small partial functions), it also makes sense to assume that the number is finite. It is thus fairly straightforward to map any machine table into a function fE∪I o fI o fI∪E and vice versa.
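Ignoring tape mechanics, the direction from a finite interpreter function to a machine table can be sketched in a few lines. The alarm-call example and the sym(...) encoding are my assumptions; the point is only that a finite fE, written out over symbolically encoded events, already has the form of a table which a machine can look up. (A full Turing machine table also carries state and head movement; this is elided here.)

    # Sketch: a finite interpreter function f_E over encoded events SYMBev is
    # at the same time a lookup table in the spirit of a machine table MT.

    f_E_table = {                     # finite f_E over SYMBev, invented entries
        "sym(alarm-call)": "sym(flee)",
        "sym(food-call)":  "sym(approach)",
    }

    def MT(symbol):                   # the table read as a machine: SYMB --> SYMB
        return f_E_table[symbol]

    print(MT("sym(alarm-call)"))      # 'sym(flee)'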

Thus we have reached the conclusion that an exhaustive mapping between Turing's Turing machine and Morris's interpreter is possible (although a concrete interpreter will normally be more restricted). This is a far-reaching result. It enables every semiotician working within Morris's semiotic concept to use the Turing machine to introduce all of the terms and functions needed to describe semiotic entities and processes. As a by-product of this direct relationship between semiotics 'à la Morris' and the Turing machine 'à la Turing', the semiotician has the additional advantage that any of his/her theoretical constructs can be used directly as a computer program on any computational machine. Thus 'semiotics' and 'computational semiotics' need no longer be separately interpreted, because what they both signify and designate is the same. Thus, one can (and should) claim that

semiotics = computational semiotics.

The short discussion above, occasioned by Morris's characterizations of a 'sign' and of 'semiosis', shows that such a definition is far from trivial. It implies a complex mechanism of conditions and functions which must be worked out sufficiently clearly, an undertaking which has yet to be attempted.



7. The Computational Challenge of Symbolic Knowledge

In this final section I would like to sketch the main challenge to computational semiotics --and thereby to computational intelligence in general-- as seen within the overall framework of evolutionary adaptation attached to a planet with a hard deadline for biological structures. If the foregoing reflections are right, then it seems that recent forms of intelligence, manifesting themselves within systems that use language-based communication and science to manipulate their environments and their own "genetic program", are able to accelerate the process of adaptation --including inevitable risks--, but in the actual historical phase they are slowed down by their biological limits to process knowledge.


In figure 11 I have collected some of the main facets of the knowledge space which I take to be relevant.




Figure 11: Facets of the Knowledge Space



The upper part of the diagram gives a layout of the ideal knowledge space as it would look if it were organized according to the principles of scientific theories as well as computing algorithms. The active nodes in this space are human persons and the simple computer programs available today (including so-called 'intelligent' programs) acting as knowledge processors. These knowledge processors have, until now, very narrow processing limits. At the moment, only the technical knowledge processors can be substantially improved and in this sense adaptively changed. The individual human knowledge processors are organized in thousands upon thousands of small working groups worldwide, following highly specialized rules for the production of primary empirical knowledge. In the ideal case, these specialized individual empirical knowledge sources would continuously be integrated into a commonly shared formal framework providing a universal empirical knowledge base for a universal empirical theory of empirical processes. This integrated empirical knowledge base would furthermore have to be continuously extrapolated into the space of possible processes. This space of possible processes would also have to be continuously evaluated, because otherwise the possible processes cannot be used. The evaluation has to be fed back into the integrated knowledge space as a kind of commentary, enabling the individual working groups to get a "feeling" for the possible direction of the overall adaptation process.


The lower part cites some aspects of the media through which the ideal knowledge space has to be processed. Clearly, the quality and velocity of the communication process depend directly on these different kinds of media which are used today for communication. You have your everyday environment of direct personal talks, but this is enriched by lots of secondary artifacts like papers, books, diagrams, protocols and the like; artifacts whose production and implicit policies you cannot control by yourself. Then you have the mass media, which fill up your sensory channels with lots of information which is not intended to support the ideal knowledge space and over whose conditions of production you have even less control. Thus the ideal knowledge space is embedded in a very noisy and fuzzy channel of non-scientific knowledge which can make it very hard or even impossible to keep track of what is really going on in the scientific knowledge process.


Besides the daily problems of noisy and fuzzy communication induced by the sheer amount of data meanwhile available, we have to accept that the concept of the ideal knowledge space is not only far from being realized but is, moreover, far from being a topic of the worldwide science community, not to speak of the worldwide community as such. Integration of knowledge happens until now only within the narrow borders of distinct disciplines, and the question of continuous extrapolation and evaluation is so far no active topic anywhere, neither in research nor in teaching. (The famous Global 2000 Report to the President, following the tradition of the Club of Rome reports of 1972 and 1992, did not have that much impact on the way we organize knowledge. The database systems known today, as well as the so-called enterprise resource systems, are no more than the "stone age" of knowledge spaces.) And indeed, many demanding institutional, social, political, cultural and economic problems --to mention only some of the major aspects-- have to be solved before a worldwide ideal knowledge space can be organized. I will discuss here only those aspects which are related to the point of view of computational semiotics.


Let us start with the symbol grounding problem. [Harnad 1990] is usually cited as the starting point of this discussion. But, in fact, the discussion can be traced back to Aristotle's Categoriae, where he discusses the relationship between experience, language and concepts. The discussion spreads through the centuries, and --besides many other places-- I would also include here Kant's famous Kritik der reinen Vernunft (transl.: Critique of Pure Reason), where he analyzes the relationship between concepts and empirical experience.


In the simplified model shown in figure 12, symbol grounding covers the case that human persons are able to process a language perception O_lang together with a non-language perception O_other in such a way that both perceptions can internally somehow be connected as a meaning association, such that after the internal setup of such a meaning connection a certain language perception O_lang can trigger the output of O_other and vice versa. This processing of meaning associations is completely situation-dependent, fuzzy and inductive. To understand the meaning associations of a certain human system H1, one has to participate in sufficiently similar situations, and one has to be aware of the special conditions under which the processing happens. The concrete encoding of these associations can by no means be deduced from some preexisting abstract model. Thus symbol grounding is somehow a bootstrapping process which cannot be circumvented. This kind of learning is inevitable for the setting up of primary forms of symbolic knowledge.
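A minimal sketch of such an inductive meaning association is given below. The co-occurrence counting and the threshold are my own assumptions; they merely illustrate how a situation-dependent, noisy linkage between O_lang and O_other could bootstrap itself.

    # Sketch of inductive symbol grounding: repeated co-occurrence of a language
    # perception O_lang and a non-language perception O_other builds a meaning
    # association. The counting scheme and threshold are illustrative assumptions.

    from collections import Counter

    meaning = Counter()                          # association strengths

    def experience(o_lang, o_other):
        meaning[(o_lang, o_other)] += 1          # situation-dependent learning

    def triggered_by(o_lang, threshold=2):
        # perceptions this language token has come to mean
        return [o for (w, o), n in meaning.items() if w == o_lang and n >= threshold]

    for _ in range(3):
        experience("'ball'", "round-red-object") # repeated shared situations
    experience("'ball'", "dog")                  # one noisy co-occurrence

    print(triggered_by("'ball'"))                # ['round-red-object']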


Seen from another point of view, one can imagine other types of symbol grounding environments. As you can see in figure 13, the usual process of symbol grounding interacts with the real world, because the real world serves as a kind of universal interface for the participating systems. One could also speak of the real world as a kind of bus system connecting different kinds of subsystems. Analogously, one can view the modern World Wide Web (WWW) as such a universal interface or bus system.





Figure 12: Symbol Grounding, idealized




Figure 13: Symbol Grounding in a different environment



Clearly it is imaginable to organize a symbol grounding process interacting with the WWW. In this case the different kinds of perceived objects have to be presented in a way that the setting up of a meaning association is possible. Using the WWW --or any other kind of artificial medium-- for organizing symbol grounding processes as such is comparatively simple. But the interesting question here is another one, namely whether and how those alternative interfaces could contribute to the coupling of human-based symbol grounding processes interacting with the real world and artificial symbol grounding processes.


As the robotics community realized only recently, and until today only partially, language-based communication between human processors and artificial processors depends mainly on the availability of a commonly shared language. Whereas a human processor is genetically programmed for symbol grounding by interaction with the real world, artificial systems are so far not programmed for this. We do not even really know what a program for such a task should look like. Besides the technical details to be solved, the main question is how one can synchronize the symbol grounding of human and artificial processors. In the case of human processors we know that the perception process, as well as all the other subprocesses (abstraction, association, memorization etc.), have to be sufficiently isomorphic between different human processors, because the internal states of different human processors cannot be used directly for the coordination of the symbol grounding process. Thus the first interesting milestone in real language-based man-machine communication will be reached when we can construct artificial processors which have the necessary isomorphic processing capability; otherwise a real coupling between human processors and artificial processors cannot happen. To reach this milestone a lot of work has to be done. Let us name these artificial processors AP1.


Now let us assume --as a thought experiment-- that at some time in the future we will be able to program artificial processors that can keep up with the symbol grounding processes of human processors. Although a complete theory of human cognition is still lacking --and for reasons of principle perhaps never will be complete-- we know at least this much: besides symbol grounding, human processors can set up many abstract models which can be used to classify objects, events and relations. From the abstract models, humans can also infer/deduce concrete cases as instantiations. Owing to the dynamic and noisy perceptions, all the processes involved are highly adaptive and fuzzy. Because the meaning of more abstract and theoretical terms will depend on these dynamically abstracting and inferring processes, an artificial processor has to cope with these structures in order to couple its meaning space with the meaning space of human processors. Thus in this case, too, one has to assume an isomorphism, because a direct coordination is not possible. The construction of those isomorphic processes depends on knowledge about the human processor which seems not yet to be available. Let us name these artificial processors AP2.


Let us look further ahead and assume that somehow in the future we will have solved the problem of coupled abstraction and instantiation as part of language-based communication. The interesting question then would be whether this would help us in reducing the previously mentioned complexity problem. The complexity problem is induced on human processors on account of their clear capacity limits with regard to symbol grounding, abstraction, inference, extrapolation, evaluation, updating etc. Presupposing the availability of AP2-processors, one could imagine that these would be able to process much more language-based meaning than the human processors of the year 2005. As long as they are processing the different subdomains within the generally known meaning space, it seems conceivable that these processors can be of some help, because they could then collect, integrate, abstract, explore, etc., beyond the capacity of human processors. And because they are operating within the actual global meaning space, it should be possible for human processors somehow to understand the output of the artificial processors. But what would happen if these AP2-processors extended the meaning space, on account of their bigger capacities, into possible combinatorial meaning subspaces which have not yet been processed by human processors? In this case --it seems-- the advantage of the AP2-processors can again be weakened or be in vain, because the phenomenon of complexity breakdown will then return. If mankind is not --as in bad science fiction plots-- to lose control to the AP2-processors, the human processors themselves have to be developed much further than known today. Because the processing capabilities available today depend on the currently active genetic program, the human processors have to develop their own genetic program to a point where they can extend their processing capabilities to cope with the AP2-processors, also during possible extensions of the meaning space.


From this it follows that the complexity breakdown can only be avoided as long as the processing of human and artificial processors happens in a coupled fashion with nearly equal meaning spaces. This implies that the development of the human processor has to be pushed much further than in the past. The opinion that the future adaptation of biological structures can happen without substantial changes in the genetic program of human processors seems to be a dead end; the idea of "keeping human life" alive without changing the genetic program would be the fastest way to destroy it.


With these far-reaching explorations of the possible management of meaning spaces representing the future knowledge spaces I will close this section. I hope it has become clear that computational semiotics has to play a major role within computational intelligence as such, and that computational intelligence has in turn to be framed by the concept of a planetary adaptation.



8. Epilog

As first reactions of audiences to the above ideas have shown, this concept of intelligence and science can have a major impact on many of our usual pictures of the world, of mankind, and of several kinds of ethical norms. Naturally, these questions have to be discussed at length and in detail. But the author saw his task in writing this essay primarily as the exploration of the challenge; once the challenge has been clarified, one should examine all the consequences and the possible roadmaps for dealing with it.



9. References


P.B.Andersen, A Theory of Computer Semiotics. Semiotic Approaches to construction and assessment of computer systems, Cambridge University Press, 1999


Michael A.Arbib (ed), The Handbook of Brain Theory and Neural Networks, Cambridge (MA): Bradford Book, 2003 (2nd ed.)


Wolfgang Balzer [1982], "Empirische Theorien: Modelle, Strukturen, Beispiele", Wiesbaden (Germany):Friedr.Vieweg & Sohn


Wolfgang Balzer, C.Ulises Moulines, Joseph D.Sneed [1987], "An Architectonic for Science", Dordrecht (NL):D.Reidel Publishing Company


Istvan S.Batori, Winfried Lenders, Wolfgang Putschke (eds) [1989], "Computational Linguistics. An International Handbook on Computer Oriented Language Research and Applications", Berlin: Walter de Gruyter & Co.


Ludy T.Benjamin jr. (ed), A History of Psychology. Original Sources and Contemporary Research, McGraw-Hill Book Company, 1988


Paul Boussiac (ed), Encyclopedia of semiotics, New York - Oxford: Oxford University Press, 1998


Gordon H.Bower, Ernest R.Hilgard [1981, 5th ed.], Theories of Learning, Prentice-Hall Inc.


Peter J.Bowler [1989 rev.ed.], Evolution – The History of an Idea, Berkeley, University of California Press


Barry B.BREY, The Intel Microprocessors. Architecture, Programming, and Interface, Prentice Hall/Pearson Education International, 2003 6th ed.


Herbert A.Bruderer (ed) [1982], "Automatische Sprachübersetzung", Darmstadt: Wissenschaftliche Buchgesellschaft


Jean-Pierre Changeux, Der neuronale Mensch. Wie die Seele funktioniert - die Entdeckung der neuen Gehirnforschung, Reinbek bei Hamburg: Rowohlt Verlag, 1984 (translated from the French edition of 1983)


Noam Chomsky [1959], "A review of Skinner's Verbal Behavior", in: Language, 35, pp.26-58


Francis Cottet/ Joelle Delacroix/ Claude Kaiser/ Zoubir Mammeri, Scheduling in Real-Time Systems, Chichester (Engl.): John Wiley & Sons

Holk Cruse, Jeffrey Dean, Helge Ritter [1998], Die Entdeckung der Intelligenz – oder Können Ameisen denken?, München, Verlag C.H.Beck


M.Davis (ed): The Undecidable. Basic Papers On Undecidable Propositions, Unsolvable Problems And Computable Functions, Hewlett (NY): Raven Press, 1965


Randall DAVIS/ Douglas B.LENAT [1982], "Knowledge-Based Systems in Artificial Intelligence", New York et al: McGraw-Hill


D.Diaper, N.Stanton (eds), The Handbook of Task Analysis for Human-Computer Interaction, Lawrence Erlbaum, 2003


A.Dix, J.E.Finlay, G.D.Abowd, R.Beale, Human-Computer Interaction, Pearson Education, 2003 3rd ed.


Gerd Döben-Henisch, "Semiotic Machines - An Introduction", in: Ernst W.Hess-Lüttich/ Jürgen E.Müller (eds), Signs and Space - Zeichen und Raum.,Tübingen: Gunter Narr Verlag, 1998, pp.313-327


Gerd Döben-Henisch, "Turing, the Turing Machine, and the Concept of Sign", in: W.Schmitz, Th.A.Sebeok (eds.), Das Europäische Erbe der Semiotik; The European Heritage of Semiotics, THELEM: W.E.B. UNIVERSITÄTSVERLAG , to appear June 2004


Irenäus Eibl-Eibesfeldt [1980, 6th rev.ed.], Grundriss der vergleichenden Verhaltensforschung, München, Piper & Co Verlag


W.K.Estes (ed) [1975], "Handbook of Learning and Cognitive Processes", Vol.1-2

W.K.Estes (ed) [1976], "Handbook of Learning and Cognitive Processes", Vol.3-4


Udo L.Figge (1994), "Semiotic principles and systems: Biological foundations of semiotics", in: Winfried Nöth (1994), pp.25-36


Robin Gandy [1988], "The Confluence of Ideas in 1936", in: Rolf Herken (ed) [1988], "The Universal Turing Machine. A Half-Century Survey", Hamburg-Berlin: Kammerer & Unverzagt, pp.55-111


Kurt Gödel: "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I", in: Monatshefte für Mathematik und Physik, vol.38 (1931), pp.175-198


Kurt Gödel, "Remarks before the Princeton bicentennial conference on problems in mathematics", 1946. In: Martin Davis, 1965: pp.84-87


Stephen Grossberg [1982], "Studies of Mind and Brain. Neural Principles of Learning, Perception, Development, Cognition and Motor Control", Dordrecht (Holland): D.Reidel Publishing Company
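

Stevan Harnad [1990], "The Symbol Grounding Problem", in: Physica D, vol.42, pp.335-346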


Marc D.Hauser [1996], The Evolution of Communication, MIT Press


Andreas M.Heinecke, Mensch-Computer-Interaktion, Fachbuchverlag Leipzig, 2004


Rolf Herken (ed) [1988], "The Universal Turing Machine. A Half-Century Survey", Hamburg-Berlin: Kammerer & Unverzagt


Andrew Hodges [1983, 2nd ed. 1994], "Alan Turing, Enigma", Wien-New York: Springer Verlag


A.Hodgkin/ A.Huxley, "A quantitative description of membrane current and its application to conduction and excitation in nerve", in: J.Physiol. (London) 117, pp.500-544, 1952


C.L.Hull [1943], Principles of Behavior, New York: Appleton-Century-Crofts


Klaus Immelmann et al (eds)[1982], Verhaltensentwicklung bei Mensch und Tier, Berlin, Verlag Paul Parey


Immanuel Kant, Kritik der reinen Vernunft, Bd.505 Philos.Bibliothek, Hamburg: Felix Meiner Verlag, (1781/1787, 1956)


George J.Klir, Facets of Systems Science, New York - London: Plenum Press, 1991


William Kneale/ Martha Kneale, The Development of Logic, Oxford: Clarendon Press, 1986


Thomas Kron (ed) [2002], Luhmann modelliert. Sozionische Ansätze zur Simulation von Kommunikationssystemen, Opladen: Leske + Budrich

Lorenz Krüger (ed)(1978), "Thomas S.Kuhn. Die Entstehung des Neuen. Studien zur Struktur der Wissenschaftsgeschichte", Frankfurt am Main (Germany): Suhrkamp Verlag


Bernd-Olaf Küppers (1990, 2nd ed.), "Der Ursprung biologischer Information. Zur Naturphilosophie der Lebensentstehung", München-Zürich: Piper Verlag


Thomas S.Kuhn (1957), "The Copernican Revolution. Planetary Astronomy in the Development of Western Thought", Harvard University Press


Thomas S.Kuhn (1962), "The Structure of Scientific Revolutions", University of Chicago Press


Soren Lauesen, "User Interface Design. A Software Engineering Perspective", London et al: Pearson - Addison Wesley, 2005


Konrad Lorenz [1965], Evolution and Modification of Behavior, Chicago: University of Chicago Press


Konrad Lorenz, Die Rückseite des Spiegels. Versuch einer Naturgeschichte menschlichen Erkennens, München: Piper, 1983


Niklas Luhmann [2000], Organisation und Entscheidung, Opladen: Westdeutscher Verlag

Niklas Luhmann [1997], Die Gesellschaft der Gesellschaft, Frankfurt am Main: Suhrkamp

Niklas Luhmann [1996], Soziale Systeme. Grundriss einer allgemeinen Theorie, Frankfurt am Main: Suhrkamp

Niklas Luhmann [1994], Die Wissenschaft der Gesellschaft, Frankfurt am Main: Suhrkamp

Melvin H.Marx/ W.A.Cronan-Hillix, Systems and Theories in Psychology, McGraw-Hill Book Company, 4th ed., 1987


Ernst Mayr [1988], Eine neue Philosophie der Biologie (engl. Toward a New Philosophy of Biology), Wissenschaftl. Buchgesellschaft, Darmstadt


Donella H.Meadows, Dennis L.Meadows, Jørgen Randers [1992], Beyond the Limits, Vermont: Chelsea Green Publ. Co.

Donella H.Meadows et al. (eds) [1972], The Limits to Growth. A Report to the Club of Rome's Project on the Predicament of Mankind, New York: Universe Books


Jürgen Mittelstrass (ed), Enzyklopädie Philosophie und Wissenschaftstheorie, Vol.1-4,Stuttgart – Weimar: Publisher J.B.Metzler, 1995-1996


Charles W. Morris, Writings on the General Theory of Signs. The Hague - Paris: Mouton Publ. , 1971


Charles W. MORRIS: Symbolik und Realität, 1925. German translation of the unpublished dissertation of Morris by Achim ESCHBACH. Frankfurt: Suhrkamp Verlag, first published 1981


Charles W. MORRIS: Die Einheit der Wissenschaft, 1936. German translation of an unpublished lecture by Achim ESCHBACH, In: MORRIS 1981, Suhrkamp Verlag, Frankfurt, pp.323-341

Charles W. MORRIS: Logical Positivism, Pragmatism, and Scientific Empiricism. Paris: Hermann et Cie. 1937

Charles W. MORRIS: Foundations of the Theory of Signs, 1938. Chicago: University of Chicago Press (repr. in: MORRIS 1971)

Charles W. MORRIS: Signs, Language and Behavior. New York: Prentice-Hall Inc. 1946

Charles W. MORRIS: Philosophie als symbolische Synthesis von Überzeugungen, 1947. German transl. of an article in: BRYSON, Jerome et al (eds.), "Approaches to Group Understanding", New York. The German transl. is published in: Charles W. MORRIS [1981]

Charles W. MORRIS: The Open Self. New York: Prentice-Hall Inc.1948

Charles W. MORRIS: Die Wissenschaft vom Menschen und die Einheitswissenschaft. German transl. of an article in: Proceedings of the American Academy of Arts and Sciences, vol.80 (1951), pp.37-44. The German transl. is published in: Charles W. MORRIS [1981]

Charles W. MORRIS: Towards a Unified Theory of Human Behavior. New York: Prentice Hall Inc.1956

Charles W. MORRIS: Signification and Significance. A Study of the Relations of Signs and Values. Cambridge (MA): MIT Press 1964

Charles W. MORRIS: Writings on the General Theory of Signs. The Hague - Paris: Mouton Publ. 1971, pp.148-170


Nils J.NILSSON [1998], "Artificial Intelligence: A New Synthesis", San Francisco (CA): Morgan Kaufmann Publ.


Nils J.NILSSON [1982], "Principles of Artificial Intelligence", Berlin et al: Springer

Winfried Nöth, Handbuch der Semiotik, Stuttgart - Weimar: Verlag J.B.Metzler, 2000, 2nd.rev.edition


Winfried Nöth (ed), (1994), "The Origins of Semiosis. Sign Evolution in Nature and Culture", Berlin - New York: Mouton de Gruyter


Winfried Nöth (1985), "Handbuch der Semiotik", Stuttgart - Weimar: Verlag J.B.Metzler


OMG [2003], "MDA Guide Version 1.0.1", http://www.omg.org


Andreas Paul [1998], Von Affen und Menschen – Verhaltensbiologie der Primaten, Darmstadt, Wissenschaftliche Buchgesellschaft


Charles S.Peirce, "Lecture on Kant", March-April 1865, in: Max H.Fisch et al. (eds), Writings of Charles S.Peirce. A Chronological Edition, Vol.I, Bloomington: Indiana University Press, 1982, pp.240-256


Rolf Pfeifer, Christian Scheier (eds) [1999], "Understanding Intelligence", Cambridge (MA): The MIT Press


Friedemann Pulvermüller (1996), "Neurobiologie der Sprache", Düsseldorf et al.: Pabst Science Publishers

Elaine Rich [1983], "Artificial Intelligence", New York: McGraw-Hill


Gerhard Roth, "Das Gehirn und seine Wirklichkeit. Kognitive Neurobiologie und ihre philosophischen Grenzen", Frankfurt am Main: Suhrkamp Verlag, 2nd ed., 1995


Alexander Rosenberg [1985], The Structure of Biological Science, Cambridge, Cambridge University Press


Peter Rothe [2000], Erdgeschichte – Spurensuche im Gestein, Darmstadt, Wissenschaftliche Buchgesellschaft


David E.RUMELHART/ James L.McCLELLAND et al. [1986] "Parallel Distributed Processing. Explorations in the Microstructure of Cognition. Vol.1: Foundations"; The MIT Press


David E.RUMELHART/ James L.McCLELLAND et al. [1986] "Parallel Distributed Processing. Explorations in the Microstructure of Cognition. Vol.2: Psychological and Biological Models"; The MIT Press

Marco Schmitt [2002], "Ist Luhmann in der Unified Modeling Language darstellbar? Soziologische Beobachtungen eines informatorischen Kommunikationsmediums", in: Thomas Kron (ed), Luhmann modelliert. Sozionische Ansätze zur Simulation von Kommunikationssystemen, Opladen: Leske + Budrich, pp.27-53

Helmut Schnelle (1994), "Language and Brain", in: Winfried Nöth (1994), pp.339-363

Jörg Siekmann, Graham Wrightson (eds)[1983] "Automation of Reasoning. Vol.1. Classical Papers on Computational Logic 1957-1966", Berlin: Springer-Verlag



Jörg Siekmann, Graham Wrightson (eds)[1983], "Automation of Reasoning. Vol.2. Classical Papers on Computational Logic 1967-1970", Berlin: Springer-Verlag


Rolf Siewing (ed)[1978], Evolution – Bedingungen, Resultate, Konsequenzen, Stuttgart, Gustav Fischer Verlag


Frederick Suppe (ed) [1979, 2nd. ed], "The Structure of Scientific Theories", Urbana: University of Illinois Press


A.M.Turing, "On Computable Numbers with an Application to the Entscheidungsproblem", in: Proc. London Math. Soc., Ser.2, vol.42 (1936), pp.230-265; received May 25, 1936; Appendix added August 28; read November 12, 1936; corr. ibid. vol.43 (1937), pp.544-546. Turing's paper appeared in Part 2 of vol.42, which was issued in December 1936 (reprint in M.Davis 1965, pp.116-151; corr. ibid. pp.151-154).


Peter D.Ward, Donald Brownlee [2000], "Rare Earth. Why Complex Life is Uncommon in the Universe", New York: Springer-Verlag New York, Inc.


Michael Weingarten [1993], Organismen – Objekte oder Subjekte der Evolution, Darmstadt, Wissenschaftliche Buchgesellschaft


Patrick Henry Winston, Sarah Alexandra Shellard (Eds) [1990], "Artificial Intelligence at MIT Expanding Frontiers, Vol.1+2", Cambridge (MA) - London: MIT Press


Ludwig Wittgenstein [1922], "Tractatus Logico-Philosophicus"


Edward Nash Yourdon (ed)[1979], "Classics in Software Engineering", New York: Yourdon Press


www.wikipedia.org, History of Computing Hardware

