URBAN FICTION 2004
Reinforcing the Global Heartbeat -
Introducing the PlanetEarthSimulator Project
Prof. Dr. Gerd Doeben-Henisch
University of Applied Sciences Frankfurt am Main
Department of Computer Science and Engineering
INM Institute for New Media Frankfurt am Main
Abstract: The article starts with some observations regarding conflicts in our daily life, then presents a working hypothesis about complexity as a negative force, and finally outlines a strategy which could be of some help in overcoming the identified problem.
Keywords: Environmental complexity, communication, mental facts, meaning, planet earth simulator, learning, computational anthropology/ intelligence/ semiotics
1. Dysfunctional Heads? Limited Time
If you talk these days with people from different companies, organisations, and institutions, you very often hear stories which sound stupendously similar: the "heads" --those who have to take the final decisions-- and the experts --those who have the experience and have to do the real work-- are in deep disagreement. The experts engage themselves for the sake of the company; they present papers and ask for decisions which have to be taken, but the heads don't answer. They seem to prefer to be absent; they have weird excuses, or they attack the experts with proposals which seem to come from nowhere and whose relationship to the working reality still has to be defined, at least for those who have to do the real work.
Besides the many factors which could be identified as part of the problem in such a situation, there clearly is a strong temptation for the experts to classify their heads as "bad" heads, heads who are personally unable to react to the challenge in the appropriate way. If these experts were right then, clearly, we would be living in a dangerous situation. Not only would those things which should have been done remain undone; much worse would be the effect on the motivation of all the experts. It is only a question of time until they would be frustrated to a degree which can paralyze a whole organisation.
Although personally dysfunctional heads could be disastrous for their organisations, it need not be personal inability which makes the heads perform so badly. There is at least one other factor which can induce these dysfunctionalities. This other factor is complexity.
The term complexity has been used in computer science for decades as a measure to classify a problem as tractable or not in realistic time (cf. [GAREY 1979]). This question of tractability arises because real computers have real limits, which mostly show up in terms of processing time.
Measuring complexity by the time needed does usually not take into account the internal structure of the problem which has to be processed. But many arguments suggest that the increase in processing time is directly related to the structural complexity of a problem. The structure of a problem is induced by the interplay of sets, relations, and dynamic functions. Structural complexity thus depends on properties like being linear or non-linear and 1-dimensional or n-dimensional. With the complexity measure of computer science, however, one can classify complexity without really knowing the details of the structure which causes it.
Thus one can use the time T which a computer needs to compute a problem P to classify the problem according to the time needed to solve it.
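This classification by computing time can be illustrated with a small sketch (my own illustration, not from the article; the function names and the machine rate are assumptions). Even a very fast machine crosses the boundary of "realistic time" as soon as T grows exponentially with the problem size n:

```python
# Illustrative sketch: classifying problems by the time T(n) a machine
# needs as the problem size n grows. All names here are hypothetical.

def steps_linear(n):        # a tractable growth law: T(n) = n
    return n

def steps_exponential(n):   # an intractable growth law: T(n) = 2**n
    return 2 ** n

RATE = 10 ** 9  # assumed machine speed: 10^9 elementary steps per second

for n in (10, 30, 60):
    t_lin = steps_linear(n) / RATE
    t_exp = steps_exponential(n) / RATE
    print(f"n={n:2d}  linear: {t_lin:.2e} s   exponential: {t_exp:.2e} s")
```

At n = 60 the linear problem still finishes in a tiny fraction of a second, while the exponential one already needs more than a human lifetime; the structural details of P need not be known to make this classification.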
At first glance human persons seem to be different from the computers known today. But as neurobiology tells us more and more, the biological substrates which are responsible for the communication of the cells in the human body are concrete systems using discrete units of communication (hormones, electrical pulses etc.) (cf. [SHEPHERD 1994]). From this it follows immediately that information processing in the human body has clear upper and lower boundaries regarding the processing time needed and the possible amount of throughput. Numerous psychological and neurobiological experiments demonstrate this by showing finite limits of human perception and cognition (cf. e.g. [GERNSBACHER 1994], [HILGARD 1979], [KLIX 1980], [SCHIFF 1980]).
Combining this view of the concretely limited performance of the human biological system with the concept of computational complexity, one can imagine that the permanent interaction of a human system with its environment can produce minimal to large problems, depending on the complexity of the environment. And it is a simple thought experiment to consider a situation where the degree of complexity in the daily environment of humans has reached a point where the individual capacity to deal with such complexity is far surpassed (cf. figure 1).
Figure 1: Complexity surpassing human limits
2. Complexity Breakdown?
If a population of human systems reached a situation where the complexity of the environment is higher than the individual capacity of most of its members, then this situation could seriously endanger the whole population. The bigger the environmental complexity, the bigger would become the rate of errors, failures, and dysfunctional behavior, and --as one of many other effects-- the emotional disturbances which can lead people into severe depression or even bodily illness. This breakdown caused by environmental complexity is what I will call complexity breakdown.
It is an interesting empirical question how many countries are already in such a situation of a complexity breakdown. There are many phenomena to report which suggest that we are at least in a transitional phase, one in which the real complexity makes the real limits of everybody more visible than before.
The heads are in a bad position here: their roles are usually defined and surrounded by strong expectations that they will bring success to the organisation. Failure is not allowed. Only very strong persons can handle such situations, but even those have --in case of a complexity breakdown-- no real chance! Their strength can, on the contrary, eventually make things even worse...
If the processing of complexity is the real heartbeat of human societies, then it is the complexity breakdown which can severely change the heartbeat from synchrony to asynchrony.
In what follows I presuppose that mankind is indeed at least in a transitional phase leading to a complexity breakdown. And in search of possible solutions to this problem I will analyze it a bit more in the light of the concepts of communication and learning.
3. The Heartbeat of Mind
If we try to understand the nature of the problem of a complexity breakdown a bit more, we will inevitably be led to the phenomenon of communication.
Without communication, human systems are purely black boxes, completely isolated systems. Communication is either direct, peer-to-peer, or mediated by some technical device or institutional process; in the latter case we can speak of a medium which takes as input the communicative output of one system and delivers as output the message as it has been processed by the medium.
Usually the communicative acts are not given by the occurrence of some communicative events as such. It is, e.g., not the sound as such which constitutes a communicative act in the acoustic sphere, but the way a hearing system interprets this sound. And this interpretation of the perceiving system has to be in synchrony with the way the speaking system intends to be interpreted.
Thus communication is connecting otherwise separated individual meaning spaces allowing them to interact (cf. figure 2).
Figure 2: Communication interrelates meaning spaces
The individual meaning space is related to the consciousness of a person, to its self-awareness; the contents of a consciousness are called phenomena or mental states by philosophers. In a technical philosophical sense this also includes emotions as well as any kind of something which can be thought of.
The content of this individual meaning space is dynamically changing. It is never completely empty or fixed, and the degree of variety depends partly on the activity of the system; this activity is called learning. Through learning a human person can, e.g., associate, transform, and combine given mental states into newly arranged states.
In this view learning presupposes communication, and vice versa communication is only possible if the sharing systems have run through similar learning processes. Thus this reciprocal nature of communication and learning constitutes the heartbeat of human culture. But communication as a concrete process is limited to a maximal number of items per time unit. And this holds for learning too!
Thus for every living individual there is a certain shape of maximal possible communicative and learning acts which cannot be surpassed without either changing the biological structure as such or supporting the individual system by an artificial system which helps to absorb complexity (cf. figure 3).
Figure 3: Maximal possible number of communicative and learning acts of an individual
It would be interesting to know whether there is such a thing as the normal rate of the heartbeat of human culture, which would determine the theoretically maximal possible pace of change; the real pace should usually be far below this theoretical maximum.
4. Mind and Brain
To some persons it can perhaps seem a bit outdated when I use the terms phenomenon or phenomena here in connection with mental states and consciousness; in the era of neurobiology and brain research people tend to think that these old mentalistic terms are of no use any more. This is a dangerous position.
I myself have developed --and probably will develop more-- mathematical models to interpret neurobiological data, and I am really enthusiastic about the new horizon of understanding enabled by modern brain research. But one has to keep things clear: studying the biological/ chemical/ physical machinery of the brain does not replace the function as such!
If you want to be a good mathematician, it is of no great help to know how the neurons of your brain are working. You have to use your brain as it is, and through this usage you have to generate the necessary mental facts in your brain so that you can behave like a mathematician. Thus, although today's brain science is associated with much hope for the coming future, we will surely for many more years be bound to our consciousness as self-awareness embedded in a pre-programmed body. And it will be these learned and communicated mental states which will to a high degree determine our behavior (cf. figure 4).
Figure 4: Body, brain, and mental facts of the consciousness
There is another reason why a philosophical and/or psychological analysis of phenomenal experience is still important today. It is an argument from the theory of science (in German: Wissenschaftstheorie) (cf. e.g. [BALZER 1982, 1987]).
To understand the functionality of neuronal structures it is necessary to correlate the neurological data with at least behavioral data (mathematically, a mapping). But because behavioral data have a human-relevant meaning only in the light of the accompanying self-awareness of some human person, it follows that a human-relevant interpretation of neuronal processes can only be achieved through a close cooperation of brain research and a philosophically based phenomenology (or intuitionistic psychology). Inversely, it would be of great help to support phenomenological analysis with neuronal findings. That today these two disciplines rather oppose each other than cooperate is one of those stories showing that science is not only science.
The considerations so far can give us some first hints in which directions solutions for an upcoming complexity breakdown could possibly be found.
There are at least two main strategies: (i) changing the internal structure of our body and brain to expand the limits given today, and (ii) complementing the human body (and brain) by assistive external structures which do the job of digesting complexity to a degree which makes this complexity feasible within the actual limits of the human body. A third strategy could be a combination of (i) and (ii).
Strategy (i) is the subject of disciplines like neurobiology, biology, genetics and the like. Strategy (ii) has no official promoters today. Candidates to do this could be computer science (computational intelligence, computational semiotics, human-computer interaction etc.; cf. e.g. [DOEBEN-HENISCH 2002, 2004], [STEELS 1997], [DIX 2004]), cognitive psychology, computational anthropology etc.
In what follows I will primarily deal with strategy (ii). Strategy (i) will be a natural consequence of (ii).
5. Interfacing the Human Mind
Following strategy (ii) we have to investigate human systems accompanied by technical systems which are designed to down-scale the environmental complexity such that human systems are able to consume the output of these assistive systems within their given cognitive limits (cf. figure 5).
Figure 5: Human and technical system sharing the same environment
Besides all the specific properties of the typical human communication channels, there is one concept which has to be mastered if these assistive technical systems should really be able to serve human systems in reducing the cognitive complexity of given environments. This key concept is the concept of meaning as part of communication. The problem with the concept of meaning is sketched in the following figure (cf. figure 6).
Figure 6: The structure of meaning in humanlike communication
The basic structure of meaning consists in the fact that the meaning of a sign f2 is part of a signifying/ referential relationship between the sign f2 and that something f1 which is connected to the sign through this relationship. The something f1 can be any kind of mental state or combination of mental states: perceptions of something in the perceivable environment, memorized structures, emotions, pains etc.
The important point here is the fact that this relationship, which constitutes the something f1 as the meaning of some sign f2, is completely immaterial; from the point of view of self-awareness this meaning-constituting relationship is pure knowledge, existing only as a known relationship within the consciousness of the knowing system! And these possible meanings within signifying relations are primary or radical facts in the sense that they are not representable by something else. Only on the basis of such meaningful signs like f2 can one translate signs into other signs. But the sign-constituting relationship as such cannot be substituted by any other artificial construct.
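The privacy of the f1-f2 relation can be sketched in a few lines of code (a minimal illustration of my own, not the article's formalism; all names are assumptions). Each system holds its own sign-to-state mapping; another system may share the sign but not the relation:

```python
# A minimal sketch of the meaning relation: a sign f2 is linked to some
# internal state f1 only inside a particular system; the relation itself
# is private to that system. Class and method names are hypothetical.

class MeaningSpace:
    def __init__(self):
        self._relation = {}              # sign f2 -> internal state f1 (private)

    def associate(self, sign, state):
        self._relation[sign] = state     # constitute state as the meaning of sign

    def interpret(self, sign):
        return self._relation.get(sign)  # None if the sign means nothing here

a, b = MeaningSpace(), MeaningSpace()
a.associate("red", ("percept", "wavelength-long"))
# The same sign carries a different meaning -- or none -- in another system:
print(a.interpret("red"))   # ('percept', 'wavelength-long')
print(b.interpret("red"))   # None
```

Only the outputs of `interpret` can be exchanged between systems; the dictionary itself, the analogue of the meaning-constituting relationship, never crosses the boundary.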
Technical systems which should be able to communicate with human systems in a human-like manner have to be able to mimic this kind of meaning-constituting structure. How can this be done?
Already in 1948, Alan Mathison Turing, the father of the modern computer, suggested in a paper ("Intelligent Machinery") that computers have to undergo learning processes similar to human persons if they should be able to become human-like partners for human systems. They should not have all the knowledge inside their structures preprogrammed from the beginning; instead they should start with some learning ability which could then perceive and learn the environment as it is.
Turing did not analyze this idea more deeply. But in the light of our knowledge today he seems to be right with regard to the direction of a possible solution. The nature of the meaning-constituting relationship at the root of human meaning spaces cannot simply be copied from one system to another. With the knowledge we have today we can only try to build a technical system which is able to mimic the mechanism which enables the before-mentioned meaning structure in the human system.
Similar to children, who learn how to name perceivable facets of their environment by experiencing the behavior of their parents, one must enable technical systems to do the same thing: activating meaning relations between perceivable aspects of the environment and temporally associated sounds or other sign-matters.
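One simple mechanism with this flavour is cross-situational learning, sketched here under stated assumptions (the class and its methods are my own illustration, not a mechanism the article specifies): a learner hears a sound while perceiving a scene, keeps co-occurrence counts, and over many episodes the most stable referent emerges as the sign's meaning:

```python
# Hedged sketch of child-like sign learning: count which perceived
# referents co-occur with a heard sound; the most frequent one becomes
# the sound's meaning. All names here are illustrative assumptions.

from collections import Counter, defaultdict

class CrossSituationalLearner:
    def __init__(self):
        self.counts = defaultdict(Counter)     # sound -> Counter of referents

    def observe(self, sound, scene):
        for referent in scene:                 # the sound co-occurs with
            self.counts[sound][referent] += 1  # everything perceived now

    def meaning(self, sound):
        c = self.counts[sound]
        return c.most_common(1)[0][0] if c else None

learner = CrossSituationalLearner()
learner.observe("ball", {"ball", "table"})
learner.observe("ball", {"ball", "dog"})
learner.observe("ball", {"ball", "cup"})
print(learner.meaning("ball"))   # ball -- the only referent in every scene
```

The point of the sketch is structural: the meaning relation is not copied in, it is activated inside the learner through timely association, exactly the route the text demands for technical systems.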
And because the mechanisms in the human system which are responsible for the constitution of meaning are concrete ones, embodied in the brain and in all those systems which enable the brain to work --more or less the whole body!--, one has to build technical systems which mimic the behavior of these concrete structures, the structures of the brain and the structures of the surrounding body. In this sense strategy (ii) implies strategy (i). A combination of strategies (i) and (ii) therefore seems to be inevitable.
Perhaps there will be some day in the future when human systems have reached a state of knowledge where they have understood the functions embodied in the brain to a degree that they can map this knowledge into formal models which allow some translation into other kinds of implementations than those used in the biological systems known today; then intelligent systems could perhaps be built which have different designs than those of today. But for several reasons this is very unlikely, and the time when this could happen is probably very far away.
6. Learning to Learn
Stating that machines should learn --as Turing did-- is one thing; explaining in detail what this means is another. The literature related to the topic of learning is immense, in the discipline of psychology as well as in the discipline of machine learning (cf. e.g. [BOWER 1981], [HUTCHINSON 1994], [LANGLEY 1996], [MACKINTOSH 1994], [SUTTON 1998]). This is not the place to discuss these topics extensively. But the lesson to be learned from all these discussions so far is that we still have to learn what it means to learn. In such a situation it is not wise to design the final system which embraces everything. It seems more realistic to design an experimental process which allows the learning of learning through a continuous cycle of model building, experiment, and refinement of the models. Furthermore, such a process should be as public as possible; the existence of every human system is related to this topic, and therefore it would not be wise to exclude anybody from something which is inherently related to the subject itself.
Another well-known dichotomy is the distinction between technical systems in the real world --robots-- and those which exist only in software --software agents, bots. Both kinds of systems can be seen as different instances of the same underlying structure, and both versions should be used in experiments. In this context one also has to mention the distinction between real living spaces and virtual spaces. Here, too, one should combine both possibilities in order to investigate a much more common structure.
All these considerations together are forming the background for the project called PlanetEarthSimulator.
7. The PlanetEarthSimulator Project
The PlanetEarthSimulator-Project (:= PES-Project) is designed to meet all the requirements mentioned above (cf. figure 7):
Figure 7: Overview of the PlanetEarthSimulator-Project
The PES-Project assumes human systems assisted by technical systems. The technical systems are programs located somewhere on servers, which can be connected to the real world via sensors and actuators. Within the PES-Project, the classical robot is a certain kind of extension; the robot is linked to the main programs by communication and is therefore a part of the whole system.
The technical systems are mainly organised either as worlds or as agents. A world is a simulation of some processes, which can be models of real-world processes. Disconnected from the real world, they act as virtual worlds. An agent is seen as a cognitive agent intended to approach human intelligent behavior as closely as possible. The main target is to enable these agents to learn and communicate in a human-like manner. The cognitive agents live exclusively within the worlds. They can communicate with the real world only insofar as the worlds are connected to the real world.
The plans of the worlds and cognitive agents are given by so-called LModels (:= Logical Models); these are formal descriptions of structures and processes which are assumed to represent world processes and cognitive structures as seen by everyday experience and by science. The primary and main authors of these LModels are human systems. But the cognitive agents, as well as some evolutionary mechanisms, can be allowed to change these models, controlled by some criteria of improvement. In this sense it is true that the technical systems will support their own improvement by themselves, controlled by human systems.
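The relationship between LModels, worlds, and cognitive agents can be sketched as a minimal skeleton (a speculative reading of the overview above, not actual project code; every class name and attribute is an assumption):

```python
# Speculative skeleton of the PES building blocks: LModels describe
# worlds and agents; a world hosts cognitive agents and advances them
# step by step. All names are hypothetical illustrations.

class LModel:
    """Formal description of a world or an agent (here just parameters)."""
    def __init__(self, name, params):
        self.name, self.params = name, params

class CognitiveAgent:
    def __init__(self, model):
        self.model, self.knowledge = model, []

    def step(self, percept):
        self.knowledge.append(percept)   # placeholder for a learning process

class World:
    def __init__(self, model):
        self.model, self.agents, self.time = model, [], 0

    def add(self, agent):
        self.agents.append(agent)        # agents live exclusively in worlds

    def step(self):
        self.time += 1
        for agent in self.agents:        # every agent perceives the world state
            agent.step(("t", self.time))

world = World(LModel("toy-world", {}))
world.add(CognitiveAgent(LModel("toy-agent", {})))
for _ in range(3):
    world.step()
print(world.time, world.agents[0].knowledge[-1])   # 3 ('t', 3)
```

In a fuller version the LModel, not the hand-written class body, would determine the agent's step behavior, so that changing the LModel -- by humans, agents, or evolutionary mechanisms -- changes the system.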
The technical systems will probably only be successful insofar as the human systems are sufficiently capable of learning what it means to learn.
The PES-Project is organised as an open-source project, not only with regard to the software used in the project but also with regard to the documents and the knowledge used within it. The heart of the project consists of several working groups whose communication is the heartbeat of the whole process. Ideally it should be possible for every human system on this planet to interact with the PES-Project at any time, to use the collected knowledge, or to contribute its own knowledge and experience. In practice this will probably remain for a long time a vision which is only more or less implemented.
The new Master program Intelligent Systems and Sustained Life of the University of Applied Sciences (Frankfurt am Main, Germany) will accompany this project, but it is an open project; everybody can participate.
8. Measuring the Success
Because the main motivation behind the PES-Project is a problem labeled complexity breakdown, it is important that --as part of the project-- models for the measurement of environmental complexity, as well as models for the measurement of the capacities of systems to handle complexity, will be developed and applied. Without such measurement models it will not be possible to judge any system. Therefore the development of a discipline like computational anthropology, accompanied by computational intelligence (cf. e.g. [PERLOVSKY 2004]) in general and computational semiotics in particular, could provide the main methodological framework for all the logical models.
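The shape of such a measurement model can be indicated with a deliberately crude sketch (entirely my own, hypothetical; the article does not propose this measure): environmental complexity taken as events arriving per time unit, system capacity as the maximum it can process, and their ratio as a breakdown indicator:

```python
# Hypothetical measurement sketch: the overload ratio of environmental
# demand to system capacity flags a potential complexity breakdown.
# Function names and the threshold of 1.0 are illustrative assumptions.

def overload_ratio(events_per_unit, capacity_per_unit):
    return events_per_unit / capacity_per_unit

def breakdown_risk(ratio):
    # ratio > 1.0: more arrives per time unit than can be processed
    return "breakdown" if ratio > 1.0 else "tractable"

print(breakdown_risk(overload_ratio(120, 100)))  # breakdown
print(breakdown_risk(overload_ratio(80, 100)))   # tractable
```

Real measurement models would of course have to replace the single scalar with structured measures of both environment and system; the sketch only fixes the logical form a judgement of "any system" would take.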
9. References

W. BALZER [1982], "Empirische Theorien: Modelle, Strukturen, Beispiele", Fr. Vieweg & Sohn, Braunschweig - Wiesbaden
W. BALZER/ C.U. MOULINES/ J.D. SNEED [1987], "An Architectonic for Science. The Structuralist Program", Reidel, Dordrecht
G.H. BOWER/ E.R. HILGARD [1981], "Theories of Learning", Prentice-Hall, Englewood Cliffs, NJ
A. DIX et al. [2004, 3rd ed.], "Human-Computer Interaction", Pearson - Prentice Hall, London - New York et al.
G. DOEBEN-HENISCH [2004], "The Planet Earth Simulator Project - A Case Study in Computational Semiotics", to appear in: IEEE Africon 2004 Conference, Sept. 15-17, 2004, Gaborone (Botswana)
G. DOEBEN-HENISCH/ L. ERASMUS/ J. HASEBROOK [2002], "Knowledge Robots for Knowledge Workers: Self-Learning Agents connecting Information and Skills", in: L.C. Jain/ Zhengxin Chen/ Nikhil Ichalkaranje (eds.), "Intelligent Agents and Their Applications" (Studies in Fuzziness and Soft Computing, Vol. 98), Springer, New York, pp. 59-79
Michael R. GAREY/ David S. JOHNSON [1979], "Computers and Intractability. A Guide to the Theory of NP-Completeness", W.H. Freeman and Company, San Francisco
Morton Ann GERNSBACHER (ed.) [1994], "Handbook of Psycholinguistics", Academic Press, San Diego et al.
E.R. HILGARD/ R.L. ATKINSON/ R.C. ATKINSON [1979, 7th ed.], "Introduction to Psychology", Harcourt Brace Jovanovich, New York et al.
Alan HUTCHINSON [1994], "Algorithmic Learning", Clarendon Press, Oxford
F. KLIX [1980], "Information und Verhalten. Kybernetische Aspekte der organismischen Informationsverarbeitung", VEB Deutscher Verlag der Wissenschaften, Berlin
Pat LANGLEY [1996], "Elements of Machine Learning", Morgan Kaufmann, San Francisco
N.J. MACKINTOSH (ed.) [1994], "Animal Learning and Cognition", Academic Press, San Diego et al.
L.I. PERLOVSKY [2004], "Integrating Language and Cognition", in: IEEE Connections, Vol. 2, pp. 8-13
W. SCHIFF [1980], "Perception: An Applied Approach", Houghton Mifflin Company, Boston et al.
G.M. SHEPHERD [1994, 3rd rev. ed.], "Neurobiology", Oxford University Press, New York - Oxford
B.F. SKINNER [1953], "Science and Human Behavior", The Free Press, New York et al.
L. STEELS [1997], "Language Learning and Language Contact", in: W. Daelemans/ A. van den Bosch/ A. Weijters (eds.), "Proceedings of the Workshop on Empirical Approaches to Language Acquisition", pp. 11-24, Prague
Richard S. SUTTON/ Andrew G. BARTO [1998], "Reinforcement Learning. An Introduction", The MIT Press, Cambridge
A.M. TURING [1948], "Intelligent Machinery", in: B. Meltzer/ D. Michie (eds.), "Machine Intelligence 5", Edinburgh