HLI: An Impossible Task: Let Us Begin

Start of Document: March 19, 2006

Last Changed: March 20, 2006
Status of Document: First considerations; draft

First Words

The task of constructing a General Theory of Human Learning & Intelligence (GTHL+I) is at the present time --and probably for many more years-- strictly speaking impossible. The reasons for this impossibility are manifold. They will inevitably show up in the following pages and chapters. Knowing these problems --and knowing that many others are certainly hidden within this task-- I have nevertheless decided to begin. You will perhaps ask: why?

The reasons behind my decision are manifold. A few are rooted in my personal biography; others --and these are the more important ones-- are rooted in science and in how science works today.

Personally, I have been interested in the phenomenon of human learning and intelligence since at least 1970. For several reasons (see my short biography) I could not stay in only one paradigm of thinking; I was challenged to work in at least 8 main paradigms (theology, philosophy, experimental psychology, phonetics, theoretical linguistics, philosophy of science, mathematical logic, computer science), partially in parallel, partially sequentially. And nearly all of these main paradigms can easily be broken down into further types of researching and thinking. Additionally I worked with and in teams from even more disciplines of different shapes, thereby learning something of their thinking and working patterns.

For the construction of a GTHL+I this is both good and bad. It is bad because, in such a process of changing paradigms, it is nearly impossible to reach the sophistication that is the 'normal' standard within a single discipline narrowed down to only a small fraction of questions and methods, as is typical of many activities in today's empirical sciences. But such diversity also has advantages: a GTHL+I cannot be built solely on a small fraction of the empirical sciences. There is an inevitable necessity of combining all disciplines within one general framework.

From outside of science, people often think that 'the sciences' will do this job automatically, because one presupposes that 'science as such' targets the 'whole truth', and that the whole truth is given in the 'unified view' of all the knowledge acquired in all the different disciplines. But this assumption is mostly wrong. Science as such is not interested in the 'whole truth'. The sciences are fragmented into hundreds --or even thousands-- of small areas of research, which are mostly disconnected from each other. Someone may argue that physics is a discipline which does some unifying work, but physical theories are so far away from learning and intelligence that they are useful for this task only to a very limited extent (although in a certain area, as we will see, physics is extremely important!).

From these first remarks one can conclude that the task of constructing a GTHL+I cannot start with an explicit, full-fledged theory. Nevertheless, to be able to start we will need at least a vision of the possible shape of such a theory and an idea of a possible process for approaching it. Ideally we should have --already at the beginning-- an understanding of how learning works. This sounds paradoxical, and it is. In the formal sciences you already have to know what you want to prove. But in the empirical sciences you have no a priori knowledge about your target when you start. Nevertheless there must somehow exist some implicit possibility that we are able to construct new knowledge starting from a situation of not yet knowing the target. If not, then science would be impossible altogether. The historical experience so far encourages us to assume that --at least-- the human species has such a 'hidden ability' to start from not-knowing and to reach, at some time in the future, a situation where we 'believe' that we 'know more than before'. The human species has collected, in the course of time, some cultural knowledge of how a person or a social group should 'behave' in order to expand knowledge. But a complete understanding of how this 'really works' is still lacking.

It is the task of the GTHL+I to describe this sufficiently well.

Some Words About Science

A complete GTHL+I would explain, why science is possible and how science works. But science is working before we have such a complete GTHL+I. How is this possible?

This paradoxical fact can be a hint that the ability to expand knowledge is to some degree 'built in' to the human species. To a certain degree we can learn before we know what learning is. There is some analogy to the fact that we are 'living systems' without completely knowing what 'life' is, how it works, or how it came into being (although there is meanwhile a common opinion that we have gained, in the last 100 years, a lot of knowledge to understand this a bit).

Thus 'knowledge' at a certain point in time is always bound to a process where humans interact with their environment and with themselves to get some 'picture' of dynamical structures, based on their built-in ability to generate those pictures, to gain knowledge. In this view knowledge is always 'intermediary', some state of a dynamic system where the 'coordinates of the whole system' can change. Science itself is a 'process about a process': an ongoing activity of interactions with a process whose nature has to be 'revealed'. A description of the scientific process is therefore --strictly speaking-- itself only a kind of approximation.

In a simplified view one can say that science is an endeavor of a subgroup of humans within the human species starting with some preliminary view of a possible domain of investigations.

Figure: Some factors of the scientific process (Simplified!)

The starting knowledge is not the final knowledge, but it is the point of departure. You always have to start with something, without explicit knowledge of whether it will make sense in the future or not. Your only chance is the following process.

The group of researchers --probably also including the engineers-- can only work if they have a minimal kind of communication, enabling a minimal sharing of ideas and the coordination of their behavior. Although we do not yet have a real theory of communication available, we are communicating, and through communication we are organizing the process of science. There are also many interactions between the social environment of the scientists and the group of scientists.

What differentiates science from non-science is the claim that the facts of science can be established by measurement procedures which are sufficiently independent of the internal subjective states of the participating scientists; they must be repeatable by everybody who follows the measurement procedure. This implies that all these measurements are sufficiently well described. The results of the measurements have to be stated in some appropriate measurement language (the 'usual' language or some specifically introduced language).

Because measured facts are bound to certain isolated places in time and space, they cannot by themselves reveal relations, dynamics or laws. These more general aspects have to be introduced explicitly by the researchers in an explicit description called a theoretical explanation or theory. The most explicit forms of such explanations are written down as formal theories, using languages and formal structures developed by mathematics and formal logic. Thus the 'content' of empirical facts is not given in the measurements as such but by the explanations given by the researchers in the form of theories. Without these theories the facts have no 'meaning'. They explain nothing.
But, as we know today, the interactions between empirical data and explaining theories, as well as the properties of formal theories as such, are far from trivial and are the source of numerous problems, errors and fallacies. Indeed, there are results from metamathematics and the philosophy of science which argue that a complete and error-free theory process is beyond the scope of human intelligence as it is known today.

Furthermore, the final goal of science is usually not the constructed explanation, not the theory as such, but some kind of prediction: what follows, according to the scientific explanation, from a known state S of the domain under investigation for a yet unknown state S' in the future? Can we predict some behavior? How certain is this prediction? Can we use this knowledge for some practical applications? Can we improve our life? Such predictions are possible if the explanation is given as a formal theory which allows inferences (defined in some kind of logic). Today many different kinds of logic are known. Without a formal theory it is not clear what is meant by 'inference' and whether a certain kind of inference is 'valid'. An inference can be a single statement, or it can include sets of sequences of possible outcomes generated by algorithms used for modeling certain portions of the theory. Formally, a model is never a theory, but a theory can determine whole sets of models. Predictions can partially be tested by comparing them with reality. If there is a sufficient 'congruence/similarity' between the measured reality and the predictions of a theory, then the researchers are inclined to speak of a confirmation of their explanations. Otherwise it remains open: is the theory wrong, or did we only make bad measurements? If a theory is too general, it will never be changed, because it will never become 'really wrong'. Such theories are good for ideologies and power play, but not for science.
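The relation just sketched --a theory determining whole sets of models, models generating predictions, predictions compared with measured reality-- can be illustrated with a minimal sketch. All names and the toy 'theory' are hypothetical illustrations, not an actual scientific theory:

```python
# Minimal sketch of the theory -> model -> prediction -> test relation.
# The toy "theory" is a family of linear laws; real theories are far richer.

def theory(params):
    """A formal theory: a family of models indexed by parameters.
    Here a toy linear law: predicted S' = a*S + b."""
    a, b = params
    def model(state):
        # A concrete model: one prediction rule derived from the theory.
        return a * state + b
    return model

def confirmed(predicted, measured, tolerance):
    """Predictions count as confirmed if sufficiently 'congruent'
    with the measured values (within a tolerance)."""
    return all(abs(p - m) <= tolerance for p, m in zip(predicted, measured))

# One theory determines many models (different parameters).
m1 = theory((2.0, 0.0))
m2 = theory((2.0, 0.5))

states = [1.0, 2.0, 3.0]
measured = [2.1, 3.9, 6.05]   # hypothetical measurement data

print(confirmed([m1(s) for s in states], measured, tolerance=0.2))  # True
print(confirmed([m2(s) for s in states], measured, tolerance=0.2))  # False
```

Note that the `tolerance` parameter makes the point about overly general theories concrete: with a tolerance large enough, every model is 'confirmed' and the theory can never become 'really wrong'.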

The Shape of Intelligent Life

To start a process for the construction of a GTHL+I, we begin with the currently available assumptions about the historical conditions of human learning and intelligence.

In the overall picture we know that the phenomena of learning and intelligence are bound to human persons, who have to be understood as members of a population which departed about 3.5 billion years ago from a biological structure called the 'cell'. This cell is already a highly complex structure whose emergence from simpler structures is still unclear. There was a complex process before the emergence of the cell, lasting hundreds of millions of years, and the interaction of the cell with its environment during the subsequent 3.5 billion years is also stunning.

Figure: Some bits of the evolutionary conditions of intelligence

The important point here is that one has to understand that the individual as such is nothing! An individual as such is not able to contribute to the phenomenon of life on its own. The 'real thing' is the population, which enables the inheritance of information from one generation to the next, where this information can change. And indeed, as we have learned, it is usually not only one population but a set of subpopulations, or populations in parallel, which serve as a multi-threaded stream of processes, where some populations died out because the environment was too dangerous and some others 'survived' because they were 'adapted' to the critical conditions.

From this it follows that the structure of life is not completely fixed; it is dynamic. And between the birth and death of one individual member of the human population, the process of 'growing' from a cell to a whole organism is at all times subject to an ongoing interaction between the system and its environment, which can modify nearly everything that is 'planned' according to the inherited information. After birth, which happens after a dramatic process of growth and interaction, this dynamic development continues and will not stop before the final bodily death. We know meanwhile that even 'elderly, aged' people are able to learn in the full sense; the brain never loses its plasticity as long as it works! As long as the system is active, the brain will work!

Because the ability to learn, to communicate, to be intelligent, is --strictly speaking-- a property of 'matter' itself, we have to rethink the nature of matter. There is not one kind of matter without intelligence and another kind with it; matter as such reveals an inner structure which is the space of every kind of life and intelligence we have become acquainted with so far. Human knowledge is the medium within which matter is 'looking at itself', like a mirror. And more than this: because humans can act based on knowledge, we can say that matter, mediated by human knowledge, can change itself.

Guessing The Whole

From where should we start? Not yet knowing the whole picture, we have to start with some preliminary, partial knowledge. We have to try to relate the partial projects to each other as far as possible, so as not to miss the intended target too badly. But because we lack the explicit knowledge, we have to make a guess.

A basic classification of the whole field could be made with regard to the basic kinds of data which are available.

Figure: Basic Dimensions of Data

If we are interested in the learning of human agents as part of a population within some environment, we generally have three options:

1. 3rd-Person view: we look from the outside of some agent onto its behavior, which is given as input to (= stimulus, S) and output from (= response, R) the system. Generally this yields behavioral data (= S-R) as a sequence of S-R pairs, where an S can be a whole set of stimuli and an R can likewise be a whole set of responses.

2. Within the 3rd-Person view we can distinguish, as a certain subset, neurophysiological data (= N), which can be gained by observations of the neural system of the agent. Within these data numerous levels of system abstraction are conceivable: the whole brain, certain brain areas, much smaller subareas, modules, nuclei, neurons, synapses, membranes, etc.

3. There is also a 1st-Person view, the primary source of knowledge for human agents. This yields the phenomenological data (= P), which as a whole are not empirical in the modern sense. But one has to note that the empirical S-R and N data are --from the point of view of knowledge-- a true subset of the phenomenological data! They differ from the other phenomenological data in that they are defined by certain constraints which must be fulfilled for them to be called empirical data.
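The three kinds of data above can be sketched as a minimal data model. All class and field names here are hypothetical illustrations, not an established formalization:

```python
# A minimal data model for the three kinds of data distinguished above:
# behavioral S-R data, neurophysiological N data, phenomenological P data.
from dataclasses import dataclass
from enum import Enum

class View(Enum):
    THIRD_PERSON = "3rd-person"    # behavioral S-R data
    NEURO = "3rd-person/neuro"     # neurophysiological N data (a subset)
    FIRST_PERSON = "1st-person"    # phenomenological P data

@dataclass
class Datum:
    view: View
    content: str

@dataclass
class SRPair:
    """Behavioral data: a stimulus set paired with a response set."""
    stimuli: frozenset
    responses: frozenset

# A behavioral observation is a sequence of S-R pairs; an S or an R
# can be a whole set, as stated in the text.
episode = [
    SRPair(frozenset({"light on"}), frozenset({"press lever"})),
    SRPair(frozenset({"tone"}), frozenset({"press lever", "turn head"})),
]

# A phenomenological datum, by contrast, is bound to the 1st-person view.
p_datum = Datum(View.FIRST_PERSON, "felt surprise at the tone")
print(len(episode), p_datum.view.value)
```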

Today we have the working hypothesis that the phenomenological space is an 'effect' of the brain. An explicit explanation of this hypothesis has not yet been given. The accompanying interesting question is why the whole system of a human agent needs the phenomenological experience as part of the brain. A first guess could be that consciousness is a kind of interface of the system 'into itself', a kind of macro level for a certain kind of communication and planning. Without the level of consciousness it would be nearly impossible for the system to do this. But, as stated, this vague idea has to be clarified more concretely.

Traditionally, these data are distributed over the disciplines as follows:

    S-R data        Experimental Psychology, Ethology
    N data          Brain Sciences
    P data          Phenomenology, Psychology, Philosophy, Cultural Sciences
    S-R-N-P data    Classical Psychology; everyday knowledge
For a whole picture we should incorporate all data; thus we should have S-N-P-R, although there is the difficulty with the P data, which are only partially empirical. But because consciousness seems to be one of the most revolutionary achievements of brain development, one should try to find ways to incorporate the P data.

Very important is the continuous interaction of the whole system with the environment (including other human agents), as well as the property of growth and development. Not only does the structure of the brain change steadily from the first few days of its existence; the 'contents' of the brain, the brain states representing several kinds of different information, change constantly as well.

Organizing a Process

Knowing all this, there is the interesting question of how one should start a real process of investigation (including engineering).

The first postulate is a methodological one: we claim that one should work in a 'continuous sequence of full scientific cycles', i.e. the main theory T (see the figure 'Some factors...' above) has to be established and improved by a sequence of full scientific cycles, each of which consists of the following steps:

Full Scientific Cycle (FSC)

1. Providing data D (S-R, N, P, or correlations of these) guided by some working hypothesis (possibly by extending an already existing set of data) from a certain environment E; usually done by explicit measurement

2. Elaborating a minimal formal theory T (or modifying an existing one)

3. Elaborating at least one feasible formal model M based on T

4. Implementing the model M in a real working system S

5. Providing a test case EXP for the system S in a certain environment E

6. Gaining test-data by measurement as a subset D' of the data set D

7. Evaluating the test data with all the available data and the assumed theory T

8. Confirming or criticizing theory T
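The eight steps of one Full Scientific Cycle can be sketched as a loop. All functions here are hypothetical stubs over toy numbers; a real cycle would of course involve actual experiments and formal theories:

```python
# A sketch of one Full Scientific Cycle (FSC) following the eight steps above.
# Everything here is a hypothetical stub, not a real scientific workflow.

def measure(environment):
    """Step 1: provide data D from environment E by explicit measurement."""
    return [environment(x) for x in range(5)]

def elaborate_theory(data):
    """Steps 2-3: elaborate a minimal theory T and a model M based on it.
    Toy version: fit a constant-offset law y = x + b to the data."""
    b = sum(y - x for x, y in enumerate(data)) / len(data)
    return lambda x: x + b          # the model M implementing theory T

def run_experiment(model, environment, test_points):
    """Steps 4-6: implement M in a system S, provide a test case EXP,
    and gain test data D' by measurement."""
    predicted = [model(x) for x in test_points]
    observed = [environment(x) for x in test_points]
    return predicted, observed

def evaluate(predicted, observed, tolerance=0.1):
    """Steps 7-8: evaluate the test data and confirm or criticize T."""
    return all(abs(p - o) <= tolerance for p, o in zip(predicted, observed))

env = lambda x: x + 3               # the 'unknown' environment E
model = elaborate_theory(measure(env))
predicted, observed = run_experiment(model, env, [10, 20])
print(evaluate(predicted, observed))  # True: theory confirmed on this cycle
```

On a real project the loop would be run repeatedly, each cycle modifying the data set, the theory, and the model; confirmation on one cycle is, as the text says, no guarantee.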

If these cycles are done well, it could be (there is no guarantee!) that this will lead to some useful theory and working models in the future.

The question remains with which working hypothesis and with which kind of data we should start.

A negative example is usually experimental psychology, where the researchers produce lots of isolated data through experiments but usually do not work out theories with working models.

A positive example can be found in the kind of developmental emergent robotics which tries to understand human agents by reconstructing working systems (= robots, humanoids) based on real empirical data of human cognition and learning (see e.g. http://www.er.ams.eng.osaka-u.ac.jp/index-eg.html).

Because the research environment for us is actually restricted to the realm of our intelligent systems master program, we have to define some work packages which are feasible under these constraints, enriched by cooperations with other places of research and application. Hence, with what kinds of topics should we start?

There is another positive example within the field of artificial intelligence and robotics which is related to the framework of developmental emergent robotics but is a bit more restricted in its ongoing tasks; it can be seen here: http://www.isc.cnrs.fr/dom/dommenu.htm

Figure: Human-Machine Interface

For our Master Program there is one natural option: to concentrate on the interface between an intelligent system and the environment, especially between human agents and technical systems S. This interface should function according to the way the interface between human agents works. This relates to learning situations, to communication, as well as to typical patterns of interaction. The main kinds of perception here are:

1. Spoken Language (SR: Speech Recognition)

2. Vision (V)

3. Tactile (Tc)

Besides this we have reactions/responses (R) from the system, which are realized in the following manner:

1. Spoken Language (SS: Speech Synthesis)

2. Generated Images (I)

3. Movements caused by actuators (M)
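The two lists of modalities above can be sketched as a pair of enumerations. The abbreviations follow the text (SR, V, Tc; SS, I, M); the class names are hypothetical:

```python
# The perception and response modalities of the human-machine interface
# listed above, expressed as two enumerations.
from enum import Enum

class Perception(Enum):
    SPEECH_RECOGNITION = "SR"   # spoken language input
    VISION = "V"
    TACTILE = "Tc"

class Response(Enum):
    SPEECH_SYNTHESIS = "SS"     # spoken language output
    GENERATED_IMAGES = "I"
    ACTUATOR_MOVEMENT = "M"

# An interaction step couples what the system perceives with how it responds,
# e.g. a spoken question answered by synthesized speech.
step = (Perception.SPEECH_RECOGNITION, Response.SPEECH_SYNTHESIS)
print(step[0].value, step[1].value)  # SR SS
```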

These topics constitute the framework within which we should try to establish our research projects. And it seems that the work of Peter F. Dominey (Lyon) is a good starting point for the immediate work, one which is compatible with the long-range framework of Minoru Asada (Osaka). This does not exclude cooperation with other lines of research or with industry.