Engineering of Systems

Figure 2.7: The Ingredients of an Engineering Process for Learning Systems
\includegraphics[width=3.5in]{RoadmapToIntelligence.eps}

Prepared with all these considerations, we finally have to define a process, a 'road map' for organizing the development of assistive technical systems that can communicate with human persons at least as well as humans can. The condition 'at least as well as humans' may sound strange, but the logic behind this expression is the assumption that these new kinds of assisting systems shall be able to communicate as humans communicate. If these technical systems are able to do this, they will highly probably be able to do much more than human persons can do. This follows from the fact that the actual biological structure of human persons is not 'scalable', whereas an artificial technical system of the kind that can communicate will be. Gene technology could in principle change the bodily conditions of human persons, but until now many societies have laws inhibiting the further development of humans. These laws are backed up by so-called 'ethical rules'. From a philosophical point of view the 'ethical rules' used today are highly questionable; a discussion could reveal their deficiencies.

The upper part of figure 2.7 shows a simplified model of a systems engineering process, which will be assumed here as the main methodological framework for transferring a problem P into some real working system SYS. The problem P is introduced by some stakeholders and then processed by a team of experts, which is coupled to the stakeholders by communication. Communication can enable some flow of knowledge, but it also has several constraints which impose typical deficiencies on the intended content. The expert team is usually further connected to a community of experts by verbal communication, by training, by written documentation, by cooperation, by tools, etc. The experts have to identify the problem and analyze its implications as concrete requirements, resulting in a model of intended behavior $ M_{sr}$ including an intended system interface. Referring to this model of intended behavior $ M_{sr}$, they have to find a solution, first as a logical (conceptual) model (theory) $ M_{L}$, which can then be 'translated' into a real working system $ SYS$. To assure the quality of the resulting system $ SYS$, one has to control the correctness of the logical model $ M_{L}$ in a procedure called verification, and to control the correctness of the real system in a procedure called validation.
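The process just described can be sketched as a small pipeline. This is a minimal illustration, not part of the source text: all class and function names (Problem, BehaviorModel, analyze, etc.) are illustrative assumptions, and the verification/validation checks are placeholders for what would in practice be substantial procedures.

```python
# Sketch of the engineering process: P -> M_sr -> M_L -> SYS,
# with verification (of M_L) and validation (of SYS).
# All names are illustrative assumptions, not the author's notation
# beyond the symbols P, M_sr, M_L, SYS taken from the text.

from dataclasses import dataclass, field


@dataclass
class Problem:
    """Problem P as stated by the stakeholders."""
    description: str


@dataclass
class BehaviorModel:
    """Model of intended behavior M_sr, including the system interface."""
    requirements: list = field(default_factory=list)
    interface: list = field(default_factory=list)


@dataclass
class LogicalModel:
    """Logical (conceptual) model M_L, the 'theory' of the solution."""
    behavior: BehaviorModel


@dataclass
class System:
    """Real working system SYS implementing the logical model."""
    logical_model: LogicalModel


def analyze(problem: Problem) -> BehaviorModel:
    # The expert team turns the problem into concrete requirements.
    return BehaviorModel(requirements=[problem.description])


def design(m_sr: BehaviorModel) -> LogicalModel:
    return LogicalModel(behavior=m_sr)


def implement(m_l: LogicalModel) -> System:
    return System(logical_model=m_l)


def verify(m_l: LogicalModel) -> bool:
    # Verification: is the logical model correct w.r.t. M_sr?
    # (Placeholder check: at least one requirement is covered.)
    return len(m_l.behavior.requirements) > 0


def validate(sys2: System, m_sr: BehaviorModel) -> bool:
    # Validation: does the real system show the intended behavior?
    # (Placeholder check: the implemented model traces back to M_sr.)
    return sys2.logical_model.behavior == m_sr


p = Problem("assistive system communicating with human persons")
m_sr = analyze(p)
m_l = design(m_sr)
sys2 = implement(m_l)
print(verify(m_l), validate(sys2, m_sr))
```

The point of the sketch is only the ordering of the steps and the two distinct quality checks; any real verification or validation would replace the placeholder predicates.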

In the case of this booklet, our problem $ P$ is given by human persons, living in a world of tasks, who show remarkable properties of learning, planning, communication, etc. The intended systems shall be able to interact in a world with human persons. Thus we have to partition the world into an environment part and a systems part, the latter consisting at least of semiotic systems. An environment embraces at least 'positions' and 'objects' associated with positions. Objects can have further properties, and there are some rules/laws guiding the dynamics of the environment. Systems are recognized as a special kind of object. The systems will be further subdivided into learners and tutors. It is assumed that one can specify additional 'layers' of rules representing either 'games' or 'social systems'. See below for more details.
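The world partition just outlined can be made concrete as a small data structure. This is a sketch under assumptions: the names (Obj, SemioticSystem, Environment, rule_layers) are hypothetical and only mirror the distinctions in the text, namely positions, objects with properties, systems as a special kind of object subdivided into learners and tutors, and additional layers of rules.

```python
# Sketch of the world partition: an environment of positions and
# objects, systems as a special kind of object (learners and tutors),
# plus additional 'layers' of rules ('games' or 'social systems').
# All names are illustrative assumptions.

from dataclasses import dataclass, field

Position = tuple  # e.g. a grid coordinate (x, y)


@dataclass
class Obj:
    """An object located at a position, possibly with further properties."""
    position: Position
    properties: dict = field(default_factory=dict)


@dataclass
class SemioticSystem(Obj):
    """A system: a special kind of object, either a learner or a tutor."""
    role: str = "learner"  # 'learner' or 'tutor'


@dataclass
class Environment:
    """The environment part of the world."""
    objects: list = field(default_factory=list)
    # Additional rule layers representing 'games' or 'social systems'.
    rule_layers: list = field(default_factory=list)

    def systems(self) -> list:
        return [o for o in self.objects if isinstance(o, SemioticSystem)]


world = Environment(
    objects=[
        Obj(position=(0, 0), properties={"kind": "obstacle"}),
        SemioticSystem(position=(1, 0), role="learner"),
        SemioticSystem(position=(2, 2), role="tutor"),
    ],
    rule_layers=["game: find-the-object"],
)
print(len(world.objects), len(world.systems()))
```

The environment's dynamics (the rules/laws mentioned above) would act on such a structure; here only the static partition is shown.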

While the model of intended behavior $ M_{sr}$ as well as the logical model $ M_{L}$ are part of the theory, the implemented model $ SYS$ will be either a software simulation of such a system or an implemented robot-like system (e.g. an implemented intelligent wheelchair).

Gerd Doeben-Henisch 2013-01-14