The process starts with a problem of a stakeholder. Through a communication process, the systems engineer tries to understand the stakeholder's vision, desires, and needs, and to translate this understanding into a behavior model (2.1) that represents the complete expected behavior of the system to be designed.
As shown in fig. 2.2, three main modes of representation can be distinguished: (1) written text in normal (everyday) language, which is assumed to be the default mode; (2) a pictorial mode using pictures, diagrams, and figures as a 'language' to represent the intended domain of interest (the 'meaning'); (3) a formal model which can be processed by automatic verification procedures.
The logic behind this distinction is based on epistemology and semiotics. From semiotics (three of the founders of semiotics are Charles S. Peirce, Ferdinand de Saussure, and Charles Morris) we can learn that a sign requires at least three elements: a sign vehicle, something different which is intended as its object, and a meaning relation which is built up in an interpreter (the 'semiotic agent'). While the sign vehicle and the object can be empirical objects, the meaning relation is fundamentally non-empirical ('subjective'). Therefore, in every communication, the different semiotic agents have to assure a sufficient similarity of the meaning relations they use. The inevitable difference between the meaning relations of different semiotic agents (such as stakeholders and engineers) is often called the 'semantic gap' (cf. Doeben-Henisch/Wagner (2007) [34]).
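The semantic gap can be made concrete by treating each agent's meaning relation as a private mapping from sign vehicles to intended objects. The following is a minimal illustrative sketch only; the agents, terms, and the helper function are hypothetical and not part of the text:

```python
# Illustrative sketch: each semiotic agent holds a private meaning relation
# mapping sign vehicles (strings) to intended objects. Two agents can share
# the same sign vehicles yet interpret them differently -- that difference
# is the 'semantic gap'. All names here are hypothetical.

def semantic_gap(agent_a, agent_b):
    """Return the sign vehicles both agents use but interpret differently."""
    shared = agent_a.keys() & agent_b.keys()
    return {sign for sign in shared if agent_a[sign] != agent_b[sign]}

# Meaning relations of two agents: sign vehicle -> intended object.
stakeholder = {"fast": "responds within 1 s", "report": "printed summary"}
engineer = {"fast": "responds within 10 s", "report": "printed summary"}

print(semantic_gap(stakeholder, engineer))  # {'fast'}
```

The sign vehicle "fast" is openly perceivable to both agents, but the meaning relations behind it differ, which is exactly what a plain-text specification cannot reveal.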
From epistemology (citations to be inserted) we can learn that our brain takes the different sensor signals from the outside and the inside of the body and constructs an internal world model which is nearly three-dimensional and filled with object-like entities. Between these objects there implicitly exists a bundle of relations. Furthermore, this internal world model is continuously updated, and changes of its properties become 'visible'. This fundamental space-object-relation-change structure (also called the 'ontological structure', the 'meaning dimension', or the 'semantics') is directly reflected in every known human language (citations to be inserted).
If the engineers translate their understanding of the stakeholder's world model into plain text, then the different meaning relations of the stakeholder are not explicitly visible. Openly perceivable are only the sign vehicles used, but these can be interpreted very differently. Therefore, plain texts written in everyday language are a constant source of misunderstandings. One possible strategy to overcome this particular weakness of everyday language is the use of pictorial languages which represent the semantic space 'inside' the interpreters explicitly. Examples of such pictorial languages are the diagrams of the SysML language (cf. SysML OMG-2007 [149]). An overview of the main SysML diagrams is given in figure 2.3:
The diagram types especially used during this course are:
Although such diagrams (additionally supported by textual comments) can be helpful to minimize misunderstandings, they usually do not fit well if there is a need for (automatic) formal verification. In that case one needs a formal representation which can be processed by formal mechanisms (logical derivations) or, in the case of automatic processes, by a computer. In the latter case one uses either automatic proof procedures or model-based strategies based on the language of automata (cf. Doeben-Henisch (2009) [35]).
A direct connection between the diagram language of SysML and the language of automata can be found by using the state machine diagram of SysML. Such a diagram can be understood as a graph representing the structure of an automaton, which leads immediately to a formal representation.
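As a minimal sketch of this step (the states, inputs, and transitions below are assumptions for illustration, not taken from the text), a state machine diagram can be transcribed into a finite automaton and then processed mechanically:

```python
# Illustrative sketch: a finite automaton as it could be read off a SysML
# state machine diagram. States, input symbols, and transitions are
# hypothetical examples.

AUTOMATON = {
    "states": {"Idle", "Active", "Done"},
    "start": "Idle",
    "finals": {"Done"},
    # transition function: (state, input symbol) -> next state
    "delta": {
        ("Idle", "start"): "Active",
        ("Active", "work"): "Active",
        ("Active", "stop"): "Done",
    },
}

def accepts(automaton, inputs):
    """Run the automaton on an input sequence; True iff it ends in a final state."""
    state = automaton["start"]
    for symbol in inputs:
        key = (state, symbol)
        if key not in automaton["delta"]:
            return False  # no transition defined: reject the sequence
        state = automaton["delta"][key]
    return state in automaton["finals"]

print(accepts(AUTOMATON, ["start", "work", "stop"]))  # True
print(accepts(AUTOMATON, ["start", "stop", "work"]))  # False
```

Once the diagram is available in this form, model-based verification strategies can operate on the transition structure directly.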
Part of the behavior model will be a sufficient definition of the intended system interface (SI), which is needed to realize the necessary actions of the user.
Based on the behavior model, the systems engineer develops a system model (also called a functional specification) that fulfills the behavior required by the behavior model.
The system model is then converted into a real system.
The process of converting the stakeholder's problem (in the non-symbolic space) into formalized requirements (in the symbolic space), and the symbolic system model into the real system, cannot be fully automated, because full automation is restricted to the symbolic space. The challenge of relating symbolic and non-symbolic spaces with each other also occurs during validation, when non-symbolic objects are compared with a symbolic description.
The general structure of the behavior model BM can be described as a sequence of combined states q. A combined state q is defined by the driving task set T, the participating surfaces of the user called the user interface (UI), the intended system interface (SI), and the assumed environment interface (EI); thus q = (T, UI, SI, EI). A state change from a state q to a state q' is caused by an action a. Every sequence of states q0, q1, ..., qn for which it holds that each state qi+1 results from its predecessor qi through some action ai is called a usage process or, for short, a behavior of the behavior model. The complete set of all possible behaviors of BM is described by the generating function that maps a start state q0 into the possible usage processes ending in the final states or goal states. A complete behavior model can then be defined as the set of combined states together with the start state, the goal states, the actions, and this generating function.
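An executable sketch of this structure (a minimal illustration under assumed names; the concrete states, actions, and the enumeration strategy are not the author's formalization) represents combined states as labels, actions as labeled transitions, and the generating function as an enumeration of all usage processes from the start state into the goal states:

```python
# Illustrative sketch: combined states are opaque labels standing for
# tuples (T, UI, SI, EI); actions are labeled transitions between states.
# The generating function enumerates all usage processes ending in a goal.

TRANSITIONS = {
    ("q0", "insert_card"): "q1",
    ("q1", "enter_pin"): "q2",
    ("q2", "withdraw"): "q3",
    ("q2", "cancel"): "q3",
}
GOALS = {"q3"}

def usage_processes(start, transitions=TRANSITIONS, goals=GOALS):
    """Generating function: all action sequences leading from `start`
    into a goal state (depth-first enumeration, no revisited states)."""
    results = []

    def walk(state, path, seen):
        if state in goals:
            results.append(path)
            return
        for (src, action), dst in transitions.items():
            if src == state and dst not in seen:
                walk(dst, path + [action], seen | {dst})

    walk(start, [], {start})
    return results

for process in usage_processes("q0"):
    print(process)
```

Each printed list is one usage process, i.e. one behavior of the behavior model; the set of all of them is the complete behavior described by the generating function.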
The constraints induced by the systems engineering process challenge the systems engineer to specify the required properties of a system in terms of its observable behavior, including the interactions with the users and the environment. A nontrivial aspect of this modeling is the interpretation of the task set T, at least by the user. This presupposes that a single task is given as some string written in some language which can be interpreted by the user. Usually this interpretation is not part of the behavior model. But with regard to the training and testing of users it could be necessary to include a complete specification of the language as well as its intended interpretation by the user. The semantics of the language has as its domain of reference the complete behavior model BM. Besides this it has to be noted that in the case of intelligent systems one has to assume that the behavior labeled 'intelligent' is a subset of the general behavior of BM.
Gerd Doeben-Henisch 2012-12-14