
Semantics, Behavior and Structure

Two persons talking about a "car" must have the same (or at least a very similar) understanding of what the term means. The meaning of the term is determined by its use in an operational context. A person's assumptions and expectations about the effects of this use express his or her understanding.

The key point here is that semantics has to do with hypotheses about things in action. A hypothesis reflects a person's capability to predict the effects of an operation on things. As a prerequisite, we have to demand that operations on things are causal, i.e. there is a dimension of time resulting in a temporal relation of cause and effect: the cause precedes the effect.

To formalize things, let's say that we have entities and operations, the operations providing an operational context for the entities. For the purpose of experimenting with the ideas presented, we use numbers as entities and functions as operations. A function (as you know it, e.g., from Scheme or Python) relates input to output, the output being the effect of processing the input by the function. The processing aspect of a function consumes time and enforces causality.

Let us take a function called "add" with the following signature:

add(number,number) -> number

The function takes two numbers as input and computes a number. Here, we define "number" to be a non-negative integer, i.e. an integer greater than or equal to zero.

Let's assume that we do not have any knowledge about what "add" actually does. From the signature definition we just know what a "number" is -- an infinite set of elements. The function "add" is the operational context in which the numbers are used. Anything we can learn about numbers is only in the context of "add". So, the question is: What is the semantics (the meaning) of "add"? What kind of knowledge does "add" unveil about numbers used in the "add" context?

The only way to get an answer to this is to play around with "add" and build up hypotheses about cause and effect, about input and output.

add(1,2) => 3

add(2,5) => 7

etc.

Typing in some operations makes you come up with a hypothesis called "sum". In every case you have checked, sum(x,y) turns out to deliver the same result as add(x,y). The more cases you test, the more confidence you gain in your hypothesis. If you have two competing hypotheses, prefer the simpler one; this principle is known as Occam's razor. A hypothesis that does not enable you to make a sound guess about the output produced for any kind of input is an incomplete hypothesis.
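To make this concrete, here is a minimal Python sketch of the experiment. The names "add" and "sum_hypothesis" are ours: "add" stands in for the black box we are probing (we pretend not to see its body), and "sum_hypothesis" is our guess:

import random

def add(x, y):
    # the black box under investigation; pretend this body is hidden
    return x + y

def sum_hypothesis(x, y):
    # our candidate explanation of what "add" does
    return x + y

# gain confidence by checking many randomly chosen cases
for _ in range(1000):
    x, y = random.randrange(100), random.randrange(100)
    assert add(x, y) == sum_hypothesis(x, y)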

Playing around a bit further, you might learn that add(x,y) always seems to be equal to add(y,x); a hypothesis you decide to call "'add' is commutative", meaning that "add" is not sensitive to the order of the input given. Of course, this hypothesis also becomes part of your understanding of "sum".

You might also formulate the hypothesis that 0 is a sort of neutral element, since you observe add(x,0) = x and add(0,y) = y.
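Continuing the sketch above (same "add", same import), both hypotheses become checkable properties:

for _ in range(1000):
    x, y = random.randrange(100), random.randrange(100)
    assert add(x, y) == add(y, x)   # commutativity: order does not matter
    assert add(x, 0) == x           # 0 is neutral as the second argument
    assert add(0, y) == y           # 0 is neutral as the first argument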

As you can see, learning is the process of formulating working hypotheses and gaining confidence in their likelihood of being correct. The result of learning is knowledge, an established set of hypotheses. Hypotheses reflect your internalized meaning of the things you observe in action.

Let's look at another function called "addd" (note that there is a third "d"). Its signature is

addd(number,number,number) -> boolean

A boolean is a value that can be either true or false.

As you might have guessed, there is a connection to "add". And in fact, "addd" behaves quite similarly. Let's give it a try:

addd(1,2,3) => true

addd(1,2,4) => false

addd(1,2,5) => false

addd(2,3,5) => true

The difference from "add" is that this function invites us to explore the structural relations of the input given, since only certain combinations of numbers are regarded as correct whereas others are not. Still, our hypothesis of commutativity with regard to the first two input parameters holds:

addd(x,y,z) = addd(y,x,z)

Also, our hypothesis of "sum" still seems to be correct:

addd(x,y,sum(x,y)) => true
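Again a sketch, continuing the Python experiment; "addd" below is our stand-in for the second black box:

def addd(x, y, z):
    # the second black box; once more we pretend not to see this body
    return x + y == z

for _ in range(1000):
    x, y = random.randrange(100), random.randrange(100)
    z = random.randrange(200)
    assert addd(x, y, sum_hypothesis(x, y))   # the "sum" hypothesis carries over
    assert addd(x, y, z) == addd(y, x, z)     # commutativity in the first two arguments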

The interesting part now is that we can easily formulate many more hypotheses about "addd" than about "add". In the case of "add", we formulated hypotheses about behavior; in the case of "addd", we come up with hypotheses about structure.

Here are some examples of hypotheses about "addd", all of which make statements about the relationships between the input numbers. For instance: Given two numbers x and z, we have a hypothesis "sub_y" such that

addd(x,sub_y(z,x),z) => true

Analogously, given two numbers y and z, we have a hypothesis "sub_x" such that

addd(sub_x(z,y),y,z) => true

Because of our hypothesis of commutativity, we can conclude that the hypotheses "sub_x" and "sub_y" are one and the same, which we could simply call "sub".
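Since "addd" only verifies, "sub" cannot be computed directly; it has to be discovered by search. A naive Python sketch (the bound z + 1 relies on our observation that with non-negative numbers no summand can exceed the sum z):

def sub(z, x):
    # search for the y that makes addd(x, y, z) true;
    # returns None if no such y exists (e.g. if x > z)
    for y in range(z + 1):
        if addd(x, y, z):
            return y
    return None

print(sub(5, 2))   # => 3, because addd(2, 3, 5) => true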

Another hypothesis is "pair"

pair(number) -> number, number

such that

addd(pair(z),z) => true

Take z = 3 as an example: "pair" could return either (1, 2), (2, 1), (3, 0), or (0, 3).
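A sketch of "pair" in the same style; this version simply returns the first decomposition the search encounters (note that in Python the call addd(pair(z),z) needs a "*" to unpack the two returned numbers):

def pair(z):
    # return some decomposition x, y with addd(x, y, z) => true
    for x in range(z + 1):
        for y in range(z + 1):
            if addd(x, y, z):
                return x, y

print(pair(3))            # => (0, 3), one of the four possibilities
print(addd(*pair(3), 3))  # => True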

The difference between "add" and "addd" is that "add" describes a behavioral relation of cause and effect on numbers, whereas "addd" describes a relation of cause and effect of a verification process. The verification process checks whether the way "addd" relates the given input fulfills certain properties. You were asked to formulate hypotheses about these properties!

That is, "add" is a behavioral and thus causal relation, while "addd" is a logical relation, behavioral just in the sense of making a logical statement after an input has been given. Functional relations describe behavior, logical relations describe structure. Behavior is a matter of time; structure is a matter of correctness. In other words, behavior has a time aspect, structure does not.

As can be concluded from the examples given, behavior can easily be transformed into a structural representation; the structure can then be investigated for behavioral hypotheses. However, as we have seen, many more hypotheses can be derived from structure, because structure does not have any inherent notion of causality. If causality is not a constraining factor, the space to explore and investigate broadens. Several hypotheses can be formulated which interpret the aspect of time differently within the structure. The hypothesis of "sub" can only be deduced from "addd", but not from "add"!

What all this shows is that behavior and structure are connected and that there is more knowledge to uncover from structure than from behavior.

[A side remark: If you think about implementing "addd" via "add" or vice versa, you'll end up with a backtracking mechanism systematically searching for combinations of numbers which solve the problem at hand. This is exactly how logic programming works, see e.g. Prolog.]
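A tiny Python analogue of that mechanism, sketched as a generator: it systematically enumerates candidate pairs and keeps exactly those that "addd" verifies, much like the Prolog query add(X, Y, 3) enumerates its answers:

def solutions(z):
    # backtracking in miniature: try every candidate pair (x, y)
    # and yield those that the verifier "addd" accepts
    for x in range(z + 1):
        for y in range(z + 1):
            if addd(x, y, z):
                yield x, y

print(list(solutions(3)))   # => [(0, 3), (1, 2), (2, 1), (3, 0)]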
