[PRL] structural operational semantics

Matthias Felleisen matthias at ccs.neu.edu
Tue May 23 18:35:28 EDT 2006


You figured it out, mostly.

My understanding of the history is this:

1. Plotkin '75 uses inference rules to specify two semantics for the 
syntax of the lambda calculus. He also uses the inference style to 
generate the equations of the two calculi.

For the semantics, he uses a restriction on the syntactic positions 
where inference rules can apply. That way, beta or beta-value can only 
be discharged at certain positions in the derivation tree. In 
particular, you can't discharge a beta or beta-value under a lambda.

For the calculus, he allows the derivation to proceed at arbitrary 
places. In particular, he allows calculations to proceed under a 
lambda:

    M --> M'
---------------
  \x.M --> \x.M'
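
Concretely, a small OCaml sketch of the contrast may help (just an 
illustration, not Plotkin's formulation; step_sem/step_calc are 
made-up names, and substitution is naive, with capture avoidance 
omitted). The first stepper discharges beta-value only in evaluation 
position, never under a lambda; the second also closes the relation 
under lambda, as in the rule above:

  type term = Var of string | Lam of string * term | App of term * term

  (* naive substitution; capture-avoiding renaming omitted for brevity *)
  let rec subst x v = function
    | Var y -> if x = y then v else Var y
    | Lam (y, b) -> if x = y then Lam (y, b) else Lam (y, subst x v b)
    | App (f, a) -> App (subst x v f, subst x v a)

  let is_value = function Lam _ -> true | _ -> false

  (* semantics: one standard-reduction step; never fires under a lambda *)
  let rec step_sem = function
    | App (Lam (x, b), v) when is_value v -> Some (subst x v b)
    | App (f, a) when not (is_value f) ->
        Option.map (fun f' -> App (f', a)) (step_sem f)
    | App (v, a) ->   (* the operator is a value here, so step the argument *)
        Option.map (fun a' -> App (v, a')) (step_sem a)
    | _ -> None

  (* calculus: the compatible closure also reduces under lambda,
     i.e. the rule  M --> M'  implies  \x.M --> \x.M' *)
  let rec step_calc = function
    | App (Lam (x, b), v) when is_value v -> Some (subst x v b)
    | Lam (x, b) -> Option.map (fun b' -> Lam (x, b')) (step_calc b)
    | App (f, a) ->
        (match step_calc f with
         | Some f' -> Some (App (f', a))
         | None -> Option.map (fun a' -> App (f, a')) (step_calc a))
    | _ -> None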

[1a. He then shows that beta generates the lambda calculus in a uniform 
manner (leftmost-outermost) and beta-value generates a DIFFERENT 
lambda calculus (same uniform rules). If you read this paper properly, 
he invites others to figure out something like this for other 
programming languages. And that's how I started my dissertation work.]

[1b. Barendregt (1978?) defines the reduction theories and equational 
theories in a similar manner, for plain lambda\beta\delta and related 
calculi. He also shows that one could use arbitrary contexts to 
generate the calculus from binary relations.]

2. Kahn, Milner, and others recognize (in the late 80s) that they need 
an alternative to denotational semantics. It's just not doing the 
expected things for them, especially now that they see how to specify 
types, type checking rules, and eventually type inference rules. The 
algorithmic implementation should then be a provably equivalent 
function. Somehow people think it is easier to read/write inference 
rules than recursive functions.

The idea comes about that one should specify the operational semantics 
in a manner that is similar to the type checking (inference, 
restoration) because this creates a "pleasant symmetry."

In a sense, this is just a straightforward expansion of Plotkin 75.

3. The catch is that everyone knows that "denotational semantics is 
structural" and therefore you can do "structural induction" if you want 
to prove things. At that point, people are thinking "structural 
induction" is the key to proving all language properties. In my first 
week in my first Indiana course in 1984, Mitch taught us structural 
induction. In the third or fourth week, I taught it (from his discrete 
structures book) to sophomores.

Unfortunately, at that point people think that operational semantics, 
as presented, isn't structural. Why? Because it uses substitution, 
which is evil: it generates new terms as the whole thing proceeds. See 
Plotkin '75.

4. Plotkin goes on sabbatical in 1980 and continues developing his idea 
of an operational semantics, which he presented informally at 
workshops. The idea: you specify transition systems but you engineer 
them very much like you engineer a denotational semantics (turn 
contextual information into accumulators: environments, stores, 
continuations, exception stacks, etc.). His discovery, as I recall from 
a transcript of one of those workshops, is that you can use inference 
rules with turnstiles to accumulate context:

  rho[x = e] |- e' -> v
------------------------   [The astute reader will notice that this is sloppy!]
  rho |- (\x.e)e' -> v

He says "you can shift the argument to the left of the turn style and 
thus preserve the structure."
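
To make this concrete, here is a minimal OCaml sketch of the 
environment-passing idea (a paraphrase, not the tech report's 
formulation; it uses closures, which patches up the sloppiness noted 
above): the argument's value is recorded in rho instead of being 
substituted into the body, so the judgment rho |- e -> v is defined by 
structural recursion on e:

  type term = Var of string | Lam of string * term | App of term * term

  type value = Closure of string * term * env
  and env = (string * value) list

  (* eval rho e  implements  rho |- e -> v  by recursion on the structure of e *)
  let rec eval rho = function
    | Var x -> List.assoc x rho
    | Lam (x, b) -> Closure (x, b, rho)
    | App (f, a) ->
        let Closure (x, b, rho') = eval rho f in
        let v = eval rho a in
        eval ((x, v) :: rho') b       (* rho'[x = v] |- b -> result *)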

The rest of his tech report (Paul alluded to it) describes how to deal 
with stores and concurrency (using labeled transition systems, etc.).

5. I discovered evaluation contexts in 1985, calling them "applicative 
contexts" at first. Initially it was just a technical device to show 
that all of these languages satisfy the Plotkin-75 "programme" 
(Gordon's words). The device helps prove Curry-Feys standardization 
theorems.
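
For concreteness, a small OCaml sketch of the device (an illustration, 
not the original formulation): evaluation contexts for the 
call-by-value LC are E ::= [] | E e | v E, a non-value program 
decomposes into E[(\x.e) v], and one standard step contracts the redex 
and plugs the contractum back into the hole:

  type term = Var of string | Lam of string * term | App of term * term

  let is_value = function Lam _ -> true | _ -> false

  (* naive substitution; capture avoidance omitted for brevity *)
  let rec subst x v = function
    | Var y -> if x = y then v else Var y
    | Lam (y, b) -> if x = y then Lam (y, b) else Lam (y, subst x v b)
    | App (f, a) -> App (subst x v f, subst x v a)

  (* a context is represented by the function that plugs its hole *)
  let rec decompose = function
    | App (Lam _, v) as redex when is_value v ->
        Some ((fun hole -> hole), redex)              (* E = []    *)
    | App (f, a) when not (is_value f) ->             (* E = E' a  *)
        Option.map (fun (e, r) -> ((fun h -> App (e h, a)), r)) (decompose f)
    | App (v, a) ->                                   (* E = v E'  *)
        Option.map (fun (e, r) -> ((fun h -> App (v, e h)), r)) (decompose a)
    | _ -> None

  (* one standard step: split into E[(\x.e) v], contract, plug back in *)
  let step t =
    match decompose t with
    | Some (plug, App (Lam (x, b), v)) -> Some (plug (subst x v b))
    | _ -> None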

Over the following two or three years, I developed many different 
semantics in this style, calling them evaluation contexts because I 
realized the idea had nothing to do with applications. It just happens 
that applications are the only evaluation mechanism in the LC. As I 
started exploring assignment and exceptions in addition to 
continuations, it became clear that we needed a general distinction: 
values and computations. [If I had known more math, I might have 
stumbled onto the computational calculus then; in a sense Bruce and I 
did, but we didn't know it until I met Curien.]

So the development of the calculi ran in parallel with the development 
of the semantics.

5a. As several of my papers, and especially the one at PARLE (Euro 
Sys), point out, the whole idea of developing a "calculation semantics" 
was that one can reduce many things in parallel -- all are provably 
equal to the E semantics and therefore correct.

5b. I also developed an evaluation context semantics for engines, using 
many holes in evaluation contexts. There is no need for an evaluation 
context semantics to be deterministic. I never published this. No time.

5c. Cormac and I published a related idea ten years later, at the 
suggestion of Bert Halstead: an evaluation context semantics for 
futures.

[I just saw that Richard is making this point in a parallel message.]

There is more to all of this but this is enough -- Matthias






