[PRL] Bob Harper's new blog

Joe Marshall jmarshall at alum.mit.edu
Thu Mar 17 13:20:28 EDT 2011


> On Mar 17, 2011, at 11:44 AM, will at ccs.neu.edu wrote:
>>
>>    In current OO languages, loops are expressed without using calls.

On Thu, Mar 17, 2011 at 9:27 AM, Matthias Felleisen
<matthias at ccs.neu.edu> wrote:
>
> I don't understand this reason.

This takes a *huge* burden off the compiler because it requires the
programmer to manually optimize tail calls into forms that are
trivial to compile.  The remaining calls are much more likely to be
`hard to compile efficiently' for any compiler, so the compiler
writers are able to claim `we perform well on function calls' when
in fact they completely punt on the easy cases and make the
user do the work.
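To make the point concrete, here is a hedged sketch (names are mine, not from the thread) of the same computation written as a tail call and as the hand-converted loop the language effectively forces on the programmer; javac does not eliminate the tail call, so deep inputs overflow the stack in the recursive form:

```java
class TailCallDemo {
    // Tail-recursive sum of 1..n.  The recursive call is in tail
    // position, but Java compilers do not optimize it away.
    static long sumRec(long n, long acc) {
        if (n == 0) return acc;
        return sumRec(n - 1, acc + n);  // tail call, not eliminated
    }

    // The `manually optimized' form: the programmer has rewritten
    // the tail call into a loop, which is trivial to compile.
    static long sumLoop(long n) {
        long acc = 0;
        while (n > 0) {
            acc += n;
            n--;
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(sumRec(100, 0));  // 5050
        System.out.println(sumLoop(100));    // 5050
    }
}
```

The two functions compute the same thing; the difference is purely who did the tail-call-to-loop transformation, the compiler or the programmer.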

(As an example, in Scheme it is easy to write a tail call to
a statically known procedure where the dynamic environment
is not known until runtime.  Basically, you make a tail call
within the same procedure but across lexical scopes.  Good
lisp compilers handle this efficiently.  You can't even express
this in Java.)

>
> For all we know a 'applicative' OOPL could be equipped with the
> same kind of analysis an FPL has to compute call targets. Perhaps
> this is a worthwhile research topic and -- if successful --
> would help promote 'applicative' programming and -- if a failure --
> would scientifically establish the inferiority of a certain class
> of OOPLs.

The dynamic dispatch of a Simula-67-style OOPL is much more
restricted than a general procedure call.  The signature of the
target method is known (and checked) at compile time.  Furthermore,
the method name is always a static literal.

Basically, a method call like this:
    foo.callMethod (arg1, arg2)
is most generally equivalent to this:
    (apply (find-method (runtime-class foo) 'callMethod) foo (list arg1 arg2))

But since the runtime class of foo can be approximated at compile time,
and the signature of callMethod is known at compile time, this can usually
be optimized to
    ((vector-ref (method-table foo) <constant>) foo arg1 arg2)
and the vector-ref and method-table lookup can be memoized or, if the
class hierarchy isn't too confusing, simply optimistically compiled:
    (if (eq? (runtime-class foo) <compilers-best-guess>)
        (<expected-method> foo arg1 arg2)
        ((vector-ref (method-table foo) <constant>) foo arg1 arg2))

Almost *all* the time, the compiler's best guess for the runtime-class
is provably correct or extremely likely to be correct, so the dispatch
is `efficiently compiled'.

Since this is the `meat and potatoes' of this style of programming,
the compiler does a pretty good job.  But as you can see above,
the restrictions naturally imposed on the user by the language make
it easy for the compiler to treat these as an extraordinarily fruitful
`optimization'.

To model a first-class procedure in this style takes more machinery.
You need to define an interface that is shared between the caller
and the callee, implement the code body as a `virtual function' on
that interface, create a class to hold the captured lexical variables
(the compiler will do this for you if you use an `anonymous inner
class', but it is done nonetheless), instantiate that class at the
call site, and transfer control to the higher-order function that
uses this procedure.  The higher-order function will presumably be
called from multiple call sites, so all call sites have to construct
the `closure' in a generic manner so that the HOF can destructure it
when the closure is invoked.
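The machinery above can be sketched as follows (the interface name `Func1` and the `map` HOF are hypothetical, invented for illustration, not part of any standard API):

```java
// Hypothetical shared interface between caller and callee.
interface Func1 {
    int apply(int x);
}

class ClosureDemo {
    // The higher-order function: it receives the `closure' generically
    // and destructures it by invoking the interface method.
    static int[] map(Func1 f, int[] xs) {
        int[] ys = new int[xs.length];
        for (int i = 0; i < xs.length; i++) {
            ys[i] = f.apply(xs[i]);
        }
        return ys;
    }

    public static void main(String[] args) {
        final int delta = 10;  // the captured `lexical variable'
        // The anonymous inner class is the compiler-generated class
        // holding the captured variable; `new' at the call site
        // instantiates the closure before transferring to the HOF.
        Func1 addDelta = new Func1() {
            public int apply(int x) { return x + delta; }
        };
        for (int r : map(addDelta, new int[]{1, 2, 3})) {
            System.out.println(r);  // 11, 12, 13
        }
    }
}
```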

This kind of `unknown' call is hard to optimize, and cannot be
optimized in the fully general case.  However, a `good' Lisp compiler
can detect a number of useful special cases and can often do
a surprisingly decent job of compiling them.  I've never seen a
Java compiler do *anything* with this kind of call except to
invoke `apply' at runtime.

And you cannot program effectively with higher-order functions
in Java; the type system is just too weak.  You can simulate
closures with classes, but if you attempt to do anything much
more difficult than mapcar or fold-left, you'll discover that the
Java type language cannot express the necessary constraints.
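For concreteness, here is roughly what even the `easy' case looks like: a generic fold-left, with a hypothetical two-argument interface (`Func2` is my invention, not a standard API; a sketch of the expressible case, not a claim that harder ones work):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical two-argument function interface, generic in the
// accumulator type A and the element type B.
interface Func2<A, B> {
    A apply(A acc, B x);
}

class FoldDemo {
    // fold-left: threads the accumulator through the list.
    static <A, B> A foldLeft(Func2<A, B> f, A init, List<B> xs) {
        A acc = init;
        for (B x : xs) {
            acc = f.apply(acc, x);
        }
        return acc;
    }

    public static void main(String[] args) {
        Integer sum = foldLeft(new Func2<Integer, Integer>() {
            public Integer apply(Integer acc, Integer x) { return acc + x; }
        }, 0, Arrays.asList(1, 2, 3, 4));
        System.out.println(sum);  // 10
    }
}
```

Even this simple combinator costs an interface declaration and an anonymous-class allocation per closure; constraints beyond what one or two type parameters can carry are where the simulation breaks down.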

So although an OOPL compiler could be equipped with the kind of
analysis you describe, the language itself is so constraining
that you cannot easily express the constructs that could take
advantage of those techniques.


>
> Either way it sounds like a worthwhile exercise for scientists.
>
> Warning: purely social conventions will probably ensure that this
> research is irrelevant for the future of compilers.
>
> -- Matthias
>
>
>
>
> _______________________________________________
> PRL mailing list
> PRL at lists.ccs.neu.edu
> https://lists.ccs.neu.edu/bin/listinfo/prl
>



-- 
~jrm


