contification paper

Matthew Fluet <fluet@CS.Cornell.EDU>
Sat, 17 Feb 2001 18:09:15 -0500 (EST)


> So, when I wrote
> 
> > How about we just use Reach in the definition of safety.  I.E.
> > 	* if not(Reach(f)) then A(f) = Uncalled
> >
> > After all, reachability is a trivial concept and any sane analysis would do this 
> > (okok I know an old implementation of the contifier didn't :-).
> 
> I was talking about the analysis phase, not the transformation phase.  At some
> point you found that the old cont analysis did not meet this safety criterion.

Yes.  Actually, neither the old call analysis nor the old cont analysis
met this definition of safety.

> When you said this
> 
> > Actually, neither does the new version I wrote up.  Which does mean that
> > it could be potentially cyclic:
> 
> I believe you were referring to the transformation phase not being able to
> handle analyses that didn't meet the above safety criterion.

Actually, I meant both.  The new versions of call and cont that I wrote
(essentially the same as the old versions, except that they use the RCont.t
datatype instead of the CallInfo.t and ContSet.t datatypes) worked just
like the old ones, and would not necessarily be safe.

In addition, the new transformation can "fail" in one of two ways when the
input analysis is not acyclic.  Recall that I build the following graph to
guide the processing of nodes:

 Node = Jump U Fun
 Edge = {(f, g) | A(g) = f} U
        {(j, g) | A(g) = j}

So, if this graph is cyclic, it's because of something like:

fun f () = ... g () ...       ;    A(f) = g
fun g () = ... f () ...       ;    A(g) = f

and the cycle is discovered when I do a DFS on the graph.
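
To make that concrete, here is a rough sketch in SML of the graph and the
DFS cycle check.  It is not the actual contify.fun code and all the names
are made up -- nodes are just strings and the analysis is an association
list -- but it shows the two examples in this message behaving as described:

(* A rough sketch: the analysis A is a list of (g, A(g)) pairs, and each
   pair contributes the edge (A(g), g).  A DFS looks for a node that
   reappears on the current path, i.e. a cycle. *)
fun successors (edges, n) =
   List.map (fn (_, dst) => dst)
            (List.filter (fn (src, _) => src = n) edges)

fun hasCycle (edges, starts) =
   let
      fun dfs (path, n) =
         List.exists (fn m => m = n) path
         orelse List.exists (fn m => dfs (n :: path, m)) (successors (edges, n))
   in
      List.exists (fn n => dfs ([], n)) starts
   end

fun analysisEdges a = List.map (fn (g, ag) => (ag, g)) a

(* First example: A(f) = g, A(g) = f. *)
val ex1 = [("f", "g"), ("g", "f")]
val _ = hasCycle (analysisEdges ex1, List.map #1 ex1)   (* true: cycle found *)

(* Second example: A(f) = K2, A(g) = K1. *)
val ex2 = [("f", "K2"), ("g", "K1")]
val _ = hasCycle (analysisEdges ex2, List.map #1 ex2)   (* false: no cycle in this graph *)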

On the other hand, the following analysis is also cyclic:

fun f () = ... K1 (g ()) ...  ;   A(f) = K2 
fun g () = ... K2 (f ()) ...  ;   A(g) = K1

but the cycle won't be caught when I do the DFS, because the graph as built
above is acyclic.  Really, there should also be a set of edges
{(f, j) | (f, g, j) in N}, but building them means doing an Exp.foreachCall
on each function body, and you really should distinguish between Jump.t's
that occur in the analysis and ones that don't, because otherwise the graph
would have lots of extraneous nodes.
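
For what it's worth, here is what those extra edges would look like in the
same sketch, assuming (f, g, j) in N means that f calls g nontail with
continuation j (the triple order is my assumption); this reuses hasCycle
and analysisEdges from above:

(* Second example again, with the nontail calls as triples: f calls g with
   continuation K1, and g calls f with continuation K2.  Adding the edge
   (f, j) for each (f, g, j) in N makes the cycle visible. *)
val ex2N = [("f", "g", "K1"), ("g", "f", "K2")]
val extraEdges = List.map (fn (f, _, j) => (f, j)) ex2N
val _ = hasCycle (analysisEdges ex2 @ extraEdges, List.map #1 ex2)
        (* true: the DFS now sees the cycle f -> K1 -> g -> K2 -> f *)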

The other way that the transformation could "fail" is by not performing
exactly what the analysis indicates.  I was actually wrong about the
transformations ever entering an infinite loop.  Take the second example
above.  In both the old and new transformations, f and g would have their
"replace" fields filled in, so when doing the final pass over the
functions, they wouldn't be included in the resulting set of top-level
functions.  At the same time, since their bodies aren't walked, we never
find the local functions on which f and g are to be prefixed, so no
contification takes place.
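
If it helps, here is a toy model of that behavior; the record type and
finalPass below are purely illustrative and are not MLton's actual
representation:

type func = {name: string,
             body: string list,        (* jumps occurring in the body *)
             replace: string option}   (* SOME j if to be prefixed at jump j *)

(* The final pass keeps only the functions whose replace field is NONE,
   and only their bodies are walked to find prefix points. *)
fun finalPass (funcs: func list) =
   let
      val kept = List.filter (fn {replace, ...} => not (isSome replace)) funcs
      val prefixPointsFound = List.concat (List.map #body kept)
   in
      (List.map #name kept, prefixPointsFound)
   end

(* Second example: both f and g have their replace fields filled in. *)
val fg = [{name = "f", body = ["K1"], replace = SOME "K2"},
          {name = "g", body = ["K2"], replace = SOME "K1"}]
val (keptFuns, found) = finalPass fg
(* keptFuns = [] and found = []: f and g are dropped from the output,
   their bodies are never walked, and no contification is performed. *)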

Note that this is just a side effect of the way we try to do the
transformation efficiently.  If we were being less efficient and did each
"contification" as we processed the graph above, then we could get
erroneous results.

> OK, we're on the same page.  The new contifier looks like
> 	1. Compute Reach
> 	2. Analyze
> 	3. Transform
> where only (2) is dependent on contifyStrategy.

Correct.  In slightly more detail (sketched in code below), it looks like:
        2.1. Analyze Call
        2.2. Analyze Cont
        2.3. Analyze Dom
        2.4. Diagnostics
        2.5. Based on contifyStrategy, update the "rcont" field used by
               Transform with the results of the appropriate analysis
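
Here is a rough sketch of step 2, with all of the analyses passed in as
parameters; every name in it is made up for illustration and is not the
contify.fun interface:

(* Step 2, parameterized over the three analyses, the diagnostics pass,
   and the function that fills in the "rcont" fields for Transform. *)
datatype strategy = Call | Cont | Dom

fun analyze {analyzeCall, analyzeCont, analyzeDom, diagnostics, setRConts}
            (strategy, program) =
   let
      val call = analyzeCall program           (* 2.1 *)
      val cont = analyzeCont program           (* 2.2 *)
      val dom = analyzeDom program             (* 2.3 *)
      val () = diagnostics (call, cont, dom)   (* 2.4 *)
   in
      setRConts (program,                      (* 2.5 *)
                 case strategy of
                    Call => call
                  | Cont => cont
                  | Dom => dom)
   end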