latest contify.fun

Matthew Fluet fluet@CS.Cornell.EDU
Thu, 22 Feb 2001 11:57:58 -0500 (EST)


> I'm currently running benchmarks comparing the compile times with new
> transformation vs. the old transformation; I'll report on them tomorrow,
> although I haven't played with a G1 with the new contifier, so maybe it'll
> be bitten by the weird space problems we saw earlier.

Here are the results of the comparison.

Cols 1 & 2 use the dominator analysis with the new transformation and
record the ratio of time spent in 3 passes of contification to,
respectively, the time spent in the closure-convert simplify pass and the
total SML compile time.

Cols 3 & 4 are the same ratios, but using the call&cont analysis with the
old transformation. 

Col 5 is the ratio of time spent in 3 passes of contification with the
dominator analysis and the new transformation to time spent in 3 passes of
contification with the call&cont analysis and the old transformation. 

Col 6 is the same as Col 5, but with the new transformation using the
call&cont analysis.
                        
                        new     new     old     old     new     new
                        dom:    dom:    cc:     cc:     dom/    cc/
                        cont/   cont/   cont/   cont/   old     old
                        simp    comp    simp    comp    cc      cc
barnes-hut              0.103   0.013   0.088   0.011   1.200   1.200
checksum                0.333   0.020   0.250   0.020   1.000   1.000
count-graphs            0.122   0.019   0.100   0.015   1.250   1.250
fft                     0.206   0.029   0.094   0.013   2.333   2.000
fib                     0.333   0.026   0.0     0.0     inf     nan
knuth-bendix            0.149   0.019   0.106   0.013   1.400   1.400
lexgen                  0.135   0.022   0.095   0.014   1.571   1.571
life                    0.115   0.014   0.182   0.020   0.750   0.500
logic                   0.146   0.014   0.130   0.012   1.188   1.125
mandelbrot              0.0     0.0     0.0     0.0     nan     nan
matrix-multiply         0.0     0.0     0.0     0.0     nan     nan
merge                   0.0     0.0     0.0     0.0     nan     nan
mlyacc                  0.263   0.036   0.084   0.011   3.267   3.178
mpuz                    0.267   0.040   0.357   0.050   0.800   0.800
nucleic                 0.036   0.004   0.036   0.004   1.000   1.333
ratio-regions           0.179   0.029   0.085   0.013   2.143   2.143
raytrace                0.114   0.016   0.087   0.012   1.240   1.280
simple                  0.150   0.032   0.084   0.017   1.955   1.864
smith-normal-form       0.075   0.007   0.067   0.006   1.125   1.250
tak                     0.0     0.0     0.0     0.0     nan     inf
tensor                  0.216   0.030   0.068   0.009   3.200   3.200
tsp                     0.188   0.022   0.097   0.011   2.000   2.333
vliw                    0.203   0.034   0.085   0.013   2.600   2.600
wc                      0.111   0.012   0.115   0.014   1.000   1.000
zern                    0.250   0.033   0.105   0.014   2.500   2.000
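
To make the column arithmetic concrete, here is a small sketch of how
the six ratios above are derived.  This is not code from the compiler;
the record fields are just hypothetical names for the measured times
(in seconds) of the 3 contification passes, the closure-convert
simplify pass, and the whole SML compile, for each of the three runs.

   type run = {contify: real, simplify: real, compile: real}

   (* newDom: new transformation + dominator analysis
      newCC : new transformation + call&cont analysis
      oldCC : old transformation + call&cont analysis *)
   fun columns (newDom: run, newCC: run, oldCC: run) =
      {col1 = #contify newDom / #simplify newDom, (* new dom: cont/simp *)
       col2 = #contify newDom / #compile newDom,  (* new dom: cont/comp *)
       col3 = #contify oldCC / #simplify oldCC,   (* old cc: cont/simp *)
       col4 = #contify oldCC / #compile oldCC,    (* old cc: cont/comp *)
       col5 = #contify newDom / #contify oldCC,   (* new dom / old cc *)
       col6 = #contify newCC / #contify oldCC}    (* new cc / old cc *)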

Overall, the dom analysis and the new transformation are a little more
expensive.  One caveat: The new transformation corresponds to the file I
sent out yesterday; this always computes all three analyses, even if it
doesn't act on them.  Likewise, the old transformation corresponds to the
contify.fun in use before we introduced the dominator analysis;  i.e., it
computes both the call and cont analysis, but not the dominator analysis.
So, the bottom line is that everything under new computes an analysis
which is not computed by old.  Also, new computes the call and cont
analyses separately, rather than with one pass over the program as in old.
It would be possible to do all the analyses with one pass, especially now
that I have a clear understanding of how each one works.  Finally, old
never computes Reach (and the resulting analyses aren't quite safe under
our current definition); this requires a pass over the program in new that
can't be integrated with a unified analysis pass.
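
For what it's worth, here is the flavor of the single traversal I have
in mind.  This is only a sketch, nothing like the real contify.fun: the
types are made up, and it only shows the call and cont information
being gathered in one fold over the call sites instead of one pass per
analysis.

   datatype cont = Tail | Return of string  (* return continuation label *)

   type site = {caller: string, callee: string, cont: cont}

   (* For each callee, collect its callers (call analysis) and the
      continuations it is called with (cont analysis), in one pass,
      keeping the results in an association list. *)
   fun analyze (sites: site list)
       : (string * (string list * cont list)) list =
      let
         fun insert ({caller, callee, cont}, info) =
            case List.partition (fn (f, _) => f = callee) info of
               ([(_, (callers, conts))], rest) =>
                  (callee, (caller :: callers, cont :: conts)) :: rest
             | (_, rest) => (callee, ([caller], [cont])) :: rest
      in
         List.foldl insert [] sites
      end

Presumably the dominator analysis could share the same traversal; per
the above, the Reach computation would be the one pass that has to stay
separate.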

I don't think that the space leak was a problem; the ratio of new compile
times to old compile times averages 1.00.  But we should still rerun the
numbers under the most recent MLton.

Finally, I'll rerun these numbers to see which phases of the new
analysis/transformation are really increasing the time.  There might be
some fine-tuning that can be done in the transform routine.  Also, if
anyone wants to take a quick glance at the new contify pass and point out
any heinous inefficiencies, I'll try to work on them.
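
For the per-phase timing, something like the following is what I have
in mind (just a sketch, with made-up names): wrap each sub-phase (call
analysis, cont analysis, dominator analysis, transform) in a timer so
its cost gets reported separately.

   (* Run f, report how long it took under the given name, and return
      its result. *)
   fun timed (name: string) (f: unit -> 'a) : 'a =
      let
         val timer = Timer.startRealTimer ()
         val result = f ()
      in
         print (name ^ ": "
                ^ Time.toString (Timer.checkRealTimer timer) ^ "s\n")
         ; result
      end

For example: val info = timed "dominator analysis" (fn () =>
dominatorAnalysis program), where dominatorAnalysis and program stand
in for whatever the real code ends up calling them.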