No subject

Henry Cejtin henry@research.nj.nec.com
Mon, 10 May 1999 00:25:21 -0400


DAMN
I have been pimped.

I put my new coalescer (a very slow version) into MLton and tried to
compile the ML Kit with it.  With the old coalescer there were 16,221 chunks;
with the new one there were 12,677.  I.e., I think that the improvement looks
good.

Sadly, the compile did not actually succeed.  MLton printed out
        Bug: MLton cannot handle stack frames larger than 65536 bytes.

Are the stack frames it is talking about MLton stack frames or C stack
frames?  Why would making chunks larger cause stack frames to get bigger?  I
could see the C stack frames getting larger, but I don't see why the MLton
stack frames would.  I guess that more is saved on the MLton stack if there
are more inter-chunk transfers, which would be an argument that my coalescer
did a worse job than the random one, but that doesn't seem likely to me.

Anyway, my algorithm for coalescing is very simple: I just merge the pair of
classes, connected by an edge, whose merged class would be smallest.  This
can be done relatively efficiently with a heap, but it was too complicated to
figure out how to use your heap code (given that I don't know how well it
will work), so I just wrote an O(n^2) implementation.
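For illustration only, here is a minimal sketch of that greedy merge (not
MLton's actual code, and in Python rather than SML): classes carry sizes,
edges connect mergeable classes, and each pass scans all edges for the pair
whose merged size is smallest.  The size limit, class ids, and function name
are all hypothetical.

```python
def coalesce(sizes, edges, limit):
    """Greedy coalescing sketch.

    sizes: dict mapping class id -> size
    edges: iterable of 2-element pairs of class ids
    limit: hypothetical cap on a merged class's size

    Repeatedly merges the adjacent pair with the smallest merged size,
    stopping when no merge fits under `limit`.  Each pass scans every
    edge, so this is the naive O(n^2)-style version; a heap over edge
    weights would make it faster.
    """
    sizes = dict(sizes)
    edges = {frozenset(e) for e in edges}
    while True:
        best = None
        for e in edges:
            a, b = sorted(e)
            merged = sizes[a] + sizes[b]
            if merged <= limit and (best is None or merged < best[0]):
                best = (merged, a, b)
        if best is None:
            return sizes
        _, a, b = best
        # Merge b into a: combine sizes, redirect b's edges to a,
        # and drop any self-edge this creates.
        sizes[a] += sizes.pop(b)
        edges = {e2 for e in edges
                 if len(e2 := frozenset(a if x == b else x for x in e)) == 2}
```

With three classes of sizes 1, 2, and 3 all pairwise connected and a limit
of 4, the first merge joins the size-1 and size-2 classes (merged size 3);
the resulting size-3 class cannot merge with the remaining size-3 class
without exceeding the limit, so the pass stops with two classes.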

So, any guidance on what is causing the larger-than-64K stack frame?