speeding up IntInf?

Matthew Fluet <fluet@cs.cornell.edu>
Mon, 18 Feb 2002 17:53:46 -0500 (EST)


> At the moment the C code expects that it can roll back the heap frontier
> and the ML code allocates arrays which are the maximum required size.  I could
> add a check and do something else if the last allocated item wasn't the array
> that was passed to the C code.  It would slow things down though.

No; the idea would be to have the IntInf code do its own allocation.
What would change is that the IntInf_add primitive would somehow indicate
to the limit-check insertion pass which of its arguments contained an
integer that corresponded to the maximum amount of heap space it would
require.  That is, change

            val + = _prim "IntInf_add": int * int * word array -> int;
to
            val + = _prim "IntInf_add": int * int * Int.int -> int;
And also have
            val size = _prim "IntInf_size": int -> Int.int;

Now the C for IntInf_add would also get the frontier (but not the
Int.int), but would be responsible for its own allocation (i.e., put down
the right header, etc.).  Presumably the IntInf structure implementation
would use size and Int.max to compute the maximum size of the resulting
IntInf (as it does now with array lengths).

What changes is that the limit-check insertion pass would extract the
Int.int from allocating primitives and put in a limit check.

In reality, we'd want to implement IntInf_size in assembly just as we do
with isSmall and areSmall.