[MLton] Segmentation fault in 32 bit mode on a 6 GB machine with ram-slop 0.85

Matthew Fluet fluet at tti-c.org
Sun Nov 9 12:59:14 PST 2008


On Tue, 4 Nov 2008, Nicolas Bertolotti wrote:
> One thing I did not mention is that, in some cases, we may also have 
> some allocations that do succeed but cause the process to swap and slow 
> it down so much that it would have been more efficient to work with a 
> smaller heap.

There are different ways that this could arise.  One is that things like 
a heap's desired size are computed without taking the card/cross map into 
account, so the ultimate allocation requests more than the RAM that the 
ML process is supposed to use.  That could probably be handled better in 
the runtime system.
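For illustration, here is a minimal C sketch of the kind of accounting 
that would address the first problem: shrink the desired heap until the 
heap plus its card/cross map fits within the RAM budget.  The names, the 
256-byte card granularity, and the back-off policy are assumptions for 
the example, not MLton's actual runtime code.

  #include <stdint.h>
  #include <stdio.h>

  #define CARD_SIZE 256   /* assumed: bytes of heap covered per map entry */

  /* Assumed cost model: one byte of card map plus one byte of cross map
     per CARD_SIZE bytes of heap. */
  static uint64_t mapOverhead (uint64_t heapBytes) {
    return 2 * (heapBytes / CARD_SIZE);
  }

  /* Shrink the desired heap until heap + maps fit within the budget
     (budget = RAM * ram-slop, e.g. 0.85 of 6 GB in this thread). */
  static uint64_t fitHeapToBudget (uint64_t desired, uint64_t budget) {
    while (desired > 0 && desired + mapOverhead (desired) > budget) {
      uint64_t step = desired / 64;   /* back off ~1.5% per iteration */
      desired -= step ? step : 1;
    }
    return desired;
  }

  int main (void) {
    uint64_t budget = (uint64_t)(0.85 * 6 * 1024) * 1024 * 1024;
    uint64_t desired = budget;  /* naive desired size ignores the maps */
    printf ("desired = %llu, fitted = %llu\n",
            (unsigned long long)desired,
            (unsigned long long)fitHeapToBudget (desired, budget));
    return 0;
  }

With the naive request, allocating the card/cross map afterwards pushes 
the total past the budget; the fitted size leaves room for the maps.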

The other situation is that other processes on the system make the RAM 
that the ML process is supposed to use a bad estimate of what is actually 
available.  The runtime system alone cannot help with that.

-Matthew
