[MLton] Segmentation fault in 32 bit mode on a 6 GB machine with ram-slop 0.85

Nicolas Bertolotti Nicolas.Bertolotti at mathworks.fr
Mon Nov 3 17:12:50 PST 2008


> What platform is this?

The platform is Linux (64-bit OS, but my binary is compiled in 32-bit mode).

> Linux has a 3G cap, MinGW a 2G cap, and cygwin a 1.2G cap. Perhaps
> there is some system call which can inform this choice in a portable
> way?

As you can see in the log, the heap size before the allocation that leads to the crash was 3,854,442,496 bytes, which seems to indicate that the 3G cap is not an absolute limit (at least on a 64-bit OS).

I was surprised too. I had never seen that before on the 4G machines I usually have access to. I thought this was impossible.
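
For what it's worth, instead of hard-coding a per-platform cap, the limit could in principle be discovered at run time. Below is a minimal C sketch (my own illustration, not the actual runtime code; sysconf(_SC_PHYS_PAGES) is a common extension rather than strictly portable) that queries physical RAM and then probes, by binary search over anonymous mmaps, the largest single mapping the process can actually get:

#define _DEFAULT_SOURCE
#include <inttypes.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

/* Return nonzero if an anonymous mapping of the given size succeeds. */
static int canMap (size_t size) {
  void *p = mmap (NULL, size, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
  if (p == MAP_FAILED)
    return 0;
  munmap (p, size);
  return 1;
}

int main (void) {
  size_t page = (size_t)sysconf (_SC_PAGESIZE);
  /* Physical RAM, computed in 64 bits so a 6 GB machine does not
     overflow a 32-bit size_t. */
  uintmax_t physRam = (uintmax_t)sysconf (_SC_PHYS_PAGES) * page;
  /* Binary search, in page-sized steps, for the largest size mmap accepts. */
  size_t lo = 0, hi = (size_t)-1 / page * page;
  while (hi - lo > page) {
    size_t mid = lo + (hi - lo) / 2 / page * page;
    if (canMap (mid)) lo = mid; else hi = mid;
  }
  printf ("physical RAM:           %ju bytes\n", physRam);
  printf ("largest single mapping: %zu bytes\n", lo);
  return 0;
}

On a 32-bit process on 64-bit Linux, such a probe would typically report around 3G for the largest mapping, which matches the cap being discussed, while still leaving room for the OS to grant more in total across several mappings.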

> I pointed out a related problem on Oct 14
> <http://mlton.org/pipermail/mlton/2008-October/030413.html>:
> > Another thing I've noticed is that MLton's estimate of the available
> > RAM can trick it into making bad choices for the heap size. You can
> > only possibly get 1.2G on cygwin, but MLton thinks it should be able
> > to get an amount = physical ram. It sets the minSize and desiredSize
> > both too high and the remap then fails. However, if it had a less
> > aggressive minSize, it could have allocated another 300MB. I'm not
> > sure what to do about this. Perhaps cap the physical RAM based on the
> > platform?
>
> I had hoped to generate some discussion, but it seems no one was
> listening. :-)

As far as I know, the fact that desiredSize can be too high is not a big issue. After a number of backoffs, mmap/mremap succeeds and the only consequence is that createHeap/remapHeap runs a bit slower.
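
To illustrate what I mean by backoffs, here is a rough sketch of such a loop (hypothetical names, not the actual createHeap code): the request shrinks from desiredSize toward minSize on each failed mmap, so an over-estimated desiredSize only costs a few extra attempts, and only a request below minSize is treated as out of memory.

#define _DEFAULT_SOURCE
#include <stddef.h>
#include <sys/mman.h>

#define BACKOFF_STEPS 20

/* Try to map a heap of at most desiredSize but at least (roughly) minSize
   bytes, assuming desiredSize >= minSize, shrinking the request on each
   failure.  Only if even the smallest request fails do we give up. */
static void *tryCreateHeap (size_t desiredSize, size_t minSize,
                            size_t *actualSize) {
  for (int i = 0; i <= BACKOFF_STEPS; i++) {
    size_t size = desiredSize
      - (desiredSize - minSize) / BACKOFF_STEPS * (size_t)i;
    void *p = mmap (NULL, size, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p != MAP_FAILED) {
      *actualSize = size;
      return p;
    }
  }
  *actualSize = 0;
  return NULL;  /* even the request near minSize failed */
}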

I have never encountered any issue with a too high value for minSize. If there were some, I guess they would have led to some unjustified "Out of memory" crashes, and the only ones I have experienced so far were caused by the fact that the card/cross map was allocated after the heap itself (which is not the case anymore).

Anyway, I am spending my time trying to improve memory handling, and there are still some open directions.

I sent an e-mail about those (see http://mlton.org/pipermail/mlton/2008-September/030358.html).

One thing I did not mention is that, in some cases, we may also have some allocations that do succeed but cause the process to swap and slow it down so much that it would have been more efficient to work with a smaller heap.
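
One possible mitigation, sketched below with a hypothetical helper (and assuming the glibc-specific _SC_AVPHYS_PAGES, which only counts free pages and ignores reclaimable cache), would be to cap the requested heap at a fraction of the currently free RAM rather than of the total physical RAM, so that an allocation that would only "succeed" by pushing the process into swap is never asked for.

#define _DEFAULT_SOURCE
#include <stdint.h>
#include <unistd.h>

/* Hypothetical helper: cap the requested heap at ramSlop times the RAM
   that is currently free.  _SC_AVPHYS_PAGES is a glibc extension. */
static uintmax_t cappedHeapSize (uintmax_t desiredSize, double ramSlop) {
  uintmax_t page  = (uintmax_t)sysconf (_SC_PAGESIZE);
  uintmax_t avail = (uintmax_t)sysconf (_SC_AVPHYS_PAGES) * page;
  uintmax_t cap   = (uintmax_t)(ramSlop * (double)avail);
  return desiredSize < cap ? desiredSize : cap;
}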

Also, in a multi-core context, when 2 processes are running, it is sometimes more efficient to suspend the execution of one process after paging its heap to disk. Then the second process won't swap and will finish the job in a shorter amount of time.
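
Something like the following sketch (hypothetical helpers, no error handling, not the runtime's code) is what I have in mind for paging a heap out and back in; note that a heap containing absolute pointers would have to be restored at its original address (MAP_FIXED) or translated after the read.

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

/* Write the heap out and release its memory; returns the temp file name. */
static char *pageHeapToDisk (void *heap, size_t size) {
  char *name = strdup ("/tmp/heap-XXXXXX");
  int fd = mkstemp (name);
  FILE *f = fdopen (fd, "wb");
  fwrite (heap, 1, size, f);
  fclose (f);
  munmap (heap, size);
  return name;
}

/* Map a fresh region and read the saved heap back into it. */
static void *restoreHeapFromDisk (char *name, size_t size) {
  void *heap = mmap (NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  FILE *f = fopen (name, "rb");
  size_t got = fread (heap, 1, size, f);
  fclose (f);
  remove (name);
  free (name);
  return got == size ? heap : NULL;
}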

Those are just questions for now...

Nicolas


