[MLton] max-heap setting for 64-bit applications

Matthew Fluet matthew.fluet at gmail.com
Fri Dec 11 23:48:17 PST 2009


On Fri, Dec 11, 2009 at 7:17 PM, Wesley W. Terpstra <wesley at terpstra.ca> wrote:
> Stepping outside of the current discussion, a matter of practicality: is the
> paging really that bad? AFAIK, it is a memcpy running between the two heaps
> which is sequential access. It shouldn't take longer than it would take to
> copy a 3GB file on the disk.
>
> If it *is* taking longer, then we need to add a hint to the windows VM to
> tell it that we will be doing sequential access before the memcpy, then flip
> it back to random-access mode.

It isn't quite sequential access.  See growHeap in
<src>/runtime/gc/heap.c.  We actually perform the copy from back to
front, with a memcpy (presumably doing sequential access) for each 32M
chunk.  The reason for starting the copy from the back is that we can
shrink the old heap as we go.
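The idea is roughly the following (a minimal sketch, not the actual growHeap code from <src>/runtime/gc/heap.c; the names and the release_tail callback are illustrative):

```c
#include <stddef.h>
#include <string.h>

#define CHUNK_SIZE ((size_t)32 * 1024 * 1024)  /* 32M per memcpy */

/* Copy `size` bytes from `old` to `new_`, back to front, one 32M
 * chunk at a time.  Each memcpy is itself a sequential copy, but the
 * chunks are visited in reverse order.  Once a chunk has been copied,
 * the tail of the old heap is no longer needed, so the runtime can
 * shrink the old heap before copying the next chunk; release_tail
 * stands in for that step and may be NULL. */
static void copy_back_to_front(unsigned char *new_, unsigned char *old,
                               size_t size,
                               void (*release_tail)(unsigned char *, size_t)) {
  size_t remaining = size;
  while (remaining > 0) {
    size_t chunk = remaining < CHUNK_SIZE ? remaining : CHUNK_SIZE;
    remaining -= chunk;
    memcpy(new_ + remaining, old + remaining, chunk);
    if (release_tail != NULL)
      release_tail(old + remaining, chunk);  /* shrink old heap as we go */
  }
}
```

Copying backwards is what lets the peak footprint stay below old + new for the duration of the copy, at the cost of the overall access pattern no longer being strictly sequential.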

> On Fri, Dec 11, 2009 at 9:32 PM, Matthew Fluet <matthew.fluet at gmail.com>
> wrote:
>>
>> Right, and the annoying bit is that the previous heap was so close to
>> the max heap setting.  Perhaps a reasonable heuristic is that if a
>> desired heap is "close-to" the max-heap size, just round up.  Perhaps
>> 0.75 of max heap?  In the max-heap 3G setting, this could still leave
>> you in the situation where you have a 2.25G allocation and a 3G
>> allocation at the same time to copy.  Or 0.55 of max heap; that could
>> require 1.65G+3G at the time of the copy.
>
> I would be against yet another special case in the sizing rules. Any cutoff
> we pick is going to fail for someone else the same way, while artificially
> restricting the memory growth for others. His problem would be (mostly)
> fixed if we flipped windows mremap to only move if growth fails.

That assumes that the inplace growth can succeed.  (Agreed that on
amd64, there is a good chance that it will succeed.)  If the inplace
growth cannot succeed, then the runtime will attempt a moving growth (via
an explicit create and copy), and we still overcommit memory (relative
to the max-heap setting).
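On Linux, the try-in-place-then-move strategy looks roughly like this (a sketch under the assumption of Linux mremap semantics, not MLton's actual runtime code; grow_heap is an illustrative name):

```c
#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>

/* First try to grow the mapping in place; only if that fails, fall
 * back to a moving growth (explicit create and copy), during which
 * both the old and new heaps are mapped simultaneously -- the
 * overcommit relative to the max-heap setting. */
static void *grow_heap(void *old, size_t old_size, size_t new_size) {
  /* Without MREMAP_MAYMOVE, mremap either extends the mapping at its
   * current address or fails. */
  void *p = mremap(old, old_size, new_size, 0);
  if (p != MAP_FAILED)
    return p;  /* grew in place: no copy, no overcommit */
  /* Moving growth: create a new mapping and copy. */
  void *new_ = mmap(NULL, new_size, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (new_ == MAP_FAILED)
    return NULL;  /* out of memory */
  memcpy(new_, old, old_size);
  munmap(old, old_size);
  return new_;
}
```

Whether the in-place attempt succeeds depends on the address space layout; on amd64 the virtual address space is large enough that it usually does, which is why overcommit is rarely observed there.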

> I have a higher-level solution proposal: MLton.GC already has hooks that are
> executed after a GC to implement finalizers. Expose these to the user. If a
> user knows his application only consumes X memory on an error condition, he
> can test for this after a GC and terminate with an "Out of Memory" error as
> desired.

You still need to get out of the GC before the GC signal handler runs.

>> the mremap function is described as using the Linux page table scheme to
>> efficiently change the mapping between virtual addresses and (physical)
>> memory pages.  Its purpose is to be more efficient than allocating a new
>> map and copying.
>
> If I could ...

I guess my point is that the way you indicate that you aren't more
efficient than alloc/copy is by not providing mremap at all.
Everything else in the generic implementation, including attempting
the in-place expansion, is more efficient; it is just starting off
with the alloc/copy that doesn't seem to make sense.
