[MLton] RE: card/cross map in heap

Nicolas Bertolotti Nicolas.Bertolotti at mathworks.fr
Thu Jul 17 04:35:27 PDT 2008


> But, only if doing the dump really gets you a larger heap.  If the program
> has live data close to the maximum contiguous map that mmap can allocate,
> then you will not only GC whenever allocating a few bytes, but you will
> also dump the heap to disk.  In that situation, you are actually better
> off sticking with the existing heap, because paging to disk is so
> expensive.

I guess it works fine in my case because the SML binary is basically the only binary that is running on the machine at that time (we always use dedicated servers).

I don't think there is a way to predict, without actually dumping, what mmap() would be able to allocate after we dump the heap to disk.
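
The closest approximation I can think of is to probe the address space with a PROT_NONE mapping before deciding. The sketch below is purely illustrative on my part (can_mmap() is not part of the MLton runtime), and it still cannot account for the address space that unmapping the current heap would free, which is precisely what we would need to know:

  #include <stdbool.h>
  #include <stddef.h>
  #include <sys/mman.h>

  /* Ask whether a contiguous anonymous mapping of the given size is
     currently possible, without committing any memory. */
  static bool can_mmap (size_t size) {
    void *p = mmap (NULL, size, PROT_NONE,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED)
      return false;
    munmap (p, size);
    return true;
  }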

> > In order to solve the issue, I have added the following piece of code at
> > the beginning of the remapHeap() function (see the attached patch, based
> > on the one you previously sent):
> >  if (desiredSize > minSize && h->size == minSize)
> >    minSize = align(minSize + backoff, s->sysvals.pageSize);
> > I am not sure it is the best thing to do, but it works fine in my case.
>
> This simply demands 'proper' growth from resizeHeap when the current heap
> is minSize.  As far as I can tell, this change will simply cause remapHeap
> to bail out before attempting to mremap to the original minSize (= current
> heap size).  Depending on rounding, it might attempt a mremap at a size
> that is a few pages smaller than desiredSize - 15*backoff, but it would be
> unusual for one to succeed and the other to fail.

Not just a few pages smaller!

With the patch, when the heap is 2 GB and desiredSize is 3 GB, remapHeap() will fail if it is not able to remap to at least a 2.05 GB heap, which is not much of an increase.

The patch basically guarantees that remapHeap() does not succeed when we request additional bytes but cannot actually grow the heap by at least 1/20 of the requested growth.
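
To make that concrete, here is how I picture the backoff loop once the patch is applied. This is a simplified sketch, not the actual remapHeap() code; in particular the 1/20 divisor and the helper names are my assumptions:

  #define _GNU_SOURCE            /* for mremap */
  #include <stdbool.h>
  #include <stddef.h>
  #include <sys/mman.h>

  struct heap { void *start; size_t size; };

  static size_t alignUp (size_t n, size_t pageSize) {
    return (n + pageSize - 1) & ~(pageSize - 1);
  }

  static bool remapHeapSketch (struct heap *h, size_t desiredSize,
                               size_t minSize, size_t pageSize) {
    size_t backoff = alignUp ((desiredSize - minSize) / 20, pageSize);
    if (backoff < pageSize)
      backoff = pageSize;
    /* The patch: when the heap already sits at minSize, require at
       least one backoff step of real growth instead of trivially
       "succeeding" at the current size. */
    if (desiredSize > minSize && h->size == minSize)
      minSize = alignUp (minSize + backoff, pageSize);
    for (size_t size = desiredSize; size >= minSize; size -= backoff) {
      void *p = mremap (h->start, h->size, size, MREMAP_MAYMOVE);
      if (p != MAP_FAILED) {     /* grown (and possibly moved) */
        h->start = p;
        h->size = size;
        return true;
      }
    }
    return false;
  }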

>
> With the above patch, what behavior do you see?  Does it go
>    remapHeap fail; createHeap fail; write heap; createHeap succeed; read heap
> or does it go
>    remapHeap fail; createHeap succeed
> And what are the sizes of the heaps that fail and succeed?

I see the first behavior. desiredSize is about 3 GB and minSize is about 2 GB on a 4 GB machine. Before the disk dump, we fail to allocate those 2.05 GB. After the disk dump, we can allocate the 3 GB.
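
So we end up on the last-resort path: save the heap, release it, and map a larger one. As a sketch of that path (again my own illustration with invented names, not the runtime's actual code):

  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/mman.h>

  /* Write the live heap to a temporary file, release its mapping,
     mmap the larger size, and read the data back.  Failures abort,
     since the old heap is unrecoverable once unmapped. */
  static void *grow_via_disk (void *oldStart, size_t oldSize,
                              size_t newSize) {
    FILE *f = tmpfile ();
    if (f == NULL || fwrite (oldStart, 1, oldSize, f) != oldSize)
      abort ();
    munmap (oldStart, oldSize);  /* free the 2 GB of address space */
    void *p = mmap (NULL, newSize, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
      abort ();                  /* here, the dump was for nothing */
    rewind (f);
    if (fread (p, 1, oldSize, f) != oldSize)
      abort ();
    fclose (f);
    return p;
  }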

>
>
> I'm not sure that there is a really principled way to make the decisions
> about keeping vs dumping the current heap.  What we really want is the
> ability to ask the question: if I unmap this memory, can I mmap this size?
> It seems that mremap should nearly approximate the answer to that question
> (since it is allowed to move the base address of the mapping), but your
> experience above seems to suggest that there are situations where we
> cannot mremap from the current heap size to even a slightly larger heap,
> but if we unmap (a portion of) the current heap (either by shrinking it or
> paging it to disk and releasing it), then we can mmap something larger.

I can easily reproduce the problem by compiling one of our binaries on a specific machine with a version of MLton that does not include the patch. I will investigate a bit more in order to identify why we are able to allocate a larger area after paging to disk.



