[MLton] More on Parallel Runtime

skaller skaller at users.sourceforge.net
Mon Oct 22 12:55:56 PDT 2007


On Mon, 2007-10-22 at 15:34 -0400, Eric McCorkle wrote:
> On Oct 20, 2007, at 9:13 PM, skaller wrote:

> > But very inefficient: even if you can optimise placement of
> > safe-points, you will need one in every tight loop. This is
> > probably not acceptable.
> 
> That's an implementation detail.  

Not really: the fundamental problem is that you want safe-points
at some *time* granularity, but you can only place them at *code*
granularity.

Pre-emption, by contrast, works at time granularity, which is
what's required.

In fact the Felix gc has 'safe-points', and it's a real problem
because there aren't enough of them. The gc needs to do a world
stop, and the safe-points are actually allocations, but a CPU
doing a matrix inversion may not allocate for hours.. ;(

So if you come up with a way to calculate how to insert enough
safe-points without it costing too much, I'd be really interested!
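
For concreteness, the kind of check you'd have to weave into such a
loop is something like this (a sketch only -- gc_pending and
gc_safepoint() are names I'm making up, not Felix's actual runtime):

    /* A polled safe-point in a tight numeric loop. */
    #include <stdatomic.h>

    extern atomic_bool gc_pending;   /* set by whoever wants a world stop */
    void gc_safepoint(void);         /* park this thread until the world restarts */

    void invert(double *m, int n)
    {
        (void)m;
        for (int i = 0; i < n; i++) {
            /* ... long-running numeric work, no allocation ... */
            if (atomic_load_explicit(&gc_pending, memory_order_relaxed))
                gc_safepoint();      /* the check every tight loop must pay for */
        }
    }

Cheap per check, but it is still a load and a branch inside the
hottest loop in the program.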

> A way around this specific case might be to unroll the loop, or only
> check the mailbox every 16th, 32nd, or whatever-th loop iteration.
> The point is, I'm assuming safe-points occur "at reasonable
> intervals".  It should be possible to build a smart safe-point
> generator that weaves them into pipeline stalls, ensures no more
> than some k cycles pass between each safe-point, et cetera.
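
For what it's worth, that amortised check would look roughly like
this, reusing the made-up names from the sketch above (the interval
is also made up):

    /* The same loop, polling only every 32nd iteration. */
    void invert_amortised(double *m, int n)
    {
        (void)m;
        for (int i = 0; i < n; i++) {
            /* ... work ... */
            if ((i & 31) == 0 &&
                atomic_load_explicit(&gc_pending, memory_order_relaxed))
                gc_safepoint();      /* at most one poll per 32 iterations */
        }
    }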

I would be really interested in seeing such a generator! I have one
idea myself, which is very nasty ..

Keep a table of safe-points OUTSIDE the code. The code runs without
executing any safe-point checks. Then a timer fires. In the signal
handler, look up the table, find the next safe-point(s), and
physically insert breakpoint interrupt instructions into the code
(this is more or less what debuggers do).
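
Roughly, on x86 with POSIX signals, it would look like this (a sketch
only; the safe-point table and gc_world_stop() are names I'm making
up):

    /* A real version has to cope with threads, with mprotect() not
       being guaranteed async-signal-safe, and with rewinding the
       instruction pointer after the trap. */
    #include <signal.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define INT3 0xCC                      /* x86 breakpoint opcode */

    extern uint8_t *safepoint_addr[];      /* code addresses of safe-points */
    extern uint8_t  safepoint_orig[];      /* saved original byte at each one */
    extern size_t   safepoint_count;

    void gc_world_stop(void);              /* the actual world-stop work */

    static void write_code_byte(uint8_t *addr, uint8_t byte)
    {
        long pagesz = sysconf(_SC_PAGESIZE);
        uint8_t *page = (uint8_t *)((uintptr_t)addr & ~(uintptr_t)(pagesz - 1));
        /* Code pages are normally read+execute only, so open a hole. */
        mprotect(page, (size_t)pagesz, PROT_READ | PROT_WRITE | PROT_EXEC);
        *addr = byte;
        mprotect(page, (size_t)pagesz, PROT_READ | PROT_EXEC);
    }

    /* Timer (SIGALRM) handler: arm every safe-point.  The running code
       traps at the next one it reaches, paying nothing until then. */
    static void on_timer(int sig)
    {
        (void)sig;
        for (size_t i = 0; i < safepoint_count; i++) {
            safepoint_orig[i] = *safepoint_addr[i];
            write_code_byte(safepoint_addr[i], INT3);
        }
    }

    /* Breakpoint (SIGTRAP) handler: disarm everything, then stop the
       world.  (A real handler must also rewind the saved instruction
       pointer by one byte before returning, since int3 has executed.) */
    static void on_trap(int sig)
    {
        (void)sig;
        for (size_t i = 0; i < safepoint_count; i++)
            write_code_byte(safepoint_addr[i], safepoint_orig[i]);
        gc_world_stop();
    }

    void install_handlers(void)
    {
        signal(SIGALRM, on_timer);         /* a real version would use sigaction */
        signal(SIGTRAP, on_trap);
    }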

Although the calculation is expensive, it only happens every so
often, e.g. every 1ms, so the cost can be amortised to as small a
percentage of total CPU time as you like.

-- 
John Skaller <skaller at users dot sf dot net>
Felix, successor to C++: http://felix.sf.net


