[MLton] cvs commit: better blocking/unblocking of signals in runtime and basis

Matthew Fluet fluet@cs.cornell.edu
Fri, 16 Apr 2004 09:10:20 -0400 (EDT)


> I'd love to have some input from others here, but I don't know of any
> reason why sigprocmask would behave any differently while in a handler
> than in normal code.
>
> What happens in the C world is that when the handler is installed one
> specifies the signals that one wants to be blocked whenever the
> handler runs (implicitly, this always includes the signal being
> handled).
>
> On the other hand, in our SML world, one does not specify anything
> when the handler is installed about signals to be blocked whenever the
> handler runs.  Instead, all signal handlers run with all signals that
> are handled by ML blocked.

Well, we already have a situation where something happens in the SML world
that cannot happen in the C world: the sigset_t of signals blocked while a
handler runs is computed dynamically at the handling of a signal, rather
than being fixed in sa_mask when the signal handler is installed.

> One reason why we did this is that the signal handler thread loops
> over gcState.signalsPending, running a handler for each signal whose
> bit is set.  Once all the handlers have been run, we call
> GC_finishHandler, which empties gcState.signalsPending.  So, we could
> lose signals if we allowed gcState.signalsPending to be modified while
> we are running handlers.  We could cut down the period when signals
> need to be blocked a lot by having the signal handler
> thread first loop over signalsPending and collect all the signals that
> need to be handled.  It could then empty signalsPending and unblock
> signals (restoring the mask to whatever it was when we first blocked
> at the start of the signal handler).

That sounds reasonable.  We could further cut down on the time that
signals are blocked by copying gcState.signalsPending to another sigset_t
which would be queried by the ML side.  That moves the loop out of the
blocked time.

> In fact, maybe we could get rid of {block,unblock}Signals from
> enter/leave.  We are using them right now to prevent GC_handler from
> mucking with limit while we are running C code.  What if instead we
> treat enter/leave as a critical section, with enter doing canHandle++
> and leave doing canHandle--.  Then GC_handler will do the right thing.
> And we won't need to keep track of gcState.signalsBlocked at all.  And
> we won't need to keep an ML-side representation of the signal mask.
> And the calls to Signal.Mask.{block,unblock,setBlocked} can go back to
> using sigprocmask with the appropriate how, doing no other
> computation.
>
> How does that sound?

Treating time in the runtime as time in a critical section seems
reasonable.  It will mean that in GC_switchToThread we will be switching
to the handler thread when canHandle == 1, not canHandle == 0.