mem usage

Henry Cejtin henry@sourcelight.com
Sun, 4 Nov 2001 12:57:32 -0600


It is all a bit complicated, since things basically depend on what kind of job
mix you care about.  If it is mainly an interactive load, then it makes sense
to keep a good chunk of memory `available' so that when some interactive job
needs to run you don't have to swap something else out.  The case of big batch
compiles, like self-compiles, is exactly the opposite.  From the point of view
of maximizing throughput you want to run the jobs sequentially to completion
(except for using multiple processes to hide some disk latency).

The out-of-memory errors are really the run-time system's fault: if an mmap
fails it should scale back the requested amount and live with it instead of
dying.  This still isn't a great solution, since you would then be running
with all the memory you could get, so all other processes would get nothing
(more or less).

It looks like we should scale back a bit.  Maybe change the magic RAMSLOP
to 80%?