:...
:> : merging such a feature?
:>
:> Assuming that it doesn't break anything, that it doesn't introduce a
:> severe performance penalty and works, there would be interest. There
:> are times that this is a desirable feature.
:
:This thread reminds me of what happened when I brought up the same
:issue a few years ago, arguing that the kernel shouldn't overcommit
:> memory (i.e., the same thing, everybody thought I was nuts :)
:
:For me it helps to understand people's underlying motivation. Here's
The memory overcommit thread comes up once or twice a year. So this
time around I am going to take a different tack in explaining the
issue.
One could argue about making the OS not overcommit until one is blue in
the face. One could argue that every single routine that allocates
memory must be prepared to handle a memory failure in a graceful
manner. One could argue all sorts of high-and-mighty value things.
But it's all a crock. It simply isn't possible to gracefully handle
an out-of-memory condition. All sorts of side effects occur when
the system runs out of memory, even *with* overcommit protection.
In fact, all sorts of side effects occur even when the system
*doesn't* run out of memory, but instead just starts to thrash swap.
All sorts of side effects occur if the system starts to get cornered
memory-wise even if you don't *have* any swap. The number of possible
combinations of effects is near infinite and nearly impossible to
program against. Simply put, the 'overcommit' argument doesn't
actually solve the problem in any meaningful way. Any significantly
sized program that actually handled every possible combination and
side effect of an out-of-memory condition gracefully would have so much
conditional garbage cluttering it up that the rest of the code would
be obscured in a haze of if()'s.
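To make that concrete, here is a minimal sketch of the hand-propagation
style (client_create() and the struct are hypothetical, purely for
illustration); every caller has to check, unwind, and pass the failure
up the chain:

#include <stdlib.h>
#include <string.h>

#define CLIENT_BUFSIZE 4096

struct client {
	char *name;
	char *buf;
};

/*
 * Every callee can fail, so every caller must check, clean up
 * whatever was already allocated, and propagate -- the "haze of
 * if()'s" -- and this is just one three-allocation function.
 */
struct client *
client_create(const char *name)
{
	struct client *cl;

	if ((cl = malloc(sizeof(*cl))) == NULL)
		return (NULL);
	if ((cl->name = strdup(name)) == NULL) {
		free(cl);
		return (NULL);
	}
	if ((cl->buf = malloc(CLIENT_BUFSIZE)) == NULL) {
		free(cl->name);
		free(cl);
		return (NULL);
	}
	return (cl);
}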
There ain't no magic bullet here, folks. By the time you get a memory
failure, even with no overcommit, it is far too late to save the day.
There is only one correct way to handle an out of memory condition and
that is to make sure it doesn't happen... so when it does happen you
can treat it as a fatal error and scream bloody murder. It's a whole lot
easier to design a program with bounded, deterministic memory use (e.g.
database X requires Y kilobytes of memory per client instance) and
control that use at the edges rather than try to deal with memory
failures gracefully in the deepest darkest corners of the program.
And it's a whole lot more reliable, too.
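As a rough sketch of what "controlling use at the edges" can look like
(the budget numbers and admit_client() are made up for illustration):
the program admits new work only while its worst-case budget holds, so
the allocations deeper inside never fail in practice:

#define MEM_BUDGET	(64UL * 1024 * 1024)	/* assumed total budget */
#define PER_CLIENT_MEM	(64UL * 1024)		/* assumed worst case per client */

static unsigned long nclients;

/*
 * Refuse new work at the edge when the budget would be exceeded.
 * Nothing past this check ever needs to handle allocation failure.
 */
int
admit_client(void)
{
	if ((nclients + 1) * PER_CLIENT_MEM > MEM_BUDGET)
		return (-1);
	nclients++;
	return (0);
}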
When I write such a program, if I care about bounding memory use
I do it at the edges. I don't pollute low level allocation routines
(like strdup(), small structural allocations, etc...) with all sorts
of conditionals to gracefully back out of a memory failure. It's a huge
waste of time. I just wrap the routines... safe_strdup(), safe_malloc(),
safe_asprintf()... and the wrappers scream bloody murder and exit if
they get a failure. It's that simple... and far more reliable.
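A minimal sketch of such wrappers might look like this (the exact
message and exit status are one choice among many):

#include <err.h>
#include <stdlib.h>
#include <string.h>

/* Scream bloody murder and exit; callers never check the result. */
void *
safe_malloc(size_t size)
{
	void *p;

	if ((p = malloc(size)) == NULL)
		err(1, "malloc(%lu) failed", (unsigned long)size);
	return (p);
}

char *
safe_strdup(const char *s)
{
	char *p;

	if ((p = strdup(s)) == NULL)
		err(1, "strdup failed");
	return (p);
}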
-Matt