On Mon, Nov 23, 2009 at 07:49:09PM +0000, David Given wrote:

> > When the code that's asking for more memory is running from atomic
> > context, the system can't sleep to free up more memory.
> > 
> > I.e., an atomic allocation is "Either give me the memory now or tell
> > me you don't have any, but don't tell me to wait."
> > 
> > There's no bug anywhere, this is just expected behavior.
> 
> Well, I have been seeing services mysteriously die every now and again
> --- more so on the NSLU2 than on the SheevaPlug. When I check the logs
> there is usually one of these dumps for the process that has died. So
> there's obviously something else going on.

The kernel can end up killing processes if there is not enough memory
to satisfy a memory allocation request made by one of the processes in
the system.

This is a different thing than the network stack being temporarily
unable to allocate memory to service an incoming packet.

If you don't have enough RAM+swap for all the processes on your system to
run comfortably, you will end up seeing stack traces from the network
stack as well as killed processes.  The original poster was seeing the
former and not the latter, and the former by itself, without the latter,
is not a cause for concern.

If you're seeing both temporary memory allocation failures and killed
processes, you're seeing something different from what the original
poster was seeing.

