On Sat, 18 Sep 1999 14:37:06 +0200, Marcin Owsiany wrote:
[...]
>> I'm pretty sure the reason why the processes fail is that memory usage is
>> too high (it's *definitely* not due to memory problems, like failing RAM
>> modules or overclocked CPUs.) Memory usage is permanently about 99%, swap
>> usage only a few percent. But obviously processes are dying because they
>> can't allocate "real" memory?!
>
>As far as I know, Unix processes don't care whether something is physical
>memory or not. They simply use virtual memory, and it's the kernel's job to
>provide it to them.
Well, this is also my information. Seems to be common knowledge. :)

>As for the memory amount, I don't think that running a box at its full
>memory capabilities does something bad to Linux. I myself used to administer
>a box which had as little as 8 MB of RAM (Pentium 90 MHz) with the following
>services: httpd, ftpd, squid and sometimes even X!

For about a year I ran a similar machine: it also acted as an answering and
fax reception machine. Everything was fine, even with only 8 megs.

>It was thrashing horribly, but was usable and hardly ever crashed.
>So it's not the amount of physical memory you have.. I'd rather think of
>making more swap (don't know how much your box has...), since some peaks in
>memory usage can be lethal to Linux.

This is what I'm currently blaming for the dying of the processes. I have
drastically increased the swap sizes, and I will observe whether this solves
my problems.

Thanks,
Ralf

--
Sign the EU petition against SPAM:        L I N U X        .~.
http://www.politik-digital.de/spam/       The Choice       /V\
                                          of a GNU        /( )\
                                          Generation      ^^-^^
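P.S. For anyone watching a box like this, here is a quick sketch of how the
RAM/swap figures mentioned above can be read on Linux. It assumes a standard
/proc filesystem (the MemTotal/MemFree/SwapTotal/SwapFree fields are the
usual ones in /proc/meminfo); the /swapfile path in the comments is only an
example, not something from this thread.

```shell
#!/bin/sh
# Report physical RAM and swap usage by parsing /proc/meminfo (Linux).
awk '/^MemTotal:/  {mt=$2}
     /^MemFree:/   {mf=$2}
     /^SwapTotal:/ {st=$2}
     /^SwapFree:/  {sf=$2}
     END {
       printf "RAM:  %d of %d kB used\n", mt - mf, mt
       if (st > 0)
         printf "Swap: %d of %d kB used\n", st - sf, st
       else
         print "Swap: none configured"
     }' /proc/meminfo

# To add more swap without repartitioning, a swap file works too
# (needs root; the path and size here are just an example):
#   dd if=/dev/zero of=/swapfile bs=1M count=256
#   mkswap /swapfile
#   swapon /swapfile
```

Note that this only reports raw free memory; the kernel also uses otherwise
idle RAM for buffers and cache, so "99% used" on its own is not necessarily
a problem.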