On 13/12/2011 10:05, <jcup...@gmail.com> wrote:

> That's not the problem, I think. All modern systems will let you
> allocate at least ~1.5 gb before refusing malloc, no matter how much
> memory you have or what other processes are doing. The trick is
> keeping the working set of the processes within physical memory and
> achieving that needs programs with some way to constrain their memuse.
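Something along these lines, perhaps? A rough, untested sketch of a
process capping its own address space with setrlimit(RLIMIT_AS), so
that an oversized malloc() fails cleanly instead of dragging the
machine into swap (the 512 MB figure is arbitrary, just for
illustration):

/* Cap this process's address space; allocations beyond the cap make
 * malloc() return NULL instead of succeeding and swapping later. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit lim;

    lim.rlim_cur = 512 * 1024 * 1024;   /* soft limit: 512 MB (arbitrary) */
    lim.rlim_max = 512 * 1024 * 1024;   /* hard limit */
    if (setrlimit(RLIMIT_AS, &lim) != 0) {
        perror("setrlimit");
        return 1;
    }

    /* 1 GB exceeds the cap, so this should fail even on systems that
     * normally overcommit memory. */
    void *p = malloc((size_t) 1024 * 1024 * 1024);
    if (p == NULL)
        fprintf(stderr, "large allocation refused, as expected\n");
    else
        free(p);

    return 0;
}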
Well, this is something I can do without much effort, I guess. Still
not sure if it is enough, but I will implement it for sure.

> > Aborting the server is something I would like to avoid at all
> > costs, because the outside problems may be temporary. For instance,
> > malloc could fail because another server running on the same
> > machine is processing 1 million transactions. After they are
> > processed, the resources are already available.
>
> I don't think that can happen on any current system. malloc() won't
> fail (until you hit the per-process limit of >1.5gb), your machine
> will just start swapping horribly. Actually perhaps some versions of
> linux will start refusing malloc() as a way to try to escape from
> swap death? But that's very extreme behaviour and certainly won't
> happen in any normal circumstances.

I saw linux kill a process once because the machine load average was
too high. In fact, I was wondering whether it pays to worry too much
about this situation, as it is something that should hardly ever
happen. The system administrators should be able to deal with it.

I will implement what is easier to do now, without considering these
extremes. If someone starts to use the program and reports this
problem, I will try to find a way to solve it.

Thanks for the comments.

Marcelo
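P.S. Since this is the GTK list: GLib already draws this distinction
for you. g_malloc() aborts the program when allocation fails, while
g_try_malloc() returns NULL and lets the caller recover, which suits
the "the outside problem may be temporary, retry later" case. A
minimal sketch (the 1 GB size is again just an illustration; build
with the flags from "pkg-config --cflags --libs glib-2.0"):

#include <glib.h>
#include <stdio.h>

int main(void)
{
    gsize n = (gsize) 1024 * 1024 * 1024;   /* deliberately large */

    /* g_try_malloc() returns NULL on failure instead of aborting,
     * so we can back off and retry once the neighbouring server has
     * finished its million transactions. */
    gpointer buf = g_try_malloc(n);
    if (buf == NULL) {
        fprintf(stderr, "allocation of %" G_GSIZE_FORMAT
                        " bytes failed; will retry later\n", n);
        return 1;
    }

    /* ... use buf ... */
    g_free(buf);
    return 0;
}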