Chris Angelico <ros...@gmail.com> writes:
> ...
> Right. Everything needs to be scaled. Everything needs to be in
> perspective. Losing 1 kilobit per day is indeed trivial; even losing
> one kilobyte per day, which is what I assume you meant :), isn't
> significant. But it's not usually per day, it's per leaking action.
> Suppose your web browser leaks 1024 usable bytes of RAM every HTTP
> request. Do you know how much that'll waste per day? CAN you know?
What I suggested to the original poster was that *he* should check
whether *his* server leaks a really significant amount of memory -- and
attempt a (difficult) memory leak analysis only in that case. If he can
restart his server periodically, that may make the analysis unnecessary.
I also reported that I have undertaken such analyses several times, and
what helped me in those cases.

I know by experience how difficult such analyses are, and there have
been cases where I failed despite much effort: the systems I work with
are huge, consisting of thousands of components, developed by various
independent groups, using different languages (Python, C, Java); each
of those components may leak memory, and most of them are "foreign" to
me.

Surely you understand that in such a context a server restart during
the night on a weekend (causing a service disruption of a few seconds)
is an attractive alternative to trying to locate the leaks. Things
would change drastically if the leak were big enough to force a restart
every few hours -- but big leaks are *much* easier to detect and locate
than small leaks.
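For the first step, checking whether the leak is significant at all, a
minimal sketch could look like the following: sample the process's
resident set size at a fixed interval and append it to a log file;
after a few days the trend tells you whether the leak matters at all.
It is Linux-specific (it reads /proc/self/status), and the function
names, interval and file name are my own choices, not anything the
original poster must use:

    # Sketch: periodically log the process's RSS (Linux only).
    import re
    import time

    def rss_kib():
        """Return the current resident set size in KiB."""
        with open("/proc/self/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(re.search(r"\d+", line).group())
        raise RuntimeError("VmRSS not found")

    def log_rss_forever(interval=3600, logfile="rss.log"):
        """Append a timestamped RSS sample every `interval` seconds."""
        while True:
            with open(logfile, "a") as f:
                f.write("%s %d KiB\n"
                        % (time.strftime("%Y-%m-%d %H:%M:%S"), rss_kib()))
            time.sleep(interval)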
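And should the analysis itself become necessary, one generic technique
(a sketch only, and not necessarily what helped in my cases above) is
to count the live Python objects by type, exercise the suspected code
path, and diff two such snapshots: a type whose instance count grows
steadily is a good suspect. Note that this sees only Python objects; it
cannot find memory leaked inside C components. Again, the names here
are my own:

    # Sketch: locate a pure-Python leak by diffing object-type counts.
    import gc
    from collections import Counter

    def type_counts():
        """Count reachable Python objects, grouped by type name."""
        gc.collect()  # drop collectable garbage so only live objects count
        return Counter(type(o).__name__ for o in gc.get_objects())

    def diff_counts(before, after, top=20):
        """Print the types whose instance count grew the most."""
        growth = Counter(after)
        growth.subtract(before)
        for name, delta in growth.most_common(top):
            if delta > 0:
                print("%-30s +%d" % (name, delta))

    # Usage: snapshot, serve some requests, then diff:
    #   before = type_counts()
    #   ...handle traffic...
    #   diff_counts(before, type_counts())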