Joe Maimon wrote:

Since when was this decided to be a good idea? Suppose I am running clamd under ulimit to control its memory usage. I don't want it to die on out-of-memory conditions caused by scan jobs, making it unavailable for jobs that wouldn't cause OOM and terminating all the other scanning jobs. I want it to recover gracefully and continue scanning other jobs. Assuming there is no memory leak, recovering gracefully means possibly aborting the current scanning job (or declining to expand the current file format, or a similar step) while leaving all the other jobs and threads alone.
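To make "recover gracefully" concrete: a minimal sketch of what that could look like inside a scanning thread. This is not clamd's actual code; scan_file() and its arguments are hypothetical. The point is simply that a checked allocation lets one job fail without taking the daemon down:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical per-job scan step: try to allocate a decompression
     * buffer; on failure, give up on this one job instead of letting
     * the whole daemon die. */
    static int scan_file(const char *path, size_t need)
    {
        unsigned char *buf = malloc(need);
        if (buf == NULL) {
            fprintf(stderr, "%s: cannot allocate %zu bytes, "
                            "aborting this scan job only\n", path, need);
            return -1;                /* other threads keep scanning */
        }
        memset(buf, 0, need);         /* ... decompress and scan ... */
        free(buf);
        return 0;
    }

    int main(void)
    {
        return scan_file("sample.zip", 64UL * 1024 * 1024) ? 1 : 0;
    }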


Maybe I'm naive, but I would have thought there should be a "worst case" scenario, i.e. if you were scanning some large directory full of zipped 50 MB PowerPoint documents (or something), then clamd would need to grow to (say) 70 MB to get the job done.


But it should never grow to 90 MB.

So setting clamd's memory limit to 90 MB should only cause an OOM error when a memory leak exists.

If you can't predict the maximum amount of memory clamd requires, then it might grow to 50,000 GB on a bad day, which means your OS crashes (however you want to define that), and that is *definitely* worse than an OOM error...
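For what it's worth, the same cap ulimit imposes from the shell can be set from inside a process with setrlimit(). A sketch, assuming the 90 MB figure from above (the limit value and the oversized test allocation are illustrative, not anything clamd actually does):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Roughly what "ulimit -v 92160" (90 MB, in KB) does before
         * starting a daemon: cap the address space so a runaway
         * allocation fails with ENOMEM instead of dragging the whole
         * OS down. */
        struct rlimit rl = { 90UL * 1024 * 1024, 90UL * 1024 * 1024 };
        if (setrlimit(RLIMIT_AS, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }

        /* This allocation exceeds the cap, so malloc() returns NULL
         * rather than crashing the machine. */
        void *p = malloc(200UL * 1024 * 1024);
        printf("200 MB malloc %s\n", p ? "succeeded" : "failed (ENOMEM)");
        free(p);
        return 0;
    }

A process that checks its allocations, as in the earlier sketch, can live happily under such a limit; one that assumes malloc() never fails is the one that dies.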

Setting ulimit before starting daemons has saved my OS time and time again over the years with other daemon services (I had a classic case with Squid, but I digress), so wanting to use it with clamd isn't as far out as you think.

Jason