Okay, now suppose that clamd works in a "complicated" way, so that
"the effect is that you don't *always* get back what you free() when you free()".
Do you have any suggestion as to how to get back the free()d memory?
Will (borrowing Apache's approach) a prefork-style daemon, with a limited lifetime
for each child, be better in terms of memory management than the current
thread implementation? Or would limiting the lifetime of each thread be sufficient?
From my experience with pthreads on Linux 2.4, pthreads was a royal pain. I initially used threads as the basis of a limited-lifetime model for my firewall design... I kept getting unusual and unpredictable segfaults. The process would run anywhere from 2 days to several months, then for no apparent reason segfault in a routine that had been tested a thousand times under high-stress conditions without failing.
I confirmed that pthreads leaks memory in the management thread when calls were made to usleep() (nanosleep -> _nanosleep). GDB turned up the error after two weeks of running the process in a debug state.
After moving to fork() and named pipes, the same code hasn't broken once in nearly a year of hard testing. My tests often included 10 or more ICMP floods of at least 65535 packets each; I drove the load average to 240 during the tests...
Now the forked process allocates and frees memory thousands of times per second with no issues...
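A minimal sketch of a fork()-plus-named-pipe worker along these lines (the FIFO path and the message format are invented for the example, not taken from the firewall code): the dispatcher feeds jobs down a FIFO to a forked worker, so the worker's allocations live in its own private address space.

/*
 * Illustrative fork() + named pipe (FIFO) worker.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/wait.h>

#define FIFO_PATH "/tmp/worker.fifo"    /* illustrative path */

static void worker(void)
{
    char line[256];
    FILE *in = fopen(FIFO_PATH, "r");   /* blocks until the dispatcher opens it */
    if (!in)
        _exit(1);

    /* each request is handled with plain malloc/free inside one private
       address space, so a leak or corruption here cannot touch the dispatcher */
    while (fgets(line, sizeof line, in)) {
        char *copy = strdup(line);
        /* ... real work on copy would go here ... */
        free(copy);
    }
    fclose(in);
    _exit(0);
}

int main(void)
{
    unlink(FIFO_PATH);
    if (mkfifo(FIFO_PATH, 0600) != 0) {
        perror("mkfifo");
        return 1;
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0)
        worker();

    /* dispatcher side: write jobs down the pipe, then close to signal EOF */
    FILE *out = fopen(FIFO_PATH, "w");
    if (!out)
        return 1;
    for (int i = 0; i < 3; i++)
        fprintf(out, "job %d\n", i);
    fclose(out);

    wait(NULL);
    unlink(FIFO_PATH);
    return 0;
}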
Pthreads work well for light-duty, non-daemon processes... If it's heavy duty, use fork().