On Sat, 2007-02-17 at 02:07 +0100, Mark Martinec wrote:
> On Saturday February 17 2007 01:49, Matthew Wilson wrote:
> > I was/am primarily concerned with RAM usage for high-concurrency
> > situations.
> 
> Ok. Still, in my experience about 30 (maybe 50) SA processes can
> fully utilize today's CPU & I/O, and it's probably no big deal
> to provide about 2 GB of memory to cater for such a system.
> Also, and unfortunately, multithreading in Perl is rather
> cumbersome and not significantly less expensive than fully
> individual processes.

After experimenting with sa-blacklist.cf some time ago, 45 processes
brought my system to its knees at 3.5GB (out of memory).

I agree about the thread model.

But sticking to an async I/O model is a valid point.  If implemented
correctly, it will save a lot of memory and even improve performance a
little.
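
As a rough sketch of what I mean (not SpamAssassin code, just the
general shape of it), a single process can multiplex many client
connections with IO::Select from core Perl:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use IO::Socket::INET;
    use IO::Select;

    # One process handles every connection; no fork() per client.
    my $listener = IO::Socket::INET->new(
        LocalPort => 10025,   # made-up port, pick whatever fits
        Listen    => 128,
        Reuse     => 1,
    ) or die "listen: $!";

    my $sel = IO::Select->new($listener);

    while (my @ready = $sel->can_read) {
        for my $fh (@ready) {
            if ($fh == $listener) {
                $sel->add($listener->accept);   # new client
            } else {
                my $n = sysread($fh, my $buf, 4096);
                if ($n) {
                    # hand $buf over to the filtering code here
                } else {
                    $sel->remove($fh);          # client closed
                    close $fh;
                }
            }
        }
    }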

Having separate processes avoids the need to check for garbage after
filtering a message, which would otherwise require the code to be
rechecked.

However, for uniprocessor systems, having multiple processes running is
actually more expensive than an async I/O one.  For a multi-process
system, just keep one process per CPU or fewer.
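
On a single-CPU box that would mean something like the following
(assuming spamd here, with the child count adjusted to the machine):

    spamd -d --max-children=1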

In the past I have played a lot with perl-loop (any loopers around?),
which was the only way to go.  It is too low-level for most people, but
perhaps POE is the way to go today (it can use perl-loop as its
base).
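
For reference, a minimal POE session looks roughly like this (the
'scan_message' event is made up, just to show the shape):

    use strict;
    use warnings;
    use POE;

    # Event-driven skeleton: one process, states dispatched by the kernel.
    POE::Session->create(
        inline_states => {
            _start => sub {
                $_[KERNEL]->yield('scan_message');
            },
            scan_message => sub {
                # filtering work would be dispatched from here
                print "scanning...\n";
            },
        },
    );

    POE::Kernel->run();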

-Raul Dias
