> > We regularly send out newsletters to over 500,000 email addresses on
> > a weekly/fortnightly basis.

(snip)

> I can get about 50K emails per hour using 400 remotes.  That would take 10
> hours with your list.  The qmail queue size seems to stabilize between 10K
> and 15K during the run.  There are usually 300-400 remotes running.

  OK, first, please forgive me for jumping in.  I see this sort of question
occasionally, and it makes me wonder why so many steps were necessary.  At
one time, I had to send out just over a thousand messages to recipients
across the Internet, and ran the Perl script (calling qmail's sendmail
wrapper) on my lowly machine.  It was a Celeron 450 with a 512k connection,
an IDE drive, and concurrencyremote/concurrencylocal set to 100.  The Perl
script finished within 30-45 seconds (I don't recall the actual time), and
the queue had died down to ~10 undeliverables after just barely over a
minute.  That works out to roughly 50,000 per hour, without any real effort
to speed things up.
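
  For reference, the loop was not much more than the sketch below (a rough
reconstruction from memory rather than the original script; the list file
name, sender address, subject, and body are placeholders):

#!/usr/bin/perl
# Rough reconstruction of the injection loop: one message per recipient,
# piped into qmail's sendmail wrapper.  File name, sender, subject, and
# body are placeholders.
use strict;
use warnings;

my $sendmail = '/var/qmail/bin/sendmail';   # qmail's sendmail wrapper
my $from     = 'news@example.com';          # placeholder envelope sender

open my $list, '<', 'recipients.txt' or die "recipients.txt: $!";
while (my $rcpt = <$list>) {
    chomp $rcpt;
    next unless $rcpt =~ /\@/;              # skip blank or junk lines

    # -f sets the envelope sender; the recipient goes on the command line.
    open my $mail, '|-', $sendmail, '-f', $from, $rcpt
        or die "cannot run $sendmail: $!";
    print $mail "From: $from\n";
    print $mail "To: $rcpt\n";
    print $mail "Subject: newsletter\n\n";
    print $mail "(message body here)\n";
    close $mail or warn "injection for $rcpt failed\n";
}
close $list;

The wrapper just hands each message to qmail-inject, which queues it almost
instantly; the remotes and concurrencyremote do the real work afterwards.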

  So, it makes me wonder what the culprit is for people trying to send many
times that number of messages.  Anybody want to enlighten this poor soul?
It makes me wish I had a reason to send out that many again, so that I could
experiment a little.

> I am currently experimenting with bypassing qmail altogether.  Using shared
> memory I fork off 1000 child processes to chew on a list in parallel.  Each
> child will call qmail-inject only if its direct attempt fails.  My testing
> is still rough but it looks like I can get a 5 to 10 fold improvement.

   Nice!
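
   Just to make sure I follow, something along these lines?  This is only a
sketch of how I picture it: Net::SMTP for the direct attempt and
/var/qmail/bin/qmail-inject for the fallback are my own assumptions, the
child count is scaled way down, the MX lookup is skipped, and the file name
and addresses are placeholders.

#!/usr/bin/perl
# Sketch of the fork-and-fallback idea: split the list across a few
# children; each child tries a direct SMTP delivery and falls back to
# qmail-inject when that fails.
use strict;
use warnings;
use Net::SMTP;

my $children = 10;                           # far fewer than 1000, just for the sketch
my $inject   = '/var/qmail/bin/qmail-inject';
my $from     = 'news@example.com';           # placeholder sender

open my $fh, '<', 'recipients.txt' or die "recipients.txt: $!";
chomp(my @rcpts = <$fh>);
close $fh;

for my $slice (0 .. $children - 1) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    next if $pid;                            # parent keeps forking

    # Child: take every $children-th address, starting at offset $slice.
    for (my $i = $slice; $i < @rcpts; $i += $children) {
        my $rcpt = $rcpts[$i];
        deliver_direct($rcpt) or deliver_inject($rcpt);
    }
    exit 0;
}
1 while wait() != -1;                        # parent reaps all children

sub message {
    my ($rcpt) = @_;
    return "From: $from\nTo: $rcpt\nSubject: newsletter\n\n(body)\n";
}

sub deliver_direct {
    my ($rcpt) = @_;
    my ($domain) = $rcpt =~ /\@(.+)$/ or return 0;
    # Simplification: talk to the domain itself instead of doing a real MX lookup.
    my $smtp = Net::SMTP->new($domain, Timeout => 30) or return 0;
    my $ok = $smtp->mail($from) && $smtp->to($rcpt) && $smtp->data(message($rcpt));
    $smtp->quit;
    return $ok ? 1 : 0;
}

sub deliver_inject {
    my ($rcpt) = @_;
    open my $q, '|-', $inject, '-f', $from, $rcpt or return 0;
    print $q message($rcpt);
    return close $q;
}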

steve

