On Fri, Jun 04, 1999 at 03:20:36PM -0700, Mylo wrote:
> In fact, this whole machine is dedicated to mass mailing... But we obviously
> can't dump 2M+ files into the queue's before we start qmail-send.
This isn't obvious to me. Why not?
> We need
> some way of piping them in at just about the same rate they can go out. It's
> okay to be a little faster as they will just sit in the queue until there's
> time to send 'em out.
Actually, no, it isn't. That will bottleneck qmail-send, since you're
talking about queueing each recipient separately.
qmail can send out preprocessed messages in the queue very very quickly.
qmail can add messages to the queue very very quickly.
slice from internals:
qmail-queue --- qmail-send --- qmail-rspawn
I guess some explanation about the qmail queue is necessary. qmail-queue
writes the files necessary to get the message into the "queued" state.
qmail-send decides whether the deliveries will be local/remote, and
puts the message in a state called "preprocessed." qmail-rspawn and
qmail-lspawn are triggered to work on messages which are preprocessed.
qmail-send is a single process. qmail-queue can have multiple
instantiations, and so has the potential to queue more messages than
qmail-send can immediately preprocess.
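A rough sketch of that rate mismatch, as a toy model in Python (all the
rates are made-up illustration numbers, not qmail benchmarks): many
concurrent qmail-queue writers feed a single qmail-send preprocessor, so
the count of messages stuck in the "queued" state grows without bound.

```python
# Toy model of the qmail queue stages described above.
# The rates are hypothetical, chosen only to show the shape of the problem.

def backlog_after(seconds, injectors=10, inject_rate=50, preprocess_rate=200):
    """Messages stuck in the 'queued' state (written by qmail-queue,
    not yet preprocessed by qmail-send) after `seconds` seconds.

    injectors       -- concurrent qmail-queue processes
    inject_rate     -- msgs/sec each injector can queue
    preprocess_rate -- msgs/sec the single qmail-send can preprocess
    """
    queued = seconds * injectors * inject_rate   # parallel writers
    preprocessed = seconds * preprocess_rate     # single reader
    return max(0, queued - preprocessed)

# Ten injectors outpace one qmail-send; the backlog grows linearly:
# after 60s, 60*10*50 - 60*200 = 18000 messages are queued but not
# yet preprocessed.
print(backlog_after(60))
```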
What I'm not expressing very well is this: using qmail-inject (a frontend
for qmail-queue) on 2M+ unique messages and recipients, you can potentially
queue messages faster than qmail can preprocess them and create the queue
entries. When qmail gets into this state, its performance will degrade
massively. I guess the question becomes:
how fast will you spawn qmail-queue processes? Fast enough to outpace
qmail-send's preprocessing? You'll probably want to get the messages
into the queue as fast as possible, right? Or does your mailout process
hang around for hours and hours? Is it structured to maintain state
of its processing in the case of a failure?
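One way a mailout process could answer those questions is to pace its
injections and checkpoint its progress so a crash can resume where it left
off. A minimal sketch, assuming a hypothetical state file and a
per-recipient qmail-inject call (the file name, function names, and rate
are my illustration, not anything qmail provides):

```python
import os
import time

CHECKPOINT = "mailout.state"  # hypothetical progress file, not a qmail convention

def inject_paced(recipients, rate_per_sec=100, dry_run=True):
    """Inject one message per recipient at roughly rate_per_sec,
    resuming from the last checkpoint after a failure.
    Returns how many recipients were handled on this run."""
    start = 0
    if os.path.exists(CHECKPOINT):
        start = int(open(CHECKPOINT).read())   # resume after a crash
    for i in range(start, len(recipients)):
        if not dry_run:
            # A real run would pipe the message body to qmail-inject here,
            # e.g. via subprocess.run(["qmail-inject", recipients[i]], ...).
            pass
        with open(CHECKPOINT, "w") as f:       # record progress for restart
            f.write(str(i + 1))
        time.sleep(1.0 / rate_per_sec)         # pace injection to the send rate
    return len(recipients) - start
```

The sleep keeps injection near the rate qmail-send can actually drain, and
the checkpoint means a failure costs you at most one duplicate, not a
restart from zero.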
--
John White    johnjohn at triceratops.com
PGP Public Key: http://www.triceratops.com/john/public-key.pgp