On Tue, Jan 08, 2013 at 02:39:17PM -0500, Wietse Venema wrote:

> Viktor Dukhovni:
> > On Tue, Jan 08, 2013 at 01:08:21PM -0500, Wietse Venema wrote:
> > 
> > > I could add an option to treat this in the same manner as "failure
> > > to connect" errors (i.e. temporarily skip all further delivery to
> > > this site). However, this must not be the default strategy, because
> > > this would hurt the far majority of Postfix sites which is not a
> > > bulk email sender.
> > 
> > Such a feedback mechanism is a sure-fire recipe for congestive
> > collapse:
> 
> That depends on their average mail input rate. As long as they can
> push out the mail from one input burst before the next input burst
> happens, then it may be OK that the output flow stutters sometimes.

This is most unlikely. The number of messages that get through
before the remote side clamps down is likely small, so the effective
throughput per throttle interval will be very low.

If Postfix backs off initially for 5 minutes, it will fully drain
the active queue to the deferred queue, then get a handful of
messages through, then back off for 10 minutes (doubling each time
up to the maximal_backoff_time). This won't push out 50k messages/day.

The optimal strategy is to send each message as quickly as possible,
but not faster than the remote rate limit, i.e. tune the rate delay.
Perhaps we need to measure the rate delay in tenths of a second for
a bit more flexibility.
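A quick back-of-the-envelope check shows why sub-second granularity
matters: a fixed per-message rate delay caps throughput at 1/delay
messages per second per destination. The helper below is hypothetical,
not a Postfix interface.

```python
# Throughput ceiling implied by a fixed per-message rate delay.
# Hypothetical helper, just to show the effect of delay granularity.

def daily_ceiling(rate_delay_s):
    """Max messages/day to one destination at the given rate delay."""
    return int(86400 / rate_delay_s)

for delay in (1.0, 0.5, 0.1):
    print(f"{delay:>4}s -> {daily_ceiling(delay):>7} msgs/day")
```

With whole-second granularity the smallest nonzero setting already
caps a destination at 86400 messages/day; tenths of a second allow
tuning anywhere up to ten times that.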

One can imagine adding a feedback mechanism to the rate delay (with
fractional positive/negative feedback), but getting a stable
algorithm out of this is far from easy.
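One shape such a feedback loop could take is the AIMD pattern from TCP
congestion control, inverted for a delay: multiply the delay up when
throttled, creep it back down additively otherwise. This is a sketch
of the idea, not a Postfix feature; the function name and the gain
constants are invented, and picking gains that are actually stable is
exactly the hard part.

```python
# Sketch of an AIMD-style controller for a per-destination rate
# delay.  NOT a Postfix feature -- an illustration of fractional
# positive/negative feedback.  The gains (2.0, 0.05) are arbitrary.

def adjust_delay(delay, throttled, floor=0.1, ceiling=60.0):
    """Return a new rate delay (seconds) after one delivery attempt.

    throttled: True if the remote site just signaled "slow down".
    """
    if throttled:
        delay *= 2.0    # back off multiplicatively on a throttle signal
    else:
        delay -= 0.05   # otherwise creep back down additively
    return max(floor, min(delay, ceiling))
```

Without a clean "slow down" signal from the remote side, the
`throttled` input itself is guesswork, which is one reason a stable
algorithm is far from easy.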

Throttling the active queue is not an answer. With rate limits, one
wants to slow down, not stop, but throttling is not "slowing down".

Barring a clean "slow down" signal, and a stable feedback mechanism,
the only strategy is manually tuned rate delays, and spreading the
load over multiple sending IPs (Postfix instances don't help if
they share a single IP).

-- 
        Viktor.