On 07/06/2010 11:30 AM, Victor Duchovni wrote:
> No, disabling the cache will still leave a skewed distribution. Connection
> creation is uniform across the servers, but connection lifetime is much
> longer on the slow server, so its connection concurrency is much higher
> (potentially equal to the destination concurrency limit under suitable
> conditions, thus keeping the fast servers essentially idle).
>
> A time-based cache is the fairness mechanism that keeps connection
> lifetimes uniform across the servers, which ensures non-starvation
> of fast servers, and avoids further overload of (congested) slow servers.
I see.
I realize that email delivery is not a trivial problem, but it seems
baffling that a seemingly simple task ("fair" volume-based load
balancing between transports) is so hard to achieve.
A very dumb algorithm should accomplish it: single-threaded delivery (no
concurrency), a "voluntary" (sender-side) limit of N messages delivered
per connection, then reconnect. DNS randomization should then do the
trick. If the network and the servers are fast (and in my case they are),
this shouldn't slow delivery down too much (in fact, a small speed
decrease might even be beneficial).
I think I know how to eliminate concurrency, but I'm lacking a
volume-based limit for the connections.
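For the concurrency half, something along these lines should work: a
dedicated transport pinned to a single delivery process. This is a rough,
untested sketch; "singlesmtp", example.com and the map path are just
placeholders:

  # master.cf: dedicated SMTP transport; the maxproc column is 1, so at
  # most one smtp(8) delivery process (hence one connection) runs at a time
  singlesmtp unix -       -       n       -       1       smtp

  # main.cf: also cap per-destination concurrency on that transport at 1,
  # and route the relevant destination(s) over it
  singlesmtp_destination_concurrency_limit = 1
  transport_maps = hash:/etc/postfix/transport

  # /etc/postfix/transport (run "postmap /etc/postfix/transport" after editing)
  example.com    singlesmtp:

What's still missing is the volume part: as far as I can tell, the reuse
knobs (smtp_connection_reuse_time_limit, smtp_connection_cache_time_limit)
are time-based, not message-count-based.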
I'll keep looking for a solution.
--
Florin Andrei
http://florin.myip.org/