On 01/25/2018 05:58 PM, Viktor Dukhovni wrote:
> This is not good advice; it breaks delivery to other domains.  Much better
> to run a local caching resolver.  Note also that the OP reports that raising
> concurrency does not improve throughput by much.  If DNS lookups were slow,
> higher concurrency would lead to a significant throughput increase.

+1

In the dim, dark past, when I was the mail administrator for a hosting company, I configured a Postfix instance (bare metal, not a VM) that smart-hosted (I'm guessing) 40-50 qmail and Exim instances running on web control panel systems. The outgoing mail volume was on the order of tens of thousands of messages per hour. (That server did per-domain throttling for the major mail services, to avoid being nailed by the traffic monitors on those services.) At peak outgoing load, it still loafed.
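
For anyone wanting the same per-domain throttling, here is a minimal sketch of the usual Postfix approach, a dedicated "slow" transport; the transport name, the domains, and the limits here are illustrative placeholders, not the values I used:

    # /etc/postfix/master.cf -- clone the smtp delivery agent as a throttled transport
    slow      unix  -       -       n       -       -       smtp

    # /etc/postfix/main.cf -- route selected domains through it and cap its output
    transport_maps = hash:/etc/postfix/transport
    slow_destination_concurrency_limit = 2
    slow_destination_rate_delay = 1s

    # /etc/postfix/transport (run "postmap /etc/postfix/transport" after editing)
    bigmailer.example.com    slow:
    freemail.example.net     slow: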

On that outbound MX server, I configured a local caching DNS server. The key to success was turning the size of the memory cache up, and up, and up; that limited the number of recursive look-ups that had to go off-system.
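
A local caching resolver such as Unbound makes that easy today; here is a minimal sketch, with the caveat that Unbound is just my example (the original box ran whatever caching daemon was current at the time) and the cache sizes are placeholders to be tuned, not the sizes I used:

    # /etc/unbound/unbound.conf -- local caching resolver, listening on loopback only
    server:
        interface: 127.0.0.1
        access-control: 127.0.0.0/8 allow
        # oversize the caches so hot records stay resident between look-ups
        msg-cache-size: 256m
        rrset-cache-size: 512m

    # then point the MTA's resolver at it:
    #     echo "nameserver 127.0.0.1" > /etc/resolv.conf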

For the incoming MX (on a separate box) I did something similar: yet another local caching DNS server, to absorb the resolver traffic from Postfix, DNSBL look-ups, and SpamAssassin.
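
The DNSBL look-ups in particular benefit from a warm local cache. For completeness, a sketch of what that side looks like in Postfix; the restriction list is generic and the listed zone is only an example, substitute whichever DNSBLs you actually use:

    # /etc/postfix/main.cf on the inbound MX
    smtpd_recipient_restrictions =
        permit_mynetworks,
        reject_unauth_destination,
        reject_rbl_client zen.spamhaus.org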

The reason I don't recall the actual size of the DNS cache is that I "tuned" the size of each cache until the amount of outbound query traffic was acceptable to me. Neither box had a minimum TTL (hold time) configured, so the cache didn't do all that much for domains with short TTLs (~300 seconds or less), but those were a small percentage of the look-ups that were cached.
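
If you do want a minimum TTL floor (again, I did not set one, and it deliberately overrides the TTLs that domain owners publish), Unbound has a knob for it; purely illustrative:

    # /etc/unbound/unbound.conf
    server:
        cache-min-ttl: 300    # never expire a cached record in under five minutes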

N.B.: Before doing the smarthost consolidation, my main DNS servers were running at red-line.
