The parameter that is glaringly missing from your list is “recursive-clients”. Do you have that set at its default value (1000), or have you bumped it up higher? Since you say this happens at “peak hours”, recursive-clients is the prime suspect, since it governs how many *simultaneous* recursive requests can be handled. Note that you should see some indication in your logs if you’re running into the recursive-clients limit. You can also see the current number of recursive clients, in real time, in the “rndc status” output; check whether you’re at or close to your limit.
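As a quick sketch, something like the following pulls that figure out of “rndc status”. The “873/900/1000” line is a made-up sample (in the current/soft-limit/hard-limit format recent BIND versions print) so the snippet runs even where rndc isn’t available; on your server the real command output is used:

```shell
# Grab the "recursive clients" line from rndc status; fall back to a
# fabricated sample line if rndc isn't present on this machine.
status_line=$(rndc status 2>/dev/null | grep 'recursive clients' \
  || echo "recursive clients: 873/900/1000")

# Third field is current/soft-limit/hard-limit; if the first number is
# regularly near the last one at peak, you've found your bottleneck.
echo "$status_line" | awk '{ print $3 }'
```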
If that doesn’t pan out, I’d also look at this from the networking standpoint. Are the servers that are experiencing high response times on congested links? High-latency links? Were there anomalies at the time of the high response times, e.g. high packet loss? Ultimately, you might have to mimic resolution of some of the “slow” queries using a command-line tool like “dig” (with recursion turned off; the +trace option may be useful here) and/or take packet captures to identify the source of the slowness. It’s quite possible that your “peak hours” are peak hours for your Internet providers as well, so the network characteristics of your connections may be less than acceptable at those times.

- Kevin

From: bind-users-boun...@lists.isc.org [mailto:bind-users-boun...@lists.isc.org] On Behalf Of alaa m zidan
Sent: Monday, January 26, 2015 3:26 PM
To: bind-users@lists.isc.org
Subject: BIND response time is relatively high

Hi,

I noticed that at peak hours, BIND response time is relatively high for some servers; a non-cached query takes over 700 ms. I set some kernel parameters to tune the network and sockets on Red Hat 6, and set some global options to tune BIND by modifying the cache settings, but the cache never fills up to the limit I set, and I don’t get better performance either.
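To make that concrete, a minimal sketch of timing one query from the command line (example.com is a placeholder — substitute one of your actual slow names; the canned “712 msec” line is a made-up fallback so the snippet runs even where dig is unavailable):

```shell
# Time a single resolution attempt and extract dig's own timing line.
# For deeper digging, "dig +trace example.com A" walks the delegation
# chain from the root and shows which hop is slow.
result=$(dig +tries=1 +time=5 example.com A 2>/dev/null | grep 'Query time' \
  || echo ";; Query time: 712 msec")

# Print just the measured time, e.g. "712 msec".
echo "$result" | awk -F': ' '{ print $2 }'
```

Run it several times at peak and off-peak; a large gap points at the network rather than at BIND itself.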
Some settings are as below:

kernel sysctl:
============
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_max_orphans = 400000
net.core.somaxconn = 4096
net.ipv4.tcp_rmem = 4096 87380 33554432
net.ipv4.tcp_wmem = 4096 65536 33554432

bind named.conf global options:
=========================
cleaning-interval 1440;
max-cache-ttl 2419200;
max-ncache-ttl 86400;
max-cache-size 5120m;

server specs:
===========
memory is 8GB

Memory usage never exceeds 20%, about 1.7 GB, while the cache is limited to 5 GB as shown in the settings above; I would actually be happier if I saw memory utilization spike to the sky. Could you please suggest something?

Thanks
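If recursive-clients does turn out to be the bottleneck, raising it is a one-line addition to the global options block. A sketch (1000 is the compiled-in default; 4096 below is only an illustrative value — size it to the peak you actually observe, and note each pending client consumes memory):

```
options {
        // Allow more simultaneous outstanding recursive lookups.
        // Default is 1000; 4096 is an example value, not a recommendation.
        recursive-clients 4096;
};
```

Reload with “rndc reconfig” and watch the “recursive clients” line in “rndc status” through the next peak.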
_______________________________________________
bind-users mailing list
bind-users@lists.isc.org
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe from this list