So, a different tack today, namely monitoring '/proc/net/softnet_stat' to
try to reduce potential errors on the interface.
End result: 517k qps.
Final changes for the day:
sysctl -w net.core.netdev_max_backlog=32768
sysctl -w net.core.netdev_budget=2700
/root/nic_balance.sh em1 0 2
netdev_max
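A minimal sketch of watching those counters, assuming the usual
/proc/net/softnet_stat layout of one hex row per CPU:

# Column 2 = packets dropped because netdev_max_backlog was exceeded,
# column 3 = softirq runs cut short by netdev_budget ("time squeeze").
# Watch the deltas; movement in either column means the backlog or
# budget is still too small for the offered load.
watch -d 'cat /proc/net/softnet_stat'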
Ugh, let me try that again (apologies if you got the half-composed version).
> The lab uses Dell R430s running Fedora Core 23 with Intel X710 10GB NICs
> and each populated with a single Xeon E5-2680 v3 2.5 GHz 12-core CPU.
R630 chassis I believe, same NICs, smaller processor (E5-2650 v4 @ 2.2 GHz
It's been some years now, but I worked on developing code for a
high-throughput network server (not BIND). We found that on multi-socketed
NUMA machines we could have similar contention problems, and it was
quite important to make sure that threads which needed access to the
same memory areas w
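A rough sketch of that idea applied here, assuming node 0 and a 12-core
socket (the node number and thread count are only placeholders):

# Keep named's worker threads and their allocations on one NUMA node
# so the hot data structures stay in local memory.
numactl --cpunodebind=0 --membind=0 /usr/sbin/named -u named -n 12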
On 01/06/2017 23:26, Mathew Ian Eis wrote:
> … and for one last really crazy idea, you could try running a pair of
> named instances on the machine and fronting them with nginx’s
> supposedly scalable UDP load balancer. (As long as you don’t get a
> performance hit, it also opens up other interest
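A rough sketch of what that front end might look like, assuming two named
instances listening on arbitrary local ports 5301 and 5302:

stream {
    upstream named_workers {
        server 127.0.0.1:5301;
        server 127.0.0.1:5302;
    }
    server {
        listen 53 udp reuseport;
        proxy_pass named_workers;
        proxy_responses 1;   # one DNS reply expected per query
    }
}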
On 02/06/2017 08:12, Browne, Stuart wrote:
> Query rate thus far reached (on 24 cores, numa node restricted): 426k qps
> Query rate thus far reached (on 48 cores, numa nodes unrestricted): 321k qps
In our internal Performance Lab I've achieved nearly 900 kqps on small
authoritative zones when we
On 02/06/17 08:12, Browne, Stuart wrote:
> Just some interesting investigation results. One of the URLs Mathew
> Ian Eis linked to talked about using a tool called 'perf'. For the
> hell of it, I gave it a shot.
perf is super-powerful.
On a sufficiently recent kernel you can also do interesting th
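For a live view, something along these lines (a sketch, assuming a single
named process):

# Call-graph view of the hottest symbols in named, refreshed live.
perf top -g -p $(pidof named)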
Just some interesting investigation results. One of the URLs Mathew Ian Eis
linked to talked about using a tool called 'perf'. For the hell of it, I gave
it a shot.
Sure enough it tells some very interesting things.
When BIND was restricted to using a single NUMA node, the biggest call (to
_
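The sort of invocation that produces this kind of breakdown, as a sketch
(the 30-second sample window is arbitrary):

# Sample named with call graphs for 30 seconds, then summarise.
perf record -g -p $(pidof named) -- sleep 30
perf report --stdio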
> -Original Message-
> From: Plhu [mailto:p...@seznam.cz]
> a few simple ideas for your tests:
> - have you inspected the per-thread CPU? Aren't some of the threads
> overloaded?
I've tested both the auto-calculated values (one thread per available core) and
explicitly overridden this
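A quick sketch of how the per-thread question can be answered, assuming
sysstat's pidstat is available:

# Per-thread CPU usage for named, one-second intervals.
pidstat -t -p $(pidof named) 1
# or interactively:
top -H -p $(pidof named)
# Explicit worker-thread override instead of the auto-calculated value:
named -u named -n 24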
> -Original Message-
> From: Mathew Ian Eis [mailto:mathew@nau.edu]
>
>
> Basically the math here is “large enough that you can queue up the
> 9X.XXXth percentile of traffic bursts without dropping them, but not so
> large that you waste processing time fiddling with the queue”. Sin
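As a rough worked example of that trade-off: at ~500 k packets/s, a
netdev_max_backlog of 32768 packets corresponds to roughly
32768 / 500000 ≈ 65 ms of arrivals, so bursts shorter than that survive a
stalled poll loop, while a much larger queue mostly just adds latency.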
June 1, 2017 at 12:27 AM
To: Mathew Ian Eis, "bind-users@lists.isc.org"
Subject: RE: Tuning suggestions for high-core-count Linux servers
Cheers Mathew.
1) Not seeing that error, seeing this one instead:
01-Jun-2017 01:46:27.952 client: warning: client 192.168.0.23#
Hello Stuart,
a few simple ideas for your tests:
- have you inspected the per-thread CPU? Aren't some of the threads overloaded?
- have you tried to get the statistics from the BIND server using the
XML or JSON interface? It may give you another insight into the errors
(see the sketch after this list).
- I may have missed the
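On the second point, a minimal named.conf sketch of the statistics channel
and how to pull the JSON view (address and port here are just placeholders):

statistics-channels {
    inet 127.0.0.1 port 8053 allow { 127.0.0.1; };
};

# then query it with:
curl http://127.0.0.1:8053/json/v1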
From: Mathew Ian Eis [mailto:mathew@nau.edu]
Sent: Thursday, 1 June 2017 10:30 AM
To: bind-users@lists.isc.org
Cc: Browne, Stuart
Subject: [EXTERNAL] Re: Tuning suggestions for high-core-count Linux servers
360k qps is actually quite good… the best I have heard of until now on EL was
180k [1]. Th
-Original Message-
From: bind-users on behalf of "Browne, Stuart"
Date: Wednesday, May 31, 2017 at 12:25 AM
To: "bind-users@lists.isc.org"
Subject: Tuning suggestions for high-core-count Linux servers
Hi,
I've been able to get my hands on some rat
On 31.05.2017 at 14:42, MURTARI, JOHN wrote:
Stuart, you didn't mention what OS you are using
Subject: RE: Tuning suggestions for high-core-count Linux servers
Best regards!
John
--
Message: 4
Date: Wed, 31 May 2017 07:25:44 +
From: "Browne, Stuart"
To: "bind-users@lists.isc.org"
Subject: Tuning suggestions for high-core-count Linux servers
Message-ID:
<07ef8b18a5248a4691e86a8e16bdbd87013bd...@stntexmb11.cis
Hi,
I've been able to get my hands on some rather nice servers with 2 x 12-core
Intel CPUs and was wondering if anybody had any decent tuning tips to get BIND
to respond at a faster rate.
I'm seeing that adding CPUs beyond a single die gets pretty much no real
improvement. I understand