On 2012-10-26 06:48, Martin Pelikan wrote:
2012/10/25 Michel Blais <mic...@targointernet.com>:
Hi,
I'm trying to get fewer query timeouts from unbound (I see around 1 to 2%
of queries time out using the DNS Performance Test tool from Silverwolf
Software). I was looking at "Unbound: Howto Optimise" and wanted to try
the so-rcvbuf option, but enabling it causes an error on service start.
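For reference, the line in question looks something like this in
unbound.conf (the value here is just an example; from what I can tell,
asking for more than the kernel's socket buffer limit is what makes the
service fail to start):

server:
    # ask the kernel for a bigger UDP receive buffer so bursts of
    # queries are not dropped before unbound gets to read them
    so-rcvbuf: 4m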
Hi,
I just set up an unbound resolver too, so thanks for the hint :-)
Are you sure this particular benchmark simulates your real load (at
peaks, plus some)? Our resolver does 10 queries per second so far
without any impact I'm able to measure. We use Cacti for monitoring,
with the very templates that come bundled with unbound; they also show
you an estimate of the response times your clients have been getting
(and how many of them there are).
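If it helps: those Cacti templates pull their numbers from
unbound-control, so statistics and remote control need to be switched on
first (and unbound-control-setup run once to generate the keys). The
option names below are standard; the values are just what I'd guess at:

server:
    extended-statistics: yes       # extra counters the graphs can use
    statistics-cumulative: no      # counters reset on each read
remote-control:
    control-enable: yes
    control-interface: 127.0.0.1   # only reachable locally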
I use DNS Performance Test and it shows some answer timeouts against our
unbound server. I thought it might be a bug in the test software, so I
also tried it against both Google servers, OpenDNS and Videotron (one of
our fibre optic providers). For Google I had one lost answer out of 500
DNS requests, and none for OpenDNS or Videotron over about the same
number of requests. Against my unbound server I see around 5 to 8 request
timeouts for the same number of requests, so yes, I'm pretty sure there's
a performance problem with the resolver.
Is it a good idea to change this value? Any reason why it is limited
like this?
My guess would be that too much excess data sitting in the buffers will
hurt your latency: you have to wait until those 2 MB are transferred
before the data you are sending _right now_ gets on the wire. Added
latency is not the kind of thing your customers want from a DNS server.
If you really see so much DNS traffic that your box can't handle it and
has to drop queries because of it, please post roughly how much it is.
Theo answered off the mailing list:
Prepare for kernel deadlocks due to being out of memory.
Is there any other way to change the max chars in buffer size than
recompiling the kernel?
It's a constant that gets scattered across your kernel binary once it has
been compiled. No, there isn't.
access-control: 0.0.0.0/0 allow_snoop
Be sure to change this, or at least firewall your server :-)
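For example, something like this instead, so clients can still resolve
through you but can't snoop the cache with non-recursive queries (the
prefix below is only an example):

server:
    access-control: 0.0.0.0/0 allow
    # or, tighter, list only your own customer prefixes:
    # access-control: 192.0.2.0/24 allow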
It's behind PF :-) but we must leave the DNS resolver open to everyone.
Since we are a small ISP, we also receive reverse DNS queries that the
unbound server answers instead of NSD. I could have used 2 different
units, one for unbound plus one for NSD, but with CARP for high
availability, and since CARP and virtualisation don't work together from
what I've read, that would mean using 4 different units. So instead we
added NSD on the same box, listening on another port, and we use a
stub-zone so unbound queries NSD for the reverse DNS of our addresses.
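For what it's worth, that part of the config looks roughly like this (the
zone name and port are examples, not our real ones):

server:
    do-not-query-localhost: no        # otherwise unbound refuses to ask 127.0.0.1
stub-zone:
    name: "2.0.192.in-addr.arpa."     # one of our reverse zones
    stub-addr: 127.0.0.1@5353         # NSD listening on another port on the same box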
cache-min-ttl: 3600
cache-max-ttl: 86400
prefetch: yes
num-threads: 4
outgoing-range: 8192
num-queries-per-thread: 2048
msg-cache-slabs: 8
rrset-cache-slabs: 8
infra-cache-slabs: 8
key-cache-slabs: 8
rrset-cache-size: 256m
msg-cache-size: 128m
Why are you twiddling these? Did you see any errors/warnings using
your default setup?
No, I didn't get any errors with the default values. I just reused the
config from another test server.
From the unbound "Howto Optimise" doc:
number of threads = 4 (2 cores + hyperthreading in my case).
For outgoing-range, the howto has 8192; I tried that one, but I will try
the OpenBSD unbound.conf value of 4096 instead, with the number of
queries per thread at 1024.
The four *-cache-slabs values should be twice the number of threads (also
from the Unbound "Howto Optimise" doc); something like the sketch below.
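So the values I plan to try next look roughly like this (just a sketch of
what is described above, not a tested config):

server:
    num-threads: 4                # 2 cores + hyperthreading
    outgoing-range: 4096          # OpenBSD unbound.conf value instead of the howto's 8192
    num-queries-per-thread: 1024
    msg-cache-slabs: 8            # slabs = 2 x num-threads
    rrset-cache-slabs: 8
    infra-cache-slabs: 8
    key-cache-slabs: 8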
The cache sizes come from the FreeBSD unbound.conf sample. We had an
unused FreeBSD box that we first tried unbound on, but since we got a lot
of errors in the log for some unknown reason that the unbound developers
were not able to reproduce, I tried it on OpenBSD instead, since I know I
have read on the mailing list that some people are running it there.
From the sample:
The test box has 4 GB of RAM, so 256 MB for rrset allows a lot of room
for cached objects.
Since we also have 4 GB of RAM and the box will only be used for DNS, we
tried this value.
forward-zone:
name: "."
forward-addr: 8.8.8.8 # Google Public DNS
forward-addr: 8.8.4.4 # Google Public DNS
If you have a big network, I would guess you're better off doing the
lookups yourself. The latency of these hasn't been the best (from what
I've observed; I might be biased, of course).
They're great for admins in need, but I wouldn't use them in production.
I will try this then. I thought those would give better latency than the
root servers, but I will try with the root servers instead.
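If I understand correctly, that just means dropping the forward-zone for
"." so unbound iterates from the root servers itself, optionally with an
explicit hints file, something like this (the path is only an example,
wherever fits the chroot):

server:
    # optional: unbound has built-in root hints, a file just lets you keep them current
    root-hints: "/var/unbound/etc/root.hints"

# and remove the forward-zone for "." shown above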
Thanks
Michel