>On 2021-03-02 16:50, Pedro David Marco wrote:
>>Tried both and with/without cache...

On 02.03.21 18:26, Benny Pedersen wrote:
>i think its a glibc problem, and if it is it could be solved with
>edns0 in local dns
>force tcp on packet size over 512 byte
better not. excessive use of TCP can be a problem.
On 02.03.21 15:50, Pedro David Marco wrote:
>Tried both and with/without cache...

disabling cache will make the problem worse.

However, the question was a bit different - whether you run your DNS server
locally. But it should not be forwarding for spam detection.
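As an illustration of "local, not forwarding": a minimal caching resolver that does its own recursion could look like this with unbound (unbound and the file path are just examples, not something from the thread):

# /etc/unbound/unbound.conf.d/local.conf
server:
    interface: 127.0.0.1
    access-control: 127.0.0.0/8 allow
    # no forward-zone section: queries are resolved directly against the
    # authoritative servers, which is Matus's point about not forwarding
    # for spam detection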
On 2021-03-02 16:50, Pedro David Marco wrote:
>Tried both and with/without cache...

i think its a glibc problem, and if it is it could be solved with edns0
in local dns

force tcp on packet size over 512 byte

https://bobcares.com/blog/bind-edns/ default edns0 is now 4096, but
sometimes its c
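If someone wants to try that at the glibc stub-resolver level, the relevant /etc/resolv.conf options are edns0 and use-vc; a sketch only, 127.0.0.1 assumes a local resolver, and whether this actually helps here is exactly what is being debated:

# /etc/resolv.conf
nameserver 127.0.0.1
# edns0 advertises a larger EDNS0 UDP buffer; use-vc (glibc >= 2.14)
# forces TCP for every lookup, which is the part Matus advises against
options edns0 use-vc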
I have set buffers to 20MB per core and results are great:

# sysctl -w net.core.rmem_default=20971520

0% packets lost... with the default value of ~200KB, packet loss went easily above 30%.
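To make that survive a reboot, a drop-in under /etc/sysctl.d is the usual way; a sketch only, the file name is arbitrary and the net.core.rmem_max line is an extra assumption on my part (it only caps buffers that programs request explicitly via SO_RCVBUF):

# cat /etc/sysctl.d/90-udp-buffers.conf
net.core.rmem_default = 20971520
net.core.rmem_max = 20971520

# sysctl --system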
You can check if you have this problem with:

# netstat -suna

and look for errors in the UDP area.
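For reference, the counters to watch are in the Udp: block of that output; illustrative only (numbers made up, exact wording differs a little between net-tools versions), "receive buffer errors" is the one that climbs when the kernel drops datagrams for lack of socket buffer:

# netstat -suna
Udp:
    123456789 packets received
    1234 packets to unknown port received
    98765 packet receive errors
    120000000 packets sent
    98765 receive buffer errors
    0 send buffer errors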
--Pedro.
Tried both and with/without cache...
Pedro...
On Tuesday, March 2, 2021, 04:46:08 PM GMT+1, Matus UHLAR - fantomas wrote:
On 02.03.21 15:26, Pedro David Marco wrote:
>Just in case someone has this issue...
>Short version:
>In heavy load environments, SA produces more UDP
On 02.03.21 15:26, Pedro David Marco wrote:
>Just in case someone has this issue...
>Short version:
>In heavy load environments, SA produces more UDP traffic (especially if answers
>are big, typically happens with TXT queries) than the Linux kernel can handle
>with default buffers (tested in Debian Buster)
On 2021-03-02 16:26, Pedro David Marco wrote:
>SOLVED!
>Correct Kernel UDP tuning solves the problem!

in verbose this is ?
Hi all,

Just in case someone has this issue...

Short version:
In heavy load environments, SA produces more UDP traffic (especially if answers
are big, typically happens with TXT queries) than the Linux kernel can handle
with default buffers (tested in Debian Buster), so many SA queries never get an
answer.
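For comparison, the kernel defaults can be read back before changing anything; the values below are what a stock Debian Buster install typically shows (illustrative, roughly the ~200KB figure mentioned above):

# sysctl net.core.rmem_default net.core.rmem_max
net.core.rmem_default = 212992
net.core.rmem_max = 212992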
When there are several hundreds of lookups, Askdns / Async abort many of them
randomly even when 100% of queries got an answer. I use a local dns cache but
every run of SA produces a different number of aborted remaining lookups.
If you dig manually from the command line any aborted query, the answer
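A way to do that manual check against the local resolver, as a sketch; example.org is a placeholder (real SA lookups go to DNSBL/URIBL zones), +bufsize sets the advertised EDNS0 UDP buffer and +tcp shows whether the same answer also arrives over TCP:

# dig +bufsize=4096 TXT example.org @127.0.0.1
# dig +tcp TXT example.org @127.0.0.1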