CPU affinity? If you have a network card with many tx/rx queues (a modern
PCI-E card can use MSI-X and 'software irq'), you can bind the different
card queues/IRQs to some cores and the ntpd process to another core. On BSD
we use cpuset to spread and bind threads across cores.
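For illustration, here is a rough sketch (untested, Linux-only; the core
number is just an example) of pinning a process to one core from C, which
is the same thing 'taskset -cp <core> <pid>' does from the shell, or
cpuset(1) on FreeBSD:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(3, &set);       /* example: reserve core 3 for this process */

        /* pid 0 means "the calling process" */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return EXIT_FAILURE;
        }
        printf("pinned to core 3\n");
        return EXIT_SUCCESS;
    }

In practice you would normally just run taskset (or cpuset on BSD) against
the already running ntpd pid rather than patching any code.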
On Linux, see the set_irq_affinity.sh script from the Intel drivers
(https://gist.github.com/SaveTheRbtz/8875474) and the other scripts shipped
with the drivers at https://sourceforge.net/projects/e1000/files/
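In essence that script just writes CPU bitmasks into
/proc/irq/<n>/smp_affinity for each NIC queue interrupt. A minimal sketch of
the same idea (assumes Linux and root; the IRQ numbers and masks below are
made-up examples, the real ones for your NIC queues come from
/proc/interrupts):

    #include <stdio.h>

    static int set_irq_affinity(int irq, unsigned int cpu_mask)
    {
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
        f = fopen(path, "w");
        if (f == NULL) {
            perror(path);
            return -1;
        }
        fprintf(f, "%x\n", cpu_mask);  /* hex bitmask: 0x1 = CPU0, 0x2 = CPU1, ... */
        return fclose(f) == 0 ? 0 : -1;
    }

    int main(void)
    {
        /* example only: spread two rx queue IRQs over CPU0 and CPU1 */
        set_irq_affinity(120, 0x1);
        set_irq_affinity(121, 0x2);
        return 0;
    }

The same thing can be done by hand with echo from the shell; the script just
automates it for all queues of an interface.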
You can also google for articles like 'linux router performance', for
example https://github.com/strizhechenko/netutils-linux (its rss-ladder
tool may also help) or
https://blog.packagecloud.io/eng/2016/06/22/monitoring-tuning-linux-networking-stack-receiving-data/
Network stack tuning is not simple. Good performance needs a NIC with a
good multi-queue chip and a good driver. As far as I know, Intel NIC
chipsets and drivers are really the best here.
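On the multithreading point in the discussion quoted below: I am not saying
ntpd works this way today (it does not, see Hal's note), but the usual Linux
pattern for spreading one UDP port across cores is one socket per thread
with SO_REUSEPORT, so the kernel distributes requests across the threads and
each thread can be pinned next to 'its' rx queue. A bare-bones sketch only
(untested; binding port 123 needs root, the thread count is arbitrary):

    #define _GNU_SOURCE
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define NTHREADS 4      /* arbitrary; in practice match the NIC rx queues */

    static void *worker(void *arg)
    {
        int one = 1;
        char buf[1024];
        struct sockaddr_in addr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        (void)arg;
        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(123);
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            perror("bind");
            close(fd);
            return NULL;
        }

        for (;;) {
            struct sockaddr_in peer;
            socklen_t len = sizeof(peer);
            ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                                 (struct sockaddr *)&peer, &len);
            if (n < 0)
                continue;
            /* build and send the NTP reply here */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        int i;

        for (i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);
        return 0;
    }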
--
Mike Yurlov
13.01.2020 11:54, Hal Murray writes:
Thanks.
and without 'limited', at ~5kpps I have 8-10% CPU regardless of whether
monitoring is enabled or disabled. About 1% at 1000pps.
Is that within reason or worth investigating? 1% times 5 should be 5% rather
than 8-10% but there may not be enough significant digits in any of the
numbers.
For those who want to process hundreds of thousands of requests per second
(like 'national standard' servers), you can use multithreading to multiply
the power of the server.
The current code isn't set up for threads. I think with a bit of work, we
could get multiple threads on the server side.
On an Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz
I can get 330K packets per second.
258K with AES CMAC.
I don't have NTS numbers yet.