Performance speedup is incredible!
On 25/01/2019 18:09, "[email protected] on behalf of Damjan Marion via
Lists.Fd.Io" <[email protected] on behalf of [email protected]>
wrote:
I am very close to the finish line with the buffer management rework patch,
and would like to ask people to take a look before it is merged.
https://gerrit.fd.io/r/16638
It significantly improves buffer alloc/free performance and introduces
NUMA awareness.
On my Skylake Platinum 8180 system, with the native AVF driver, the observed
performance improvement is:
- single core, 2 threads, IPv4 base forwarding test, CPU running at 2.5 GHz
(Turbo Boost off):
old code - dpdk buffer manager: 20.4 Mpps
old code - old native buffer manager: 19.4 Mpps
new code: 24.9 Mpps
With DPDK drivers, performance stays the same, as DPDK maintains its own
internal buffer cache.
So the major performance gain should be observed in native code such as
vhost-user, memif, AVF, and the host stack.
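For orientation, native drivers and the host stack reach the buffer pools
through the vlib buffer API (vlib_buffer_alloc / vlib_buffer_free), which is
where this rework lands. Below is a rough sketch of that pattern, assuming it
is compiled inside a VPP plugin; my_driver_refill_ring, my_driver_free_buffers
and the 256-entry batch size are purely illustrative, not part of the patch.

  #include <vlib/vlib.h>

  /* Hypothetical helper for a native driver: refill an RX ring with fresh
   * buffers taken from the vlib buffer pools (NUMA-aware after the rework). */
  static_always_inline u32
  my_driver_refill_ring (vlib_main_t * vm, u32 * ring, u32 n_wanted)
  {
    u32 buffer_indices[256];
    u32 n_alloc;

    if (n_wanted > ARRAY_LEN (buffer_indices))
      n_wanted = ARRAY_LEN (buffer_indices);

    /* Indices come back with metadata already initialised, so there is no
     * per-buffer setup on the alloc path. */
    n_alloc = vlib_buffer_alloc (vm, buffer_indices, n_wanted);

    for (u32 i = 0; i < n_alloc; i++)
      {
        vlib_buffer_t *b = vlib_get_buffer (vm, buffer_indices[i]);
        ring[i] = buffer_indices[i];
        /* A real driver would program b->data (or its DMA address) into the
         * NIC descriptor here. */
        (void) b;
      }
    return n_alloc;
  }

  /* On TX completion or drop, buffers go straight back to the pool. */
  static_always_inline void
  my_driver_free_buffers (vlib_main_t * vm, u32 * buffer_indices, u32 n)
  {
    vlib_buffer_free (vm, buffer_indices, n);
  }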
user-facing changes:

to change the number of buffers:

old startup.conf:
  dpdk { num-mbufs XXXX }

new startup.conf:
  buffers { buffers-per-numa XXXX }
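As a concrete illustration of the migration (the 128000 count is an arbitrary
example value, not a recommendation):

  # old startup.conf (pre-rework)
  dpdk {
    num-mbufs 128000
  }

  # new startup.conf (buffer management rework)
  buffers {
    buffers-per-numa 128000
  }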
Internal changes:
- free lists are deprecated
- buffer metadata is always initialised
- the first 64 bytes of metadata are initialised on free, so buffer alloc is
very fast (see the sketch after this list)
- DPDK mempools are not used anymore; we register custom mempool ops, and
DPDK takes buffers from VPP
- to support this operation, a plugin can request external header space - in
the case of DPDK it stores rte_mbuf + rte_mempool_objhdr
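To make the free-time initialisation point concrete, here is a minimal
standalone sketch of the technique; it is not VPP code, just the idea that
free rewrites the first cache line of metadata from a template, so alloc
reduces to popping indices off a stack.

  /* Illustrative sketch only -- mimics "reset first cache line on free,
   * alloc is a plain index pop". */
  #include <stdint.h>
  #include <string.h>
  #include <stdio.h>

  #define CACHE_LINE 64
  #define N_BUFFERS  8

  typedef struct {
    uint8_t metadata[2 * CACHE_LINE]; /* stand-in for buffer metadata */
  } buffer_t;

  static buffer_t buffers[N_BUFFERS];
  static uint8_t metadata_template[CACHE_LINE]; /* pre-initialised template */
  static uint32_t free_stack[N_BUFFERS];
  static int n_free;

  /* Free: reset the first cache line from the template, then push index. */
  static void buffer_free (uint32_t *indices, int n)
  {
    for (int i = 0; i < n; i++)
      {
        memcpy (buffers[indices[i]].metadata, metadata_template, CACHE_LINE);
        free_stack[n_free++] = indices[i];
      }
  }

  /* Alloc: just pop indices -- metadata is already in a known-good state. */
  static int buffer_alloc (uint32_t *indices, int n)
  {
    int got = 0;
    while (got < n && n_free > 0)
      indices[got++] = free_stack[--n_free];
    return got;
  }

  int main (void)
  {
    uint32_t all[N_BUFFERS];
    for (uint32_t i = 0; i < N_BUFFERS; i++)
      all[i] = i;
    buffer_free (all, N_BUFFERS); /* seed the free stack */

    uint32_t bi[4];
    int n = buffer_alloc (bi, 4);
    printf ("allocated %d buffers, first index %u\n", n, bi[0]);
    buffer_free (bi, n);
    return 0;
  }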
I'm still running some tests, so minor changes are possible, but nothing
major is expected.
--
Damjan