Hi Damjan,

I have a few queries regarding this patch.


 - DPDK mempools are not used anymore; we register custom mempool ops, and DPDK 
takes buffers from VPP

Some targets use a hardware memory allocator, such as the OCTEONTx family and 
NXP's dpaa. Those hardware allocators are exposed as DPDK mempools. With this 
change I can see that rte_mempool_populate_iova() is no longer called. What is 
your suggestion for supporting such hardware?
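
For reference, here is a minimal sketch of the custom-mempool-ops mechanism as 
I understand it; the vpp_mp_* callback names are placeholders of mine, not 
taken from the patch:

  #include <rte_mempool.h>

  /* Placeholder callbacks -- in the patch these would pull buffers from
   * VPP's buffer pools instead of a DPDK-side backing store. */
  static int
  vpp_mp_alloc (struct rte_mempool *mp)
  {
    mp->pool_data = NULL; /* no DPDK-side backing store is needed */
    return 0;
  }

  static void
  vpp_mp_free (struct rte_mempool *mp)
  {
    /* nothing to free -- the buffers are owned by VPP */
  }

  static int
  vpp_mp_enqueue (struct rte_mempool *mp, void *const *obj_table,
                  unsigned int n)
  {
    /* body elided: return the n mbufs to the VPP buffer pool */
    return 0;
  }

  static int
  vpp_mp_dequeue (struct rte_mempool *mp, void **obj_table, unsigned int n)
  {
    /* body elided: allocate n buffers from VPP and store pointers to
     * their embedded mbufs in obj_table */
    return 0;
  }

  static unsigned int
  vpp_mp_get_count (const struct rte_mempool *mp)
  {
    /* body elided: report how many buffers VPP has available */
    return 0;
  }

  static struct rte_mempool_ops vpp_mempool_ops = {
    .name = "vpp",
    .alloc = vpp_mp_alloc,
    .free = vpp_mp_free,
    .enqueue = vpp_mp_enqueue,
    .dequeue = vpp_mp_dequeue,
    .get_count = vpp_mp_get_count,
  };

  MEMPOOL_REGISTER_OPS (vpp_mempool_ops);

A pool created with rte_mempool_create_empty() can then be pointed at these 
ops via rte_mempool_set_ops_byname(mp, "vpp", NULL), which I assume is why 
rte_mempool_populate_iova() never runs: the objects never come from DPDK.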

 - the first 64 bytes of metadata are initialised on free, so buffer alloc is 
very fast

Is it fair to say that if a mempool is created per worker core per sw_index 
(interface), the buffer template copy can be avoided even during free? (It 
could then be done only once, at init time.)
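
To make my assumption concrete, here is a sketch of the free path as I 
understand it (hypothetical names, not the patch's code):

  #include <stdint.h>
  #include <string.h>

  #define METADATA_TEMPLATE_SIZE 64 /* first cacheline of the metadata */

  typedef struct
  {
    uint8_t cacheline0[METADATA_TEMPLATE_SIZE]; /* template-initialised part */
    /* ... remaining metadata, headroom, packet data ... */
  } buffer_t;

  /* filled once, when the buffer pool is created */
  static uint8_t buffer_template[METADATA_TEMPLATE_SIZE];

  static inline void
  buffer_free_one (buffer_t *b)
  {
    /* reset the first cacheline now, while the buffer is still hot in
     * cache, so a later alloc only needs to hand out the buffer index */
    memcpy (b->cacheline0, buffer_template, METADATA_TEMPLATE_SIZE);
  }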

Thanks,
Nitin


________________________________
From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> on behalf of Damjan Marion via 
Lists.Fd.Io <dmarion=me....@lists.fd.io>
Sent: Friday, January 25, 2019 10:38 PM
To: vpp-dev
Cc: vpp-dev@lists.fd.io
Subject: [vpp-dev] RFC: buffer manager rework


I am very close to the finish line with the buffer management rework patch, 
and would like to ask people to take a look before it is merged.

https://gerrit.fd.io/r/16638

It significantly improves the performance of buffer alloc/free and introduces 
NUMA awareness.
On my Skylake Platinum 8180 system with the native AVF driver, the observed 
performance improvement is:

- single core, 2 threads, IPv4 base forwarding test, CPU running at 2.5 GHz 
(turbo boost off):

old code - DPDK buffer manager: 20.4 Mpps
old code - old native buffer manager: 19.4 Mpps
new code: 24.9 Mpps

With DPDK drivers, performance stays the same, as DPDK maintains its own 
internal buffer cache. So the major perf gain should be observed in native 
code: vhost-user, memif, AVF, the host stack.

User-facing changes:
to change the number of buffers:
  old startup.conf:
    dpdk { num-mbufs XXXX }
  new startup.conf:
    buffers { buffers-per-numa XXXX }

Internal changes:
 - free lists are deprecated
 - buffer metadata is always initialised
 - the first 64 bytes of metadata are initialised on free, so buffer alloc is 
very fast
 - DPDK mempools are not used anymore; we register custom mempool ops, and DPDK 
takes buffers from VPP
 - to support such operation, a plugin can request external header space; in 
the case of DPDK it stores rte_mbuf + rte_mempool_objhdr (see the layout 
sketch below)
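
Roughly, the per-buffer layout then looks like this (illustrative only; field 
order and alignment in the patch may differ):

  #include <rte_mbuf.h>
  #include <rte_mempool.h>

  /* external header space requested by the DPDK plugin */
  typedef struct
  {
    struct rte_mempool_objhdr objhdr; /* lets the DPDK mempool track the object */
    struct rte_mbuf mbuf;             /* mbuf co-located with the VPP buffer */
  } dpdk_ext_hdr_t;

  /* Per-buffer memory, front to back:
   *
   *   [ dpdk_ext_hdr_t ][ VPP buffer metadata ][ headroom | packet data ]
   *
   * so a VPP buffer and its rte_mbuf convert to each other with constant
   * pointer arithmetic. */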

I'm still running some tests, so minor changes are still possible, but nothing 
major is expected.

--
Damjan
