> On 3 Feb 2019, at 16:58, Nitin Saxena <nsax...@marvell.com> wrote:
>
> Hi Damjan,
>
> I have a few queries regarding this patch.
>
> - DPDK mempools are not used anymore, we register custom mempool ops, and
>   dpdk is taking buffers from VPP
>
> Some of the targets use hardware memory allocators, like the OCTEONTx family
> and NXP's dpaa. Those hardware allocators are exposed as dpdk mempools.
Which exact operation do they accelerate?

> Now with this change I can see rte_mempool_populate_iova() is not
> called anymore.

Yes, but the new code does pretty much the same thing: it populates both
elt_list and mem_list. The new code also puts the IOVA into mempool_objhdr.

> So what is your suggestion to support such hardware.

Before I can provide any suggestion I need to understand better what those
hardware buffer managers do and why they are better than the pure software
solution we have today.

> - first 64-bytes of metadata are initialised on free, so buffer alloc is
>   very fast
>
> Is it fair to say that if a mempool is created per worker core per sw_index
> (interface), then the buffer template copy can be avoided even during free?
> (It can be done only once at init time.)

The really expensive part of the buffer free operation is bringing the
cacheline into L1, and we need to do that to verify the reference count of
the packet. Once the data is in L1, simply copying the template does not cost
much: 1-2 clocks on x86; I am not sure about arm, but I still expect it to
result in 4 128-bit stores. That was the rationale for resetting the metadata
during buffer free.

So to answer your question, having a buffer pool per sw-interface will likely
improve performance a bit, but it will also cause sub-optimal use of buffer
memory. Such a solution will also have scaling problems, for example if you
have hundreds of virtual interfaces...

> Thanks,
> Nitin
>
> From: vpp-dev@lists.fd.io on behalf of Damjan Marion via Lists.Fd.Io
> <dmarion=me....@lists.fd.io>
> Sent: Friday, January 25, 2019 10:38 PM
> To: vpp-dev
> Cc: vpp-dev@lists.fd.io
> Subject: [vpp-dev] RFC: buffer manager rework
>
> I am very close to the finish line with the buffer management rework patch,
> and would like to ask people to take a look before it is merged.
>
> https://gerrit.fd.io/r/16638
>
> It significantly improves the performance of buffer alloc/free and
> introduces numa awareness. On my Skylake Platinum 8180 system, with the
> native AVF driver, the observed performance improvement is:
>
> - single core, 2 threads, ipv4 base forwarding test, CPU running at 2.5 GHz
>   (TB off):
>
>   old code - dpdk buffer manager:       20.4 Mpps
>   old code - old native buffer manager: 19.4 Mpps
>   new code:                             24.9 Mpps
>
> With DPDK drivers performance stays the same, as DPDK maintains its own
> internal buffer cache. So the major perf gain should be observed in native
> code like vhost-user, memif, AVF and the host stack.
>
> User-facing changes:
>
> To change the number of buffers:
>   old startup.conf: dpdk { num-mbufs XXXX }
>   new startup.conf: buffers { buffers-per-numa XXXX }
>
> Internal changes:
> - free lists are deprecated
> - buffer metadata is always initialised
> - first 64 bytes of metadata are initialised on free, so buffer alloc is
>   very fast
> - DPDK mempools are not used anymore; we register custom mempool ops, and
>   dpdk is taking buffers from VPP
> - to support such operation a plugin can request external header space - in
>   the case of DPDK it stores rte_mbuf + rte_mempool_objhdr
>
> I'm still running some tests, so minor changes are possible, but nothing
> major is expected.
>
> --
> Damjan
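To make the "register custom mempool ops, and dpdk is taking buffers from VPP"
item in the quoted list above more concrete, here is a minimal sketch of custom
DPDK mempool ops whose enqueue/dequeue callbacks hand mbufs to and from an
external buffer pool, so DPDK never manages the buffers itself. This is not the
actual VPP dpdk-plugin code; the ext_pool array, the "external-pool" ops name
and the helper logic are hypothetical placeholders, kept trivially small so the
example is self-contained.

/* sketch only -- not the actual VPP dpdk plugin code */
#include <errno.h>
#include <rte_mempool.h>

/* Hypothetical external buffer pool: a trivial static stack stands in for
   whatever really owns the buffers (in VPP, the vlib buffer pool, where each
   element carries rte_mbuf + rte_mempool_objhdr as external header space). */
#define EXT_POOL_SIZE 1024
static void *ext_pool[EXT_POOL_SIZE];
static unsigned int ext_pool_n;

static int
ext_ops_alloc (struct rte_mempool *mp)
{
  /* no DPDK-side backing store (ring/stack) is created */
  mp->pool_data = NULL;
  return 0;
}

static void
ext_ops_free (struct rte_mempool *mp)
{
  (void) mp;                    /* nothing was allocated in ext_ops_alloc */
}

static int
ext_ops_enqueue (struct rte_mempool *mp, void *const *obj_table,
                 unsigned int n)
{
  /* mbufs freed by DPDK go back to the external pool */
  (void) mp;
  if (ext_pool_n + n > EXT_POOL_SIZE)
    return -ENOBUFS;
  for (unsigned int i = 0; i < n; i++)
    ext_pool[ext_pool_n++] = obj_table[i];
  return 0;
}

static int
ext_ops_dequeue (struct rte_mempool *mp, void **obj_table, unsigned int n)
{
  /* mbufs requested by DPDK are taken from the external pool */
  (void) mp;
  if (ext_pool_n < n)
    return -ENOBUFS;
  for (unsigned int i = 0; i < n; i++)
    obj_table[i] = ext_pool[--ext_pool_n];
  return 0;
}

static unsigned int
ext_ops_get_count (const struct rte_mempool *mp)
{
  (void) mp;
  return ext_pool_n;
}

static const struct rte_mempool_ops ext_mempool_ops = {
  .name = "external-pool",
  .alloc = ext_ops_alloc,
  .free = ext_ops_free,
  .enqueue = ext_ops_enqueue,
  .dequeue = ext_ops_dequeue,
  .get_count = ext_ops_get_count,
};

/* constructor-time registration (RTE_MEMPOOL_REGISTER_OPS in newer DPDK) */
MEMPOOL_REGISTER_OPS (ext_mempool_ops);

A pool created with rte_mempool_create_empty() would then select these ops with
rte_mempool_set_ops_byname (mp, "external-pool", NULL) before being populated;
from then on every mbuf alloc/free on that pool goes through the callbacks
above instead of DPDK's own ring or stack backends, which is what allows
buffers owned by VPP (or by a hardware allocator) to be handed to DPDK drivers.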
--
Damjan
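As a rough illustration of the trade-off discussed earlier in the thread (the
cacheline fill for the reference-count check is the expensive part of free; the
64-byte template copy is almost free once the line is in L1), here is a minimal
sketch. The buffer_t layout, the tmpl template and buffer_free_one() are
hypothetical, not vlib_buffer_t or the real VPP free path; they only show where
the template reset sits relative to the reference-count check.

/* sketch only -- illustrative metadata reset on free, not vlib_buffer_t */
#include <stdint.h>
#include <string.h>

/* hypothetical buffer metadata; only the first 64-byte cacheline is shown */
typedef struct
{
  uint32_t flags;
  uint32_t ref_count;
  uint16_t current_data;
  uint16_t current_length;
  uint8_t  opaque[52];          /* pad the first cacheline to 64 bytes */
} __attribute__ ((aligned (64))) buffer_t;

static void
buffer_free_one (buffer_t *b, const buffer_t *tmpl)
{
  /* reading the reference count is what brings the cacheline into L1;
     this is the expensive part of the free path */
  if (b->ref_count > 1)
    {
      b->ref_count--;           /* buffer still referenced elsewhere */
      return;
    }

  /* once the line is in L1, resetting the metadata from a pre-built 64-byte
     template is nearly free (a couple of 32-byte stores on x86, roughly four
     128-bit stores on aarch64), and the next alloc needs no initialisation */
  memcpy (b, tmpl, 64);

  /* ... then return b to the free queue of its numa node ... */
}

The template itself would be built once per buffer pool at init time, which is
also why buffer alloc can stay a plain hand-out of already-initialised buffers.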