Thanks for your explanation, Damjan. Given that, it seems mbuf-fast-free cannot
safely be applied to VPP, even on SMP systems …

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Damjan Marion via 
lists.fd.io
Sent: Thursday, September 23, 2021 12:18 AM
To: Jieqiang Wang <jieqiang.w...@arm.com>
Cc: Benoit Ganne (bganne) <bga...@cisco.com>; vpp-dev <vpp-dev@lists.fd.io>; 
Lijian Zhang <lijian.zh...@arm.com>; Honnappa Nagarahalli 
<honnappa.nagaraha...@arm.com>; Govindarajan Mohandoss 
<govindarajan.mohand...@arm.com>; Ruifeng Wang <ruifeng.w...@arm.com>; Tianyu 
Li <tianyu...@arm.com>; Feifei Wang <feifei.wa...@arm.com>; nd <n...@arm.com>
Subject: Re: [vpp-dev] Enable DPDK tx offload flag mbuf-fast-free on VPP vector 
mode


—
Damjan




On 22.09.2021, at 11:50, Jieqiang Wang <jieqiang.w...@arm.com> wrote:

Hi Ben,

Thanks for your quick feedback. A few comments inline.

Best Regards,
Jieqiang Wang

-----Original Message-----
From: Benoit Ganne (bganne) <bga...@cisco.com>
Sent: Friday, September 17, 2021 3:34 PM
To: Jieqiang Wang <jieqiang.w...@arm.com>; vpp-dev <vpp-dev@lists.fd.io>
Cc: Lijian Zhang <lijian.zh...@arm.com>; Honnappa Nagarahalli
<honnappa.nagaraha...@arm.com>; Govindarajan Mohandoss
<govindarajan.mohand...@arm.com>; Ruifeng Wang <ruifeng.w...@arm.com>; Tianyu Li
<tianyu...@arm.com>; Feifei Wang <feifei.wa...@arm.com>; nd <n...@arm.com>
Subject: RE: Enable DPDK tx offload flag mbuf-fast-free on VPP vector mode

Hi Jieqiang,

This looks like an interesting optimization, but you need to check that the
'mbufs to be freed must all come from the same mempool' rule holds. This won't
be the case on NUMA systems (VPP creates 1 buffer pool per NUMA node).
This should be easy to check with e.g. 'vec_len (vm->buffer_main->buffer_pools)
== 1'.

Jieqiang: That's a really good point. As you said, the rule holds on SMP
systems, and we can check it by verifying that the number of buffer pools
equals 1. But I wonder whether this check is too strict: if the worker CPUs and
the NICs in use reside on the same NUMA node, the mbufs should all come from
the same mempool, so the requirement would still be met. What do you think?

Please note that VPP is not using DPDK mempools. We fake them by registering
our own mempool handlers.
There is a special trick for handling refcnt > 1: all packets whose VPP ref
count is > 1 are handed to the DPDK code as members of another fake mempool
which has its cache turned off.
In practice that means DPDK sees 2 fake mempools per NUMA node, and every
packet entering the DPDK code always has refcnt set to 1.



For the rest, I think we do not use DPDK mbuf refcounting at all as we maintain 
our own anyway, but someone more knowledgeable than me should confirm.

Jieqiang: This echoes the experiments (IPv4 multicast and L2 flood) I have
done: all mbufs in the two test cases are copied rather than reference-counted.
But as you said, this needs a double-check from VPP experts.

see above….



I'd be curious to see if we can measure a real performance difference in CSIT.

Jieqiang: Let me trigger some performance test cases in CSIT and come back to
you with the related performance figures.

—
Damjan