Unfortunately no. However, we did suffer from packet drops due to the queue 
being full, even when the system load was not heavy. In the end we discovered 
that some vectors in the queue contained only a few buffers, and when we 
increased the queue size, the drop rate went down.

We have a DPDK-ring-based mechanism which hands off GTPU packets from 
ip-gtpu-bypass to gtpu-input; two new nodes were created for this purpose: 
handoff-node and handoff-input. A very preliminary measurement shows these 
two nodes take less than 30 clocks in total when the handoff happens within a 
single worker thread. Next, we'll try to measure the overhead of handoff 
between workers. I expect we'll see a significant performance loss due to 
cache misses. On the plus side, the code is much easier to understand and 
maintain.
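To make the mechanism above concrete, here is a minimal sketch of a 
single-producer/single-consumer ring carrying buffer indices between two graph 
nodes. All names are hypothetical; this uses C11 atomics rather than the actual 
VPP/DPDK ring API, and the ring depth is an illustrative choice:

```c
#include <stdatomic.h>
#include <stdint.h>

#define RING_SIZE 1024              /* power of two; much deeper than the
                                       default 32-entry frame queue */
#define RING_MASK (RING_SIZE - 1)

typedef struct {
  _Atomic uint32_t head;            /* written only by the producer */
  _Atomic uint32_t tail;            /* written only by the consumer */
  uint32_t buffers[RING_SIZE];      /* one buffer index per slot */
} handoff_ring_t;

/* Producer side (the node doing the handoff): returns 0 on success,
   -1 if the ring is full, in which case the caller counts a drop. */
static int
ring_enqueue (handoff_ring_t *r, uint32_t buffer_index)
{
  uint32_t head = atomic_load_explicit (&r->head, memory_order_relaxed);
  uint32_t tail = atomic_load_explicit (&r->tail, memory_order_acquire);
  if (head - tail == RING_SIZE)
    return -1;
  r->buffers[head & RING_MASK] = buffer_index;
  atomic_store_explicit (&r->head, head + 1, memory_order_release);
  return 0;
}

/* Consumer side (the polling handoff-input node): dequeues up to 'max'
   buffer indices and returns how many were taken. */
static uint32_t
ring_dequeue (handoff_ring_t *r, uint32_t *to, uint32_t max)
{
  uint32_t tail = atomic_load_explicit (&r->tail, memory_order_relaxed);
  uint32_t head = atomic_load_explicit (&r->head, memory_order_acquire);
  uint32_t n = head - tail;
  if (n > max)
    n = max;
  for (uint32_t i = 0; i < n; i++)
    to[i] = r->buffers[(tail + i) & RING_MASK];
  atomic_store_explicit (&r->tail, tail + n, memory_order_release);
  return n;
}
```

With one producer and one consumer, the acquire/release pairs are enough and no 
compare-and-swap is needed on the fast path, which is consistent with the low 
per-node clock counts measured above.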


See below for ‘show run’ output:

Name                             State    Calls    Vectors   Suspends  Clocks  Vectors/Call
VirtualFunctionEthernet81/10/7   active   41054    10509824  0         1.37e1  256.00
VirtualFunctionEthernet81/10/7   active   41054    10509824  0         7.15e1  256.00
dpdk-input                       polling  41054    0         0         1.83e2  0.00
ip4-input                        active   41054    10509824  0         3.61e1  256.00
ip4-lookup                       active   41054    10509824  0         2.47e1  256.00
ip4-ppf-gtpu-bypass              active   41054    10509824  0         2.93e1  256.00
ip4-rewrite                      active   41054    10509824  0         2.50e1  256.00
pg-input                         polling  41054    10509824  0         7.65e1  256.00
ppf-gtpu4-encap                  active   41054    10509824  0         2.91e1  256.00
ppf-gtpu4-input                  active   41054    10509824  0         2.79e1  256.00
ppf-handoff-input                polling  41054    10509824  0         1.39e1  256.00
ppf-handoff                      active   41054    10509824  0         1.17e1  256.00
ppf-pdcp-encap                   active   41054    10509824  0         2.69e1  256.00
ppf-pdcp-encrypt                 active   41054    10509824  0         1.69e1  256.00
ppf-sb-path-lb                   active   41054    10509824  0         1.19e1  256.00
ppf-sdap-encap                   active   41054    10509824  0         2.64e1  256.00



From: Damjan Marion <dmar...@me.com>
Sent: Tuesday, December 18, 2018 5:18 PM
To: Kingwel Xie <kingwel....@ericsson.com>
Cc: vpp-dev@lists.fd.io
Subject: Re: vPP handoff discussion


Possibly, do you have any numbers to support your statement?

--
Damjan


On 18 Dec 2018, at 10:14, Kingwel Xie <kingwel....@ericsson.com> wrote:

Hi Damjan,

My fault; I should have made it clear. What I want to say is that I wonder 
whether the existing handoff mechanism needs some improvement. Using a ring 
seems simpler and better from a performance perspective. Moreover, it could 
help with the packet drop issue, thanks to a bigger and more flexible ring size.

Sorry, I changed the subject line; it no longer strictly follows the original 
one.

Regards,
Kingwel

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Damjan Marion 
via Lists.Fd.Io
Sent: Tuesday, December 18, 2018 3:12 PM
To: Kingwel Xie <kingwel....@ericsson.com>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP Review: https://gerrit.fd.io/r/#/c/15084/:


Dear Kingwel,

I don't think VPP handoff is the right solution for this problem. It can be 
solved in a much simpler way.
We can have a simple ring per worker thread where new packets pending 
encryption/decryption are enqueued.
Then we can have an input node which runs on all threads and polls those 
rings. When there is a new packet on the ring,
that input node simply uses atomic compare-and-swap to declare that it is 
responsible for enc/dec of that specific packet.
When encryption is completed, the owning thread enqueues packets to the next 
node in preserved packet order...
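The claim step in the scheme above could be sketched roughly as follows. All 
names are hypothetical, and C11 atomics stand in for whatever primitive would 
actually be used:

```c
#include <stdatomic.h>
#include <stdint.h>

#define UNCLAIMED 0xFFFFFFFFu

typedef struct {
  _Atomic uint32_t owner;   /* UNCLAIMED, or the claiming thread index */
  uint32_t buffer_index;    /* packet pending encryption/decryption */
} crypto_slot_t;

/* Called from the polling input node on every worker thread: returns 1
   if this thread won the slot and must do the enc/dec work, 0 if some
   other thread already claimed it. Exactly one thread can win. */
static int
try_claim (crypto_slot_t *slot, uint32_t thread_index)
{
  uint32_t expected = UNCLAIMED;
  return atomic_compare_exchange_strong (&slot->owner, &expected,
                                         thread_index);
}
```

The compare-and-swap guarantees each pending packet is processed by exactly one 
thread even though every worker polls the same rings; the original enqueuing 
thread can then restore packet order before handing buffers to the next node.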

Does this make sense?

--
Damjan



On 18 Dec 2018, at 03:22, Kingwel Xie <kingwel....@ericsson.com> wrote:

Hi Damjan,

Yes, agree with you.

Here I have a thought about the handoff mechanism in VPP. If you look into the 
DPDK crypto scheduler, you will find that it depends heavily on DPDK rings, 
both for buffer delivery among CPU cores and for packet reordering. So 
something came to my mind: why can't we use a ring for handoff?

First, as you know, the existing handoff is somewhat limited: the queue size 
is 32 by default, which is very small, and each queue item is a vector with up 
to 256 buffer indices, but each vector might contain only a few buffers when 
the system load is not high. As far as I can see this is not efficient, and 
the system might drop packets because the queue fills up.
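A back-of-envelope calculation illustrates the point; the per-frame packet 
counts below are illustrative, not measurements:

```c
/* Capacity of the handoff queue is counted in frames, not packets:
   32 frames of up to 256 buffer indices each. If frames arrive nearly
   empty, the usable depth collapses. */
static unsigned
queue_depth_pkts (unsigned frames, unsigned pkts_per_frame)
{
  return frames * pkts_per_frame;
}

/* 32 full frames            -> 8192 packets of headroom
   32 sparse frames (4 each) -> only 128 packets before drops */
```

This matches the observation above: at low load the vectors are sparse, so the 
32-entry queue fills after relatively few in-flight packets even though its 
nominal capacity is much larger.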

Second, I think the technique used in vlib_get_frame_queue_elt might be slower 
or less efficient than the compare-and-swap used in a DPDK ring.

Moreover, this two-dimensional data structure also adds complexity to the 
code. E.g., handoff-dispatch needs to consolidate buffers into a vector of 
size 128.

In general, I believe a ring-like mechanism would probably make handoff 
easier. I understand that the ring requires a compare-and-swap instruction, 
which definitely introduces a performance penalty, but on the other hand 
handoff itself always introduces massive data cache misses, which cost even 
more than compare-and-swap. Still, handoff is worthwhile in some cases even 
though there is a penalty.

I'd appreciate it if you could share your opinion.

Regards,
Kingwel





From: Damjan Marion <dmar...@me.com>
Sent: Tuesday, December 18, 2018 1:03 AM
To: Kingwel Xie <kingwel....@ericsson.com>
Cc: Gonsalves, Avinash (Nokia - IN/Bangalore) <avinash.gonsal...@nokia.com>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP Review: https://gerrit.fd.io/r/#/c/15084/:


Hi Kingwel,

I agree this is a useful feature; that's why I believe it should be 
implemented as native code instead of relying on an external implementation, 
which from our perspective is sub-optimal
due to the DPDK dependency, time spent on buffer metadata conversion, etc.

--
Damjan




On 17 Dec 2018, at 15:19, Kingwel Xie <kingwel....@ericsson.com> wrote:

Hi Avinash,

I happened to look at the patch recently. To my understanding it is valuable, 
cool stuff, as it allows offloading crypto to other CPU cores, so more 
throughput can be achieved. A question: you patched the DPDK ring to be 
multi-producer and multi-consumer; why not multi-producer, single-consumer?

Hi Damjan,

I guess the native IPsec MB plugin does not support offloading? Or maybe we 
can do a handoff, but in any case we cannot hand off one IPsec session to 
multiple cores. Am I right?

Regards,
Kingwel

-------- Original Message --------
Subject: Re: [vpp-dev] VPP Review: https://gerrit.fd.io/r/#/c/15084/:
From: "Damjan Marion via Lists.Fd.Io" <dmarion=me....@lists.fd.io>
Sent: December 17, 2018, 4:45 PM
Cc: "Gonsalves, Avinash (Nokia - IN/Bangalore)" <avinash.gonsal...@nokia.com>

Dear Avinash,

First, please use the public mailing list for such requests instead of 
unicasting people.

Regarding your patch, I don't feel comfortable code-reviewing it, as I'm not 
familiar with the DPDK crypto scheduler.

Personally, I believe such things should be implemented as native VPP code 
instead. We are already in the process of moving from
DPDK AES-NI to native code (still dependent on the IPsec MB library), so this 
stuff will not be very usable in this form anyway.

But this is just my opinion; I'll leave it to others...

--
Damjan




On 13 Dec 2018, at 05:52, Gonsalves, Avinash (Nokia - IN/Bangalore) 
<avinash.gonsal...@nokia.com> wrote:

Hi Dave, Damjan,

This was verified earlier, but didn’t get integrated. Could you please have a 
look?

Thanks,
Avinash

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11635): https://lists.fd.io/g/vpp-dev/message/11635
Mute This Topic: https://lists.fd.io/mt/28779969/675642
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[dmar...@me.com]
-=-=-=-=-=-=-=-=-=-=-=-


