Florin,

So the TCP stack does not connect to VPP using memif.
I’ll check the shared memory you mentioned.

For our own transport stack we’re using memif, but that has
nothing to do with TCP.

Between iperf3 and VPP there must be copies anyway.
There must be some batching, and careful timing, going on
while doing these copies though.

Is there any documentation on svm_fifo usage?

Thanks
Luca

On 7 May 2018, at 20:00, Florin Coras <fcoras.li...@gmail.com> wrote:

Hi Luca,

I guess, as you did, that it’s vectorization. VPP is really good at pushing 
packets whereas Linux is good at using all hw optimizations.

The stack uses its own shared memory mechanisms (check svm_fifo_t), but given 
that you did the testing with iperf3, I suspect the edge is not there. That is, 
I guess iperf3 is not abusing syscalls with lots of small writes. Moreover, the 
fifos are not zero-copy: apps do have to write to the fifo and vpp has to 
packetize that data.
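
Roughly, the data path looks like the toy sketch below. To be clear, this is
not the actual svm_fifo code, just the idea: the app copies into a
shared-memory ring (copy #1) and the stack copies out of it into packet
buffers when it packetizes (copy #2).

/* Toy single-producer/single-consumer byte fifo, only to show the two
 * copies involved: app -> fifo on enqueue, fifo -> packet on dequeue.
 * This is NOT svm_fifo itself, just the idea behind it. */
#include <stdint.h>

typedef struct
{
  volatile uint32_t head;	/* advanced by the consumer (vpp) */
  volatile uint32_t tail;	/* advanced by the producer (app) */
  uint32_t size;		/* power of two, in bytes */
  uint8_t data[];		/* lives in shared memory */
} toy_fifo_t;

/* App side: copy #1, user buffer into the shared fifo. */
static uint32_t
toy_fifo_enqueue (toy_fifo_t * f, const uint8_t * buf, uint32_t len)
{
  uint32_t free_bytes = f->size - (f->tail - f->head);
  uint32_t n = len < free_bytes ? len : free_bytes;
  for (uint32_t i = 0; i < n; i++)
    f->data[(f->tail + i) & (f->size - 1)] = buf[i];
  f->tail += n;
  return n;
}

/* VPP side: copy #2, fifo into a packet buffer before tcp output. */
static uint32_t
toy_fifo_dequeue (toy_fifo_t * f, uint8_t * pkt, uint32_t max_bytes)
{
  uint32_t used = f->tail - f->head;
  uint32_t n = max_bytes < used ? max_bytes : used;
  for (uint32_t i = 0; i < n; i++)
    pkt[i] = f->data[(f->head + i) & (f->size - 1)];
  f->head += n;
  return n;
}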

Florin

On May 7, 2018, at 10:29 AM, Luca Muscariello (lumuscar) <lumus...@cisco.com> wrote:

Hi Florin

Thanks for the info.

So, how do you explain that the VPP TCP stack beats the Linux
implementation by doubling the goodput?
Does it come from vectorization?
Any special memif optimization underneath?

Luca

On 7 May 2018, at 18:17, Florin Coras <fcoras.li...@gmail.com> wrote:

Hi Luca,

We don’t yet support TSO because it requires support within all of vpp (think 
tunnels). Still, it’s on our list.

As for crypto offload, we do have support for IPsec offload with QAT cards, and 
we’re now working with Ping and Ray from Intel on accelerating the TLS OpenSSL 
engine with QAT cards as well.
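
The rough idea on the application side is the sketch below: once a hardware
engine is registered as the default, the existing OpenSSL RSA/EC/TLS calls are
offloaded transparently. Take it as an illustration only; the engine id "qat"
and how the engine gets loaded come from the separately installed QAT engine
package, not from vpp.

#include <openssl/engine.h>

/* Sketch: route OpenSSL crypto through a hardware engine so that the
 * existing RSA/EC/TLS code paths get offloaded.  The "qat" id is an
 * assumption here and depends on the engine package installed. */
static int
use_qat_engine (void)
{
  ENGINE *e;

  ENGINE_load_builtin_engines ();
  e = ENGINE_by_id ("qat");
  if (!e)
    return -1;			/* engine not available */
  if (!ENGINE_init (e))
    {
      ENGINE_free (e);
      return -1;
    }
  /* Make it the default for all methods so TLS picks it up. */
  ENGINE_set_default (e, ENGINE_METHOD_ALL);
  ENGINE_free (e);		/* drop the lookup (structural) reference */
  return 0;
}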

Regards,
Florin

On May 7, 2018, at 7:53 AM, Luca Muscariello <lumuscar+f...@cisco.com> wrote:

Hi,

A few questions about the TCP stack and HW offloading.
Below is the experiment under test.

  +--------+-----+                           +-----+--------+
  |        |     |                 DPDK-10GE |     |        |
  | Iperf3 | TCP |    +------------+         | TCP | Iperf3 |
  |        | VPP +----+Nexus Switch+---------+ VPP |        |
  | LXC    |     |    +------------+         |     |    LXC |
  +--------+-----+  DPDK-10GE                +-----+--------+


Using the Linux kernel I get an iperf3 goodput of 9.5 Gbps with TSO and 4.5 Gbps 
without it.
Using the VPP TCP stack I get 9.2 Gbps, i.e. roughly the same max goodput as 
Linux with TSO.

Is there any TSO implementation already in VPP one can take advantage of?

Side question. Is there any crypto offloading service available in VPP?
Essentially for the computation of RSA-1024/2048 and ECDSA-192/256 signatures.
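
For context, the hot operation we would like to offload is plain signing, along
the lines of the sketch below (standard OpenSSL EVP calls, nothing vpp-specific;
'key' is assumed to be an already loaded EC or RSA private key).

#include <openssl/evp.h>

/* Sketch of one signature over a buffer.  On entry *sig_len must hold
 * the capacity of 'sig'; on success it holds the signature length. */
static int
sign_once (EVP_PKEY * key, const unsigned char *msg, size_t msg_len,
	   unsigned char *sig, size_t *sig_len)
{
  EVP_MD_CTX *ctx = EVP_MD_CTX_new ();
  int ok = ctx != NULL
    && EVP_DigestSignInit (ctx, NULL, EVP_sha256 (), NULL, key) == 1
    && EVP_DigestSignUpdate (ctx, msg, msg_len) == 1
    && EVP_DigestSignFinal (ctx, sig, sig_len) == 1;
  EVP_MD_CTX_free (ctx);
  return ok ? 0 : -1;
}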

Thanks
Luca



