Re: [vpp-dev] route lookup api

2020-02-19 Thread Christian Hopps

> On Feb 19, 2020, at 2:02 AM, Neale Ranns via Lists.Fd.Io 
>  wrote:
> 
>
> Hi Chris,
>
> Adding an API to dump a single route would be a welcome addition to the API.

Ok, I'll do that then.

For tables of things (at least in ip.api) I'm noticing a pattern of

  xADD, xDEL, xDUMP

I think the API would benefit from adding LOOKUP to that pattern:

  xADD, xDEL, xDUMP, xLOOKUP

The code should lend itself to factoring, since handling ADD/DEL generally 
requires doing a lookup to find the entry anyway.
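To make the factoring argument concrete: without a LOOKUP message, a client consuming a full dump has to re-implement the selection logic itself. A rough Python sketch of that client-side longest-prefix match (hypothetical names and table shape, not the real API types):

```python
import ipaddress

def lpm_lookup(fib, addr):
    """Longest-prefix match over a dumped table -- the selection logic a
    client ends up re-implementing when no LOOKUP call exists (simplified;
    the real FIB also considers table ids, entry sources, etc.)."""
    dst = ipaddress.ip_address(addr)
    best = None
    for prefix, next_hop in fib:
        net = ipaddress.ip_network(prefix)
        # keep the most specific covering prefix seen so far
        if dst in net and (best is None or net.prefixlen > best[1].prefixlen):
            best = (next_hop, net)
    return best

# hypothetical dumped table
fib = [("10.0.0.0/8", "via A"), ("10.1.0.0/16", "via B"), ("0.0.0.0/0", "via C")]
print(lpm_lookup(fib, "10.1.2.3")[0])  # via B: /16 beats /8 and /0
```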

Thanks,
Chris.

>
> /neale
>
> From:  on behalf of Paul Vinciguerra 
> 
> Date: Wednesday 19 February 2020 at 04:21
> To: Christian Hopps 
> Cc: vpp-dev 
> Subject: Re: [vpp-dev] route lookup api
>
> Those don't seem to be exposed via the api dump/details.
>
> The pattern that is commonly used in the test framework is to call 
> vapi.cli_inline(cmd="show ip fib 1.2.3.4/24")
>
> On Tue, Feb 18, 2020 at 5:53 PM Christian Hopps  wrote:
>> In the CLI there's an option to lookup route for a given IP. Is there a 
>> similar interface in the binary API?
>> 
>> The code I'm looking at now is doing an entire fib dump to look for this 
>> route, which seems problematic: all the logic VPP might use to select an 
>> entry has to be replicated in the client receiving the dump. So I'm hoping 
>> there's a better way. :)
>> 
>> Thanks,
>> Chris.
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15462): https://lists.fd.io/g/vpp-dev/message/15462
Mute This Topic: https://lists.fd.io/mt/71382541/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Coverity run FAILED as of 2020-02-19 14:00:23 UTC

2020-02-19 Thread Noreply Jenkins
Coverity run failed today.

Current number of outstanding issues: 1
Newly detected: 0
Eliminated: 1
More details can be found at
https://scan.coverity.com/projects/fd-io-vpp/view_defects
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#15463): https://lists.fd.io/g/vpp-dev/message/15463
Mute This Topic: https://lists.fd.io/mt/71396227/21656
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] FDIO Maintenance - 2020-02-20 1900 UTC to 2400 UTC

2020-02-19 Thread Vanessa Valderrama
*What:* Standard updates and upgrades

  * Jenkins
      o OS and security updates
      o Upgrade
      o Plugin updates
  * Nexus
      o OS updates
  * Jira
      o OS updates
  * Gerrit
      o OS updates
  * Sonar
      o OS updates
  * OpenGrok
      o OS updates

*When:* 2020-02-20 1900 UTC to 2400 UTC

*Impact:*

Maintenance will require a reboot of each FD.io system. Jenkins will be
placed in shutdown mode at 1800 UTC. Please let us know if specific jobs
cannot be aborted.
The following systems will be unavailable during the maintenance window:

  * Jenkins sandbox
  * Jenkins production
  * Nexus
  * Jira
  * Gerrit
  * Sonar
  * OpenGrok


-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#15464): https://lists.fd.io/g/vpp-dev/message/15464
Mute This Topic: https://lists.fd.io/mt/71215742/21656
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] vpp_echo UDP performance

2020-02-19 Thread dchons via Lists.Fd.Io
Hello,

I've been trying to do some basic performance testing on 20.01 using the 
vpp_echo application, and while I'm getting the expected performance with TCP, 
I'm not quite able to achieve what I would expect with UDP. The NICs are 10G 
X520, and on TCP I get around 9.5 Gbps, but with UDP I get about 6.5 Gbps with 
about 30% packet loss.
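Sanity-checking those numbers: ~30% loss on ~9.3 Gbps offered leaves ~6.5 Gbps delivered, i.e. the sender is pushing close to line rate while the receiver drops the excess (illustrative arithmetic only):

```python
def delivered_gbps(offered_gbps, loss_fraction):
    """Goodput remaining after receiver-side drops."""
    return offered_gbps * (1.0 - loss_fraction)

# ~9.3 Gbps offered with ~30% loss leaves ~6.5 Gbps delivered
print(round(delivered_gbps(9.3, 0.30), 2))  # 6.51
```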

The commands I use are:
*Server*: ./vpp_echo socket-name /tmp/vpp-api.sock uri udp://10.0.0.71/ 
fifo-size 100 uni RX=50Gb TX=0 stats 1 sclose=Y rx-buf 1400 tx-buf 0 
mq-size 10
*Client*: ./vpp_echo socket-name /tmp/vpp-api.sock client uri 
udp://10.0.0.71/ fifo-size 100 uni TX=50Gb RX=0 stats 1 sclose=Y tx-buf 
1400 rx-buf 0

(For TCP tests the commands are pretty much the same, except for the URI which 
is tcp://...)

I have a couple of hints but not sure how to make the necessary tweaks to 
improve performance. On the receiver side, *vpp# sh hardware-interfaces* shows:
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
                   macsec-strip vlan-filter vlan-extend jumbo-frame scatter
                   security keep-crc
rx offload active: ipv4-cksum jumbo-frame scatter

I'm thinking that udp-cksum not being active is an issue. Is this something 
that I need to explicitly enable somehow? I do have the following in 
startup.conf:
dpdk {
  dev :05:00.0 {
    num-rx-desc 1024
    num-tx-desc 1024
    tso on
  }
  enable-tcp-udp-checksum
}

My other clue is this (again on the receiver side):
vpp# sh interface
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
TenGigabitEthernet5/0/0            1      up          9000/0/0/0     rx packets        25107326
                                                                    rx bytes       36136837440
                                                                    tx packets               1
                                                                    tx bytes                60
                                                                    drops                   44
                                                                    ip4               25107281
                                                                    rx-miss           11599259

Any tips on what might be causing the rx-miss, or things I should tune to 
improve this for UDP?

Thank you!
Dom
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#15465): https://lists.fd.io/g/vpp-dev/message/15465
Mute This Topic: https://lists.fd.io/mt/71401293/21656
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vpp_echo UDP performance

2020-02-19 Thread Florin Coras
Hi Dom, 

UDP has no flow/congestion control, so there is nothing to push back on the 
sender when it overdrives the receiver. Increasing the number of rx descriptors 
probably helps a bit, but unless the rx nic gets faster I don't know of 
anything else that could avoid the drops.
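To sketch why deeper rx rings reduce drops under bursty arrivals (a toy queueing model, not VPP code; all numbers are illustrative):

```python
def ring_drops(ring_size, bursts, drained_per_tick):
    """Toy rx-ring model: each tick a burst arrives, whatever fits in the
    ring is accepted, then a polling worker drains a fixed amount."""
    q = drops = 0
    for burst in bursts:
        accepted = min(burst, ring_size - q)   # ring overflow -> rx-miss
        drops += burst - accepted
        q = max(0, q + accepted - drained_per_tick)
    return drops

bursts = [3000, 0] * 100   # bursty arrivals averaging 1500 pkts/tick
print(ring_drops(1024, bursts, 1500))   # small ring drops the burst tails
print(ring_drops(4096, bursts, 1500))   # deeper ring absorbs them: 0
```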

I’m saying that because one connection should be able to do more than 10Gbps. 
But to be sure, does “sh session verbose” indicate that your rx fifo is full?

Regards,
Florin

> On Feb 19, 2020, at 9:30 AM, dchons via Lists.Fd.Io 
>  wrote:
> 
> Hello,
> 
> I've been trying to do some basic performance testing on 20.01 using the 
> vpp_echo application, and while I'm getting the expected performance with 
> TCP, I'm not quite able to achieve what I would expect with UDP. The NICs are 
> 10G X520, and on TCP I get around 9.5 Gbps, but with UDP I get about 6.5 Gbps 
> with about 30% packet loss.
> 
> The commands I use are:
> Server: ./vpp_echo socket-name /tmp/vpp-api.sock uri udp://10.0.0.71/ 
> fifo-size 100 uni RX=50Gb TX=0 stats 1 sclose=Y rx-buf 1400 tx-buf 0 
> mq-size 10
> Client: ./vpp_echo socket-name /tmp/vpp-api.sock client uri 
> udp://10.0.0.71/ fifo-size 100 uni TX=50Gb RX=0 stats 1 sclose=Y 
> tx-buf 1400 rx-buf 0
> 
> (For TCP tests the commands are pretty much the same, except for the URI 
> which is tcp://...)
> 
> I have a couple of hints but not sure how to make the necessary tweaks to 
> improve performance. On the receiver side, vpp# sh hardware-interfaces shows:
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
>                    macsec-strip vlan-filter vlan-extend jumbo-frame scatter
>                    security keep-crc
> rx offload active: ipv4-cksum jumbo-frame scatter
> 
> I'm thinking that udp-cksum not being active is an issue. Is this something 
> that I need to explicitly enable somehow? I do have the following in 
> startup.conf:
> dpdk {
>   dev :05:00.0{
> num-rx-desc 1024
> num-tx-desc 1024
> tso on
>   }
>   enable-tcp-udp-checksum
> }
> 
> My other clue is this (again on the receiver side):
> vpp# sh interface
>               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
> TenGigabitEthernet5/0/0            1      up          9000/0/0/0     rx packets        25107326
>                                                                     rx bytes       36136837440
>                                                                     tx packets               1
>                                                                     tx bytes                60
>                                                                     drops                   44
>                                                                     ip4               25107281
>                                                                     rx-miss           11599259
> 
> Any tips on what might be causing the rx-miss, or things I should tune to 
> improve this for UDP?
> 
> Thank you!
> Dom 
> 

-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#15466): https://lists.fd.io/g/vpp-dev/message/15466
Mute This Topic: https://lists.fd.io/mt/71401293/21656
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vpp_echo UDP performance

2020-02-19 Thread dchons via Lists.Fd.Io
Hi Florin,

Thanks for the response. I'm not so concerned about the packet drops (as you 
point out, they are to be expected); however, increasing the number of rx 
descriptors did help a lot, so thank you very much for that!

I'm still at around 6.5 Gbps, "sh session verbose" shows the following:
*Client (TX) side:*
Connection                                        State          Rx-f Tx-f
[#1][U] 10.0.0.70:11202->10.0.0.71:           -              0 85
Thread 1: active sessions 1

*Server (RX) side:*
vpp# sh session verbose
Connection                                        State Rx-f Tx-f
[#0][U] 10.0.0.71:->0.0.0.0:0                 - 0 0

Any thoughts on udp-cksum not being enabled? I'm debating whether it's worth 
trying to debug why it's not in the active rx offloads even though it shows as 
available (and it is in the active tx offloads).

Thanks,
Dom
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#15467): https://lists.fd.io/g/vpp-dev/message/15467
Mute This Topic: https://lists.fd.io/mt/71401293/21656
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vpp_echo UDP performance

2020-02-19 Thread Florin Coras
Hi Dom, 

Inline.

> On Feb 19, 2020, at 11:54 AM, dchons via Lists.Fd.Io 
>  wrote:
> 
> Hi Florin,
> 
> Thanks for the response. I'm not so concerned about the packet drops (as you 
> point out it is to be expected), however increasing the number of rx 
> descriptors did help a lot, so thank you very much for that!

FC: Great!

> 
> I'm still at around 6.5 Gbps, "sh session verbose" shows the following:
> Client (TX) side:
> Connection                                        State          Rx-f Tx-f
> [#1][U] 10.0.0.70:11202->10.0.0.71:           -              0 85
> Thread 1: active sessions 1
> 
> Server (RX) side:
> vpp# sh session verbose
> Connection                                        State Rx-f Tx-f
> [#0][U] 10.0.0.71:->0.0.0.0:0                 - 0 0

FC: So the app reads the data as fast as it’s enqueued. That’s good because it 
limits the problem to how fast vpp can consume udp packets. 
> 
> Any thoughts on udp-cksum not being enabled? I'm debating whether it's worth 
> trying to debug why it's not in the active tx offloads even though it shows 
> as available (and it is in the active tx offloads).

FC: Ow, I missed that. What interfaces are you using? 

Regards,
Florin

> 
> Thanks,
> Dom
> 

-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#15468): https://lists.fd.io/g/vpp-dev/message/15468
Mute This Topic: https://lists.fd.io/mt/71401293/21656
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vpp_echo UDP performance

2020-02-19 Thread dchons via Lists.Fd.Io
Hi Florin,

Same NIC on both machines:

root# dpdk-devbind --status
Network devices using DPDK-compatible driver

:05:00.0 'Ethernet 10G 2P X520 Adapter 154d' drv=uio_pci_generic 
unused=ixgbe
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#15469): https://lists.fd.io/g/vpp-dev/message/15469
Mute This Topic: https://lists.fd.io/mt/71401293/21656
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] vpp_echo UDP performance

2020-02-19 Thread Florin Coras
Hi Dom, 

Now that you mention it, it’s the same for my nics. Nonetheless, the packets 
that reach ip4-local are marked as having a valid l4 checksum. Check in 
ip4_local_check_l4_csum_x2 and ip4_local_check_l4_csum if 
ip4_local_l4_csum_validate is called or not. If not, there’s no extra overhead 
in processing the udp packets. 
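For reference, the software fallback boils down to the standard Internet checksum. A minimal Python rendering of that check (RFC 1071 folding; the test vector is the well-known IPv4 header example, not VPP data):

```python
def inet_checksum(data: bytes) -> int:
    """RFC 1071 one's-complement checksum -- the per-packet work the
    software path does when the NIC has not pre-validated the l4 checksum."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(int.from_bytes(data[i:i + 2], "big")
                for i in range(0, len(data), 2))
    while total >> 16:                  # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# classic IPv4 header vector: checksum over the header with the checksum
# field zeroed must come out to 0xb861
hdr = bytes.fromhex("450000730000400040110000c0a80001c0a800c7")
print(hex(inet_checksum(hdr)))  # 0xb861
```

Verifying a received header is the same computation over the header with the checksum field included, which must fold to zero.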

Regards,
Florin

> On Feb 19, 2020, at 11:54 AM, dchons via Lists.Fd.Io 
>  wrote:
> 
> [Edited Message Follows]
> 
> ** Edit **: Corrected typo, udp-cksum not active in rx-offloads, but is 
> active in tx-offloads
> 
> Hi Florin,
> 
> Thanks for the response. I'm not so concerned about the packet drops (as you 
> point out it is to be expected), however increasing the number of rx 
> descriptors did help a lot, so thank you very much for that!
> 
> I'm still at around 6.5 Gbps, "sh session verbose" shows the following:
> Client (TX) side:
> Connection                                        State          Rx-f Tx-f
> [#1][U] 10.0.0.70:11202->10.0.0.71:           -              0 85
> Thread 1: active sessions 1
> 
> Server (RX) side:
> vpp# sh session verbose
> Connection                                        State Rx-f Tx-f
> [#0][U] 10.0.0.71:->0.0.0.0:0                 - 0 0
> 
> Any thoughts on udp-cksum not being enabled? I'm debating whether it's worth 
> trying to debug why it's not in the active rx offloads even though it shows 
> as available (and it is in the active tx offloads).
> 
> Thanks,
> Dom
> 

-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#15470): https://lists.fd.io/g/vpp-dev/message/15470
Mute This Topic: https://lists.fd.io/mt/71401293/21656
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vpp_echo UDP performance

2020-02-19 Thread dchons via Lists.Fd.Io
Hi Florin,

Thanks for the suggestion. It looks like in my case 
*ip4_local_l4_csum_validate* is being called:

Breakpoint 1, ip4_local_l4_csum_validate (vm=0x7fffb4ecef40, p=0x10026d9980, 
ip=0x10026d9a8e, is_udp=1 '\001',
error=0x7fffb517b1d8 "\016\023", good_tcp_udp=0x7fffb517b19d 
"\177\001\001@\274\027\265\377\177")
at /root/vpp.20.01/src/vnet/ip/ip4_forward.c:1376
1376      flags0 = ip4_tcp_udp_validate_checksum (vm, p);
Missing separate debuginfos, use: debuginfo-install 
keyutils-libs-1.5.8-3.el7.x86_64 libgcc-4.8.5-39.el7.x86_64 
libselinux-2.5-14.1.el7.x86_64 libuuid-2.23.2-61.el7_7.1.x86_64 
numactl-libs-2.0.12-3.el7_7.1.x86_64 pcre-8.32-17.el7.x86_64
(gdb) bt
#0  ip4_local_l4_csum_validate (vm=0x7fffb4ecef40, p=0x10026d9980, 
ip=0x10026d9a8e, is_udp=1 '\001',
error=0x7fffb517b1d8 "\016\023", good_tcp_udp=0x7fffb517b19d 
"\177\001\001@\274\027\265\377\177")
at /root/vpp.20.01/src/vnet/ip/ip4_forward.c:1376
#1  0x76a126eb in ip4_local_check_l4_csum (vm=0x7fffb4ecef40, 
b=0x10026d9980, ih=0x10026d9a8e,
error=0x7fffb517b1d8 "\016\023") at 
/root/vpp.20.01/src/vnet/ip/ip4_forward.c:1416
#2  0x76a1379a in ip4_local_inline (vm=0x7fffb4ecef40, 
node=0x7fffb518ba00, frame=0x7fffb6f99980,
head_of_feature_arc=1) at /root/vpp.20.01/src/vnet/ip/ip4_forward.c:1799
#3  0x76a138ea in ip4_local_node_fn_avx2 (vm=0x7fffb4ecef40, 
node=0x7fffb518ba00, frame=0x7fffb6f99980)
at /root/vpp.20.01/src/vnet/ip/ip4_forward.c:1819
#4  0x763f8078 in dispatch_node (vm=0x7fffb4ecef40, 
node=0x7fffb518ba00, type=VLIB_NODE_TYPE_INTERNAL,
dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7fffb6f99980, 
last_time_stamp=1602712959932124)
at /root/vpp.20.01/src/vlib/main.c:1208
#5  0x763f8839 in dispatch_pending_node (vm=0x7fffb4ecef40, 
pending_frame_index=3,
last_time_stamp=1602712959932124) at /root/vpp.20.01/src/vlib/main.c:1376
#6  0x763fa4d7 in vlib_main_or_worker_loop (vm=0x7fffb4ecef40, 
is_main=0)
at /root/vpp.20.01/src/vlib/main.c:1834
#7  0x763fad42 in vlib_worker_loop (vm=0x7fffb4ecef40) at 
/root/vpp.20.01/src/vlib/main.c:1941
#8  0x76439893 in vlib_worker_thread_fn (arg=0x7fffb37bf400) at 
/root/vpp.20.01/src/vlib/threads.c:1777
#9  0x75878efc in clib_calljmp () at 
/root/vpp.20.01/src/vppinfra/longjmp.S:123
#10 0x7fff9c74cc00 in ?? ()
#11 0x76433e22 in vlib_worker_thread_bootstrap_fn (arg=0x7fffb37bf400)
at /root/vpp.20.01/src/vlib/threads.c:590

static inline void
ip4_local_check_l4_csum (vlib_main_t * vm, vlib_buffer_t * b,
                         ip4_header_t * ih, u8 * error)
{
  u8 is_udp, is_tcp_udp, good_tcp_udp;

  is_udp = ih->protocol == IP_PROTOCOL_UDP;
  is_tcp_udp = is_udp || ih->protocol == IP_PROTOCOL_TCP;

  if (PREDICT_FALSE (ip4_local_need_csum_check (is_tcp_udp, b)))
    /* <== ip4_forward.c line 1416 */
    ip4_local_l4_csum_validate (vm, b, ih, is_udp, error, &good_tcp_udp);
  else
    good_tcp_udp = ip4_local_csum_is_valid (b);

  ASSERT (IP4_ERROR_TCP_CHECKSUM + 1 == IP4_ERROR_UDP_CHECKSUM);
  *error = (is_tcp_udp && !good_tcp_udp
            ? IP4_ERROR_TCP_CHECKSUM + is_udp : *error);
}

Not really sure what to do with this information... any suggestions?

Thanks,
Dom
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#15471): https://lists.fd.io/g/vpp-dev/message/15471
Mute This Topic: https://lists.fd.io/mt/71401293/21656
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vpp_echo UDP performance

2020-02-19 Thread dchons via Lists.Fd.Io
Hi again,

For what it's worth, I added a hack in src/plugins/dpdk/device/init.c and set 
xd->port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_UDP_CKSUM, and now I have:

vpp# sh hardware-interfaces
Name                Idx   Link  Hardware
TenGigabitEthernet5/0/0            1    down  TenGigabitEthernet5/0/0
Link speed: unknown
Ethernet address a0:36:9f:be:0c:b4
Intel 82599
carrier up full duplex mtu 9206
flags: pmd maybe-multiseg tx-offload intel-phdr-cksum rx-ip4-cksum
Devargs:
rx: queues 1 (max 128), desc 4000 (min 32 max 4096 align 8)
tx: queues 6 (max 64), desc 4000 (min 32 max 4096 align 8)
pci: device 8086:154d subsystem 8086:7b11 address :05:00.00 numa 0
max rx packet len: 15872
promiscuous: unicast off all-multicast off
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
macsec-strip vlan-filter vlan-extend jumbo-frame scatter
security keep-crc
rx offload active: ipv4-cksum *udp-cksum* jumbo-frame scatter
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
tcp-tso macsec-insert multi-segs security
tx offload active: udp-cksum tcp-cksum tcp-tso multi-segs
rss avail:         ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex ipv6-tcp
ipv6-udp ipv6-ex ipv6
rss active:        none
tx burst function: ixgbe_xmit_pkts
rx burst function: ixgbe_recv_pkts

The bad news is that after making this change, vpp_echo crashes. I have not had 
a chance to debug this yet, but wanted to point out the potentially missing RX 
offload setting in case it is useful.

Thanks,
Dom
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#15472): https://lists.fd.io/g/vpp-dev/message/15472
Mute This Topic: https://lists.fd.io/mt/71401293/21656
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vpp_echo UDP performance

2020-02-19 Thread Florin Coras
Hi Dom, 

I was about to suggest that you use this [1], but I see you already figured it 
out. 

Let me know what’s wrong with the echo app once you get a chance to debug it.

Regards,
Florin

[1] https://gerrit.fd.io/r/c/vpp/+/25286

> On Feb 19, 2020, at 2:14 PM, dchons via Lists.Fd.Io 
>  wrote:
> 
> Hi again,
> 
> For what it's worth, I added a hack in src/plugins/dpdk/device/init.c and set 
> xd->port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_UDP_CKSUM, and now I have:
> 
> vpp# sh hardware-interfaces
>   Name                Idx   Link  Hardware
> TenGigabitEthernet5/0/0            1    down  TenGigabitEthernet5/0/0
>   Link speed: unknown
>   Ethernet address a0:36:9f:be:0c:b4
>   Intel 82599
> carrier up full duplex mtu 9206
> flags: pmd maybe-multiseg tx-offload intel-phdr-cksum rx-ip4-cksum
> Devargs:
> rx: queues 1 (max 128), desc 4000 (min 32 max 4096 align 8)
> tx: queues 6 (max 64), desc 4000 (min 32 max 4096 align 8)
> pci: device 8086:154d subsystem 8086:7b11 address :05:00.00 numa 0
> max rx packet len: 15872
> promiscuous: unicast off all-multicast off
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
>macsec-strip vlan-filter vlan-extend jumbo-frame 
> scatter
>security keep-crc
> rx offload active: ipv4-cksum udp-cksum jumbo-frame scatter
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
>tcp-tso macsec-insert multi-segs security
> tx offload active: udp-cksum tcp-cksum tcp-tso multi-segs
> rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex ipv6-tcp
>ipv6-udp ipv6-ex ipv6
> rss active:none
> tx burst function: ixgbe_xmit_pkts
> rx burst function: ixgbe_recv_pkts
> 
> The bad news is that after making this change, vpp_echo crashes, have not had 
> a chance to debug this yet but wanted to point out the potentially missing RX 
> offload setting in case it is useful.
> 
> Thanks,
> Dom
> 

-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#15473): https://lists.fd.io/g/vpp-dev/message/15473
Mute This Topic: https://lists.fd.io/mt/71401293/21656
-=-=-=-=-=-=-=-=-=-=-=-