Hi all,
in our test scenario we observe comparatively low packet forwarding
performance on a DPDK (2.0) enabled OVS (2.4) switch.
The configuration is a single bridge (br0) with two DPDK-enabled
interfaces (10 Gbit/s each), as per the Install.DPDK.md example.
We *thoroughly* followed all performance-related optimization steps from
that document.
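For reference, the bridge setup was along these lines (a sketch following
the Install.DPDK.md example with OVS 2.4-era syntax; port names dpdk0/dpdk1
map to the DPDK port indices):

```shell
# Userspace datapath bridge with two DPDK physical ports (sketch).
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk
```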

Nonetheless, the best rate we could get is ca. 2 M packets/s on each
interface, which corresponds to about 1.3 Gbit/s for 64-byte packets.
Our comparison is against the "l2fwd" example DPDK application. Using
"l2fwd" we can easily achieve 13 M packets/s in the same test
configuration. The only indicator of something going wrong is a high
"errs" count in the output of "ovs-ofctl dump-ports".
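As a sanity check on those numbers (my own arithmetic, not from any OVS
doc): on the wire each 64-byte frame also costs 7 B preamble + 1 B SFD +
12 B inter-frame gap, i.e. 84 bytes per frame, so:

```shell
# 2 Mpps of 64B frames, including the 20B/frame preamble+SFD+IFG overhead:
awk 'BEGIN { printf "%.3f Gbit/s\n", 2e6 * (64 + 20) * 8 / 1e9 }'
# -> 1.344 Gbit/s (matches the ~1.3 Gbit/s above)

# Theoretical 10GbE line rate for 64B frames:
awk 'BEGIN { printf "%.0f pps\n", 10e9 / ((64 + 20) * 8) }'
# -> 14880952 pps (~14.88 Mpps; l2fwd's 13 Mpps is close to line rate)
```

So l2fwd runs near line rate while OVS sits at roughly 13% of it, which
is why the gap looks like a configuration problem rather than a NIC limit.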

In our first runs (using 2 pmd threads) we noticed high pmd thread
utilization as reported by the "ovs-appctl dpif-netdev/pmd-stats-show"
command. We made sure to clear the statistics after every observation to
get an accurate report for the next interval. To mitigate this we
allowed DPDK to use up to 8 cores on a 12-core (24 HT threads) CPU. We
also set "other_config:n-dpdk-rxqs=20". Now we observe *idle* pmd
threads, but the packet forwarding performance is still limited to the
values above.
We also experimented with the amount of hugepage memory.
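Concretely, the tuning steps were along these lines (OVS 2.4-era
other_config knobs; the mask 0x1fe, covering cores 1-8, is an assumption
about our core layout, and the 10 s interval is just an example):

```shell
# Pin pmd threads to 8 dedicated cores and spread load over 20 rx queues.
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=1fe
ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=20

# Reset the counters, wait one interval, then read a fresh report.
ovs-appctl dpif-netdev/pmd-stats-clear
sleep 10
ovs-appctl dpif-netdev/pmd-stats-show
```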

Do you know of any *recent* reports on OVS DPDK performance? I found
some papers on the old dpdk-vswitch version which claim much higher
packet rates than I observed.

What further debugging steps can I take? Below I include the output of
"ovs-ofctl dump-ports br0" and "ovs-ofctl show br0".
Note that the output of "ovs-ofctl show" says "Max Speed 1000" despite
the 10 Gbit interfaces attached. I am not sure whether this is an error.

> $ovs-ofctl dump-ports br0
> OFPST_PORT reply (xid=0x2): 3 ports
>  port  2: rx pkts=5, bytes=925, drop=0, errs=0, frame=0, over=0, crc=0
>           tx pkts=47128219, bytes=3016206016, drop=0, errs=0, coll=0
>  port  1: rx pkts=47130239, bytes=3016335901, drop=0, errs=84435109,
> frame=0, over=0, crc=0
>           tx pkts=0, bytes=0, drop=0, errs=0, coll=0
>  port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
>           tx pkts=47128251, bytes=2827695060, drop=47128251, errs=0,
> coll=0

> $ovs-ofctl show  br0
> OFPT_FEATURES_REPLY (xid=0x2): dpid:00005cb9018f16e0
> n_tables:254, n_buffers:256
> capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
> actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan
> mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src
> mod_tp_dst
> 1(dpdk0): addr:5c:b9:01:xx:xx:xx
>     config:     0
>     state:      0
>     current:    10GB-FD
>     supported:  100MB-FD 1GB-HD 1GB-FD FIBER AUTO_NEG AUTO_PAUSE
> AUTO_PAUSE_ASYM
>     speed: 10000 Mbps now, 1000 Mbps max
> 2(dpdk1): addr:5c:b9:01:xx:xx:xx
>     config:     0
>     state:      0
>     current:    10GB-FD
>     supported:  100MB-FD 1GB-HD 1GB-FD FIBER AUTO_NEG AUTO_PAUSE
> AUTO_PAUSE_ASYM
>     speed: 10000 Mbps now, 1000 Mbps max
> LOCAL(br0): addr:5c:b9:01:xx:xx:xx
>     config:     PORT_DOWN
>     state:      LINK_DOWN
>     current:    10MB-FD COPPER
>     speed: 10 Mbps now, 0 Mbps max
> OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
