Hi Jesse:

The VMs are statically mapped to CPU cores: 16 cores total, 2 cores per VM, and each
VM runs 2 threads of iperf (client or server; this is a back-to-back
connection).
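
The pinning itself is done through libvirt's vcpupin; the commands below are just
an illustration of the scheme (the domain name and core numbers are not my actual
ones):

  # pin vCPU 0 and 1 of one guest to two dedicated host cores
  virsh vcpupin vm1 0 2
  virsh vcpupin vm1 1 3

  # confirm the current pinning for that guest
  virsh vcpupin vm1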

All the CPUs are running at 40~45%. Hyperthreading is disabled, each VM has
2 GB of RAM, and the hypervisor has 32 GB of RAM. Libvirt actually caches the swap
partition in memory as much as possible, so pretty much all 32 GB of RAM on the
hypervisor is in use when all 8 VMs are up.
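
In case the exact per-core breakdown is useful, it can be sampled with, e.g.:

  # one-second samples of every core's utilisation (mpstat is from the sysstat package)
  mpstat -P ALL 1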

Many thanks,
Morgan Yang

From: Jesse Gross [mailto:je...@nicira.com]
Sent: Wednesday, October 23, 2013 6:36 PM
To: Morgan Yang
Cc: discuss@openvswitch.org
Subject: Re: [ovs-discuss] Question in regard to packet errors with OVS

On Wed, Oct 23, 2013 at 6:07 PM, Morgan Yang <morgan.y...@radisys.com> wrote:
Hi All:

I am doing testing with Mellanox’s CX3 40G NIC.

The setup without OVS runs at about 37~38 Gbps across multiple threads of iperf,
and no errors show up on the Ethernet interfaces:

Host A Eth4(80.1.1.1) <-> Host B Eth4(80.2.1.1)
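
The baseline is plain multi-threaded iperf between the two hosts, along these
lines (the thread count and duration here are illustrative, not the exact options
I used):

  # on Host B (80.2.1.1)
  iperf -s

  # on Host A: several parallel streams for 60 seconds
  iperf -c 80.2.1.1 -P 8 -t 60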

After I introduce OVS (running 2.0.0), I still get 37~38 Gbps, but I begin to
see errors and overruns on the physical NIC:

OVSFA(80.1.1.1) <-> Host A Eth4 <-> Host B Eth4 <-> OVSFB(80.2.1.1)
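
The OVS setup is a plain bridge with eth4 as the uplink and the IP address moved
onto the bridge's internal port; roughly the following on Host A (Host B mirrors
it with 80.2.1.1, and the exact commands are from memory):

  ovs-vsctl add-br ovsfa
  ovs-vsctl add-port ovsfa eth4
  ifconfig eth4 0.0.0.0
  ifconfig ovsfa 80.1.1.1 netmask 255.0.0.0 up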

root:~/mlnx# ovs-ofctl dump-ports ovsfa
OFPST_PORT reply (xid=0x2): 2 ports
  port  1: rx pkts=7262324, bytes=451601757688, drop=0, errs=363, frame=0, 
over=470, crc=0
           tx pkts=5624590, bytes=303750392, drop=0, errs=0, coll=0
  port LOCAL: rx pkts=5624537, bytes=303747170, drop=0, errs=0, frame=0, 
over=0, crc=0
           tx pkts=7262379, bytes=451601761078, drop=0, errs=0, coll=0
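
To see whether the errs/over counters on port 1 are still climbing while traffic
runs, the stats can simply be polled, e.g.:

  # refresh every second and highlight the counters that changed
  watch -d -n 1 ovs-ofctl dump-ports ovsfa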

The errors are only showing up on the physical port, not on the LOCAL switch port:

eth4      Link encap:Ethernet  HWaddr 00:00:50:A4:5F:AC
          inet6 addr: fe80::200:50ff:fea4:5fac/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:309077941 errors:363 dropped:0 overruns:470 frame:470
          TX packets:5624597 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:467899917070 (435.7 GiB)  TX bytes:337492360 (321.8 MiB)

ovsfa Link encap:Ethernet  HWaddr 00:00:50:A4:5F:AC
          inet addr:80.1.1.1  Bcast:80.255.255.255  Mask:255.0.0.0
          inet6 addr: fe80::1452:ceff:fe82:f794/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:7262782 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5624537 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:451601898904 (420.5 GiB)  TX bytes:303747170 (289.6 MiB)
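
Since the overruns are only on eth4, the driver-level counters and the current RX
ring size seem worth a look; ethtool shows both (the exact counter names depend on
the mlx4_en driver, so the grep pattern below is just a guess):

  # NIC/driver statistics related to drops and overruns
  ethtool -S eth4 | grep -iE 'err|drop|over'

  # current and maximum RX/TX ring sizes
  ethtool -g eth4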

This gets much worse when I bring up multiple VMs and run an aggregated
performance test.

When I bring up 8 VMs, each VM has a vnet interface attached to OVSFA. When I run
an aggregated iperf test across them, I only get an aggregated throughput of
20~21 Gbps.
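
The aggregated run is just parallel iperf sessions, one per VM, started at the
same time; something along these lines (the addresses are placeholders, not my
real VM addressing):

  # in each server-side VM
  iperf -s

  # from each client-side VM, matching the 2 iperf threads per VM
  iperf -c <peer-vm-address> -P 2 -t 60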

Examining the port stats, it looks like the physical link into the OVSFA bridge is
now the bottleneck (see the port 1 stats):

OFPST_PORT reply (xid=0x2): 10 ports
  port 14: rx pkts=993698, bytes=54019132, drop=0, errs=0, frame=0, over=0, 
crc=0
           tx pkts=1001557, bytes=26452089418, drop=0, errs=0, coll=0
  port 16: rx pkts=1018316, bytes=55415244, drop=0, errs=0, frame=0, over=0, 
crc=0
           tx pkts=1024662, bytes=24373022960, drop=0, errs=0, coll=0
  port 10: rx pkts=467571, bytes=25466650, drop=0, errs=0, frame=0, over=0, 
crc=0
           tx pkts=470419, bytes=11241911122, drop=0, errs=0, coll=0
  port  1: rx pkts=14665755, bytes=623148271848, drop=0, errs=85734, frame=0, 
over=76354, crc=1
           tx pkts=12988799, bytes=704654302, drop=0, errs=0, coll=0
  port 17: rx pkts=915354, bytes=49815016, drop=0, errs=0, frame=0, over=0, 
crc=0
           tx pkts=919841, bytes=21658413792, drop=0, errs=0, coll=0
  port 12: rx pkts=1338285, bytes=72922342, drop=0, errs=0, frame=0, over=0, 
crc=0
           tx pkts=1343566, bytes=27474695496, drop=0, errs=0, coll=0
  port 13: rx pkts=1338687, bytes=72961466, drop=0, errs=0, frame=0, over=0, 
crc=0
           tx pkts=1343787, bytes=27303337788, drop=0, errs=0, coll=0
  port 11: rx pkts=277265, bytes=15097566, drop=0, errs=0, frame=0, over=0, 
crc=0
           tx pkts=278658, bytes=6790283224, drop=0, errs=0, coll=0
  port 15: rx pkts=1015033, bytes=55206482, drop=0, errs=0, frame=0, over=0, 
crc=0
           tx pkts=1021695, bytes=26252916396, drop=0, errs=0, coll=0
  port LOCAL: rx pkts=5624537, bytes=303747170, drop=0, errs=0, frame=0, 
over=0, crc=0
           tx pkts=7262947, bytes=451601929606, drop=0, errs=0, coll=0

Is there anything I can do to reduce the errors and overruns? It seems the queue
between eth4 and the OVS interface gets congested, and OVS can't switch traffic
between this many VMs at very high speed.
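
For reference, the queue-related knobs I am aware of are the NIC RX ring size and
the kernel's per-CPU input backlog; for example (the sizes below are arbitrary,
and whether they actually help here is part of my question):

  # grow the RX ring on the physical NIC (check the hardware maximum with ethtool -g first)
  ethtool -G eth4 rx 8192

  # grow the per-CPU input backlog queue
  sysctl -w net.core.netdev_max_backlog=250000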

What is the CPU usage like?