* are you using KVM? If yes, can you verify that your VMs are using vhost_net, 
please? In the KVM process related to the VM there should be a vhost=on 
parameter. If not, modprobe vhost_net.

Yes.

After enabling vhost_net, I can get higher bandwidth now:

One pair: increased from 1.18 Gbits/sec to 2.38 Gbits/sec

Six pairs: increased from 4.25 Gbits/sec to 6.98 Gbits/sec
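
In case it helps anyone else, this is roughly what I used to check and enable it (a sketch, not the exact commands from my shell history):

    # load the vhost_net module on the compute node
    modprobe vhost_net
    # confirm the module is loaded
    lsmod | grep vhost_net
    # check that the qemu-kvm process for the VM carries vhost=on in its -netdev option
    ps -ef | grep qemu | grep vhost=on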



* I saw different throughput with different OS in the VM. Did you try different 
VMs?

No, but I would guess a different OS in the VM might cause a performance difference like 5 Gbits/sec vs. 6 Gbits/sec, not 2 Gbits/sec vs. 6 Gbits/sec.

* what bandwidth do you get among two VMs in the same compute node?

14.7 Gbits/sec

* can you monitor the CPU usage in the compute node and look out for big CPU 
consumer, please?

I'm running on an SNB-EP server, so I have 32 cores (HT enabled) on one compute node.



I only observed the compute node that runs the iperf client (sends packets out).



1.      The average CPU%

Total average CPU% = 6.66 %

Average:     CPU    %usr   %nice    %sys  %iowait    %irq   %soft  %steal  %guest   %idle
Average:     all    1.66    0.00    1.38     0.05    0.00    1.83    0.00    1.74   93.34



Per-core:

The highest CPU% = 21.65%

Average:     CPU    %usr   %nice    %sys  %iowait    %irq   %soft  %steal  %guest   %idle
Average:       0    0.30    0.00    5.29     0.02    0.00   11.29    0.00    4.75   78.35



2.      CPU% timeline: only checking the CPU core with the highest CPU%

02:23:29 PM  CPU    %usr   %nice    %sys  %iowait    %irq   %soft  %steal  %guest    %idle
02:23:29 PM    0    0.00    0.00    2.06     0.00    0.00    4.12    0.00    0.00    93.81
02:23:30 PM    0    0.00    0.00   25.77     0.00    0.00   67.01    0.00    0.00     7.22
02:23:31 PM    0    0.00    0.00   26.04     0.00    0.00   59.38    0.00    0.00    14.58
02:23:32 PM    0    0.00    0.00   23.00     0.00    0.00   52.00    0.00    0.00    25.00
02:23:33 PM    0    0.00    0.00   28.28     0.00    0.00   52.53    0.00    0.00    19.19
02:23:34 PM    0    0.00    0.00   10.89     0.00    0.00   19.80    0.00    0.00    69.31
02:23:35 PM    0    1.00    0.00    0.00     0.00    0.00    0.00    0.00    0.00    99.00
02:23:36 PM    0    0.00    0.00    0.00     0.00    0.00    0.00    0.00    0.00   100.00
02:23:37 PM    0    0.00    0.00    0.00     0.00    0.00    0.00    0.00    1.00    99.00
02:23:38 PM    0    0.99    0.00    0.00     0.00    0.00    0.00    0.00    0.00    99.01
02:23:39 PM    0    0.00    0.00    1.00     0.00    0.00    0.00    0.00    0.00    99.00
02:23:40 PM    0    1.01    0.00    0.00     0.00    0.00    0.00    0.00    0.00    98.99
02:23:41 PM    0    2.06    0.00   15.46     0.00    0.00   36.08    0.00    0.00    46.39
02:23:42 PM    0    1.98    0.00   12.87     0.00    0.00   34.65    0.00    0.00    50.50
02:23:43 PM    0    0.00    0.00    0.00     0.00    0.00    0.00    0.00   15.15    84.85
02:23:44 PM    0    0.99    0.00    0.99     0.00    0.00    0.00    0.00    0.00    98.02
02:23:45 PM    0    0.00    0.00    2.00     0.00    0.00    0.00    0.00   19.00    80.00
02:23:46 PM    0    0.99    0.00    0.00     0.00    0.00    0.00    0.00   23.76    75.25
02:23:47 PM    0    0.00    0.00   24.49     0.00    0.00   40.82    0.00    0.00    34.69
02:23:48 PM    0    0.00    0.00   23.96     0.00    0.00   60.42    0.00    0.00    15.62
02:23:49 PM    0    0.00    0.00   23.96     0.00    0.00   60.42    0.00    0.00    15.62
02:23:50 PM    0    0.00    0.00    6.12     0.00    0.00   18.37    0.00    0.00    75.51
02:23:51 PM    0    1.00    0.00    0.00     0.00    0.00    0.00    0.00    0.00    99.00
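
(For reference, the per-CPU numbers above are mpstat output; something like the following should reproduce them, with the interval and count just as examples:)

    # per-core utilization, sampled every second for 30 seconds
    mpstat -P ALL 1 30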


From: Luca Giraudo [mailto:lgira...@nicira.com]
Sent: Tuesday, July 30, 2013 1:37 PM
To: Li, Chen
Cc: discuss@openvswitch.org; Gurucharan Shetty
Subject: Re: [ovs-discuss] network bandwidth in Openstack when using OVS+VLAN


Few more things to check:

* are you using KVM? If yes, can you verify that your VMs are using vhost_net, 
please? In the KVM process related to the VM there should be a vhost=on 
parameter. If not, modprobe vhost_net.

* I saw different throughput with different OS in the VM. Did you try different 
VMs?

* what bandwidth do you get among two VMs in the same compute node?

* can you monitor the CPU usage in the compute node and look out for big CPU 
consumer, please?

Thanks,
Luca
On Jul 29, 2013 6:26 PM, "Li, Chen" <chen...@intel.com> wrote:
* Is the VM ethernet driver a para-virtual driver? Para-virtual drivers give a 
good performance boost.
I used the OpenStack default parameters; it is virtio, and I think virtio should give good performance:
    <interface type='bridge'>
      <mac address='fa:16:3e:ca:4a:86'/>
      <source bridge='br-int'/>
      <virtualport type='openvswitch'>
        <parameters interfaceid='3213dbec-f2ea-462f-818b-e07b76a1752c'/>
      </virtualport>
      <target dev='tap3213dbec-f2'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' 
function='0x0'/>
    </interface>


* Is TSO ON in the VM and the Hypervisor?

The VM:
Features for eth0:
rx-checksumming: off [fixed]
tx-checksumming: on
        tx-checksum-ipv4: off [fixed]
        tx-checksum-ip-generic: on
        tx-checksum-ipv6: off [fixed]
        tx-checksum-fcoe-crc: off [fixed]
        tx-checksum-sctp: off [fixed]
scatter-gather: on
        tx-scatter-gather: on
        tx-scatter-gather-fraglist: on
tcp-segmentation-offload: on
        tx-tcp-segmentation: on
        tx-tcp-ecn-segmentation: on
        tx-tcp6-segmentation: on
udp-fragmentation-offload: on
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: off [fixed]
tx-vlan-offload: off [fixed]
ntuple-filters: off [fixed]
receive-hashing: off [fixed]
highdma: on [fixed]
rx-vlan-filter: on [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: on
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off [fixed]
The hypervisor:
Offload parameters for eth4:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on
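
(Both listings above are ethtool -k output; to check or turn TSO on, something along these lines should work, with eth0/eth4 just being the interface names from my setup:)

    # show offload settings
    ethtool -k eth0
    # enable TSO (and GSO) if it is off
    ethtool -K eth0 tso on gso on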

* What throughput do you get while using Linux bridge instead of OVS?
Currently, I don't have a Linux bridge environment.
But I remember that in a virtio test, when I created a bridge by hand and assigned it to an instance, I could always get close to the hardware-limit bandwidth as long as I had enough threads.
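
(By "created a bridge by hand" I mean roughly the following; br0/eth4/tap0 are example names from my setup:)

    # create a Linux bridge and attach the host NIC and the instance's tap device
    brctl addbr br0
    brctl addif br0 eth4
    brctl addif br0 tap0
    ip link set br0 up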

* Are you using tunnels? If you are using a tunnel like GRE, you will see a 
throughput drop.
No, I'm working under Quantum+OVS+VLAN.

Thanks.
-chen

From: Gurucharan Shetty [mailto:shet...@nicira.com]
Sent: Tuesday, July 30, 2013 12:06 AM
To: Li, Chen
Cc: discuss@openvswitch.org
Subject: Re: [ovs-discuss] network bandwidth in Openstack when using OVS+VLAN

There could be multiple reasons for the low throughput. I would probably look 
at the following.

* Is the VM ethernet driver a para-virtual driver? Para-virtual drivers give a 
good performance boost.
* Is TSO ON in the VM and the Hypervisor?
* What throughput do you get while using Linux bridge instead of OVS?
* Are you using tunnels? If you are using a tunnel like GRE, you will see a 
throughput drop.


On Mon, Jul 29, 2013 at 1:48 AM, Li, Chen <chen...@intel.com> wrote:
Hi list,

I'm a new user of OVS.

I installed OpenStack Grizzly, and I am using Quantum + OVS + VLAN for networking.

I have two compute nodes with 10 Gb NICs, and the bandwidth between them is 
about  8.49 Gbits/sec (tested by iperf).

I started one instance at each compute node:
instance-a => compute1
instance-b => compute2
The bandwidth between these two virtual machines is only 1.18 Gbits/sec.

Then I started 6 instances at each compute node:
(instance-a => compute1) ----- iperf ------> (instance-b => compute2)
(instance-c => compute1) ----- iperf ------> (instance-d => compute2)
(instance-e => compute1) ----- iperf ------> (instance-f => compute2)
(instance-g => compute1) ----- iperf ------> (instance-h => compute2)
(instance-i => compute1) ----- iperf ------> (instance-j => compute2)
(instance-k => compute1) ----- iperf ------> (instance-l => compute2)
The total bandwidth is only 4.25 Gbits/sec.
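
(Each pair is a plain iperf test along these lines; the IP address and duration are just examples:)

    # on the receiving instance
    iperf -s
    # on the sending instance, 60-second run
    iperf -c 10.0.0.12 -t 60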


Does anyone know why the performance is this low?

Thanks.
-chen

_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss

