Hi Thiago,

For the VIF error: you will need to change qemu.conf as described here:
http://openvswitch.org/openstack/documentation/
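The linked page has the exact qemu.conf change; as a hedged sketch from memory (verify every line against that document), the usual edit on Ubuntu is in /etc/libvirt/qemu.conf, so that the libvirt-launched qemu process keeps the privileges needed to plug its tap device into the OVS bridge:

```
# /etc/libvirt/qemu.conf - commonly cited settings for
# LibvirtOpenVswitchDriver VIF plugging (an assumption here;
# confirm against the openvswitch.org page above).
user = "root"
group = "root"
clear_emulator_capabilities = 0
cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
]
```

Restart libvirt afterwards (e.g. 'service libvirt-bin restart' on Ubuntu 12.04) so the change takes effect.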

Re, Darragh.




On Friday, 25 October 2013, 15:14, Martinx - ジェームズ <thiagocmarti...@gmail.com> 
wrote:
 
Hi Darragh,
>
>
>Yes, Instances are getting MTU 1400.
>
>
>I'm using LibvirtHybridOVSBridgeDriver on my Compute Nodes. I'll check Bug 
>1223267 right now! 
>
>
>
>
>The LibvirtOpenVswitchDriver doesn't work; see:
>
>
>http://paste.openstack.org/show/49709/
>
>
>
>http://paste.openstack.org/show/49710/
>
>
>
>
>
>My NICs are "RTL8111/8168/8411 PCI Express Gigabit Ethernet", and the 
>hypervisors' motherboards are MSI 890FXA-GD70.
>
>
>The command "ethtool -K eth1 gro off" did not have any effect on the 
>communication between instances on different hypervisors; it is still poor, 
>around 248 Mbit/s, while the physical path the GRE tunnel is built over 
>reaches 1 Gbit/s.
>
>
>My Linux version is "Linux hypervisor-1 3.8.0-32-generic #47~precise1-Ubuntu", 
>with the same kernel on the Network Node and the other nodes too (Ubuntu 
>12.04.3 installed from scratch for this Havana deployment).
>
>
>The only difference I can see right now between my two hypervisors is that 
>my second one is just a spare machine with a slow CPU, but I don't think that 
>has a negative impact on network throughput, since I have only 1 Instance 
>running on it (plus a qemu-nbd process eating 90% of its CPU). I'll replace 
>this CPU tomorrow and redo these tests, but I don't think that this is the 
>source of my problem. The motherboards of the two hypervisors are identical, 
>with one 3Com (managed) switch connecting the two.
>
>
>Thanks!
>Thiago
>
>
>
>On 25 October 2013 07:15, Darragh O'Reilly <dara2002-openst...@yahoo.com> 
>wrote:
>
>Hi Thiago,
>>
>>you have configured DHCP to push out an MTU of 1400. Can you confirm that the 
>>1400 MTU is actually reaching the instances by running 'ip link' on 
>>them?
>>
>>There is an open problem where the veth pair used to connect the OVS and Linux 
>>bridges causes a performance drop on some kernels - 
>>https://bugs.launchpad.net/nova-project/+bug/1223267 .  If you are using the 
>>LibvirtHybridOVSBridgeDriver VIF driver, can you try changing to 
>>LibvirtOpenVswitchDriver and repeating the iperf test between instances on 
>>different compute nodes?
>>
>>What NICs (maker+model) are you using? You could try disabling any off-load 
>>functionality: 'ethtool -k <iface-used-for-gre>' lists the current settings, 
>>and 'ethtool -K' changes them.
>>
>>What kernel are you using: 'uname -a'?
>>
>>Re, Darragh.
>>
>>
>>> Hi Daniel,
>>
>>>
>>> I followed that page; my Instances' MTU is lowered by the DHCP agent, but
>>> the result is the same: poor network performance (both between Instances
>>> and when trying to reach the Internet).
>>>
>>> No matter if I use "dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf" +
>>> "dhcp-option-force=26,1400" for my Neutron DHCP agent, or not (i.e. MTU =
>>> 1500), the result is almost the same.
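For context on where the 1400 comes from: a back-of-the-envelope sketch of the GRE overhead on a 1500-byte path (assuming IPv4 GRE with a 4-byte tunnel key, as OVS tunnels typically use) shows the real ceiling is around 1458, so 1400 is simply a conservative choice:

```shell
# GRE-over-IPv4 overhead arithmetic for a 1500-byte physical MTU.
PHYS_MTU=1500
OUTER_IP=20     # outer IPv4 header
GRE_HDR=8       # GRE header with a 4-byte key (plain GRE is 4 bytes)
INNER_ETH=14    # inner Ethernet frame header carried in the tunnel
MAX_INNER=$((PHYS_MTU - OUTER_IP - GRE_HDR - INNER_ETH))
echo "max inner MTU: $MAX_INNER"    # 1458 - so 1400 leaves headroom
```

If performance is the same at MTU 1400 and 1500, fragmentation is probably not the bottleneck here.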
>>>
>>> I'll try VXLAN (or just VLANs) this weekend to see if I can get better
>>> results...
>>>
>>> Thanks!
>>> Thiago
>>
>>
>>_______________________________________________
>>Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>Post to     : openstack@lists.openstack.org
>>Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>
>
>