I'm not the best person to answer the specific technical problems we had; I'm 
sure others from Rackspace Private Cloud will chime in on that. At a high level, 
though, we did have customers see issues with high numbers of flows, including 
hitting the 64k flow limit, at which point the switch would simply fall over. 
We were also having trouble in general with kernel/OVS version mismatches, 
regressions between OVS versions, and kernel panics.
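For anyone wanting to check whether a hypervisor is approaching that kind of flow pressure, here is a minimal diagnostic sketch. It assumes the Open vSwitch userspace tools are installed; the actual ceiling depends on the OVS version and its configured flow limit, so treat the 64k figure as version-specific:

```shell
#!/bin/sh
# Hedged sketch: count the flows currently installed in the kernel
# datapath. A count creeping toward the configured flow limit is a
# warning sign of the fall-over behavior described above.
if command -v ovs-dpctl >/dev/null 2>&1; then
    flows=$(ovs-dpctl dump-flows 2>/dev/null | wc -l)
    echo "active datapath flows: ${flows}"
else
    # Graceful fallback so the sketch still runs on hosts without OVS.
    echo "ovs-dpctl not found; install the openvswitch tools first"
fi
```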

We moved to the Linux bridge mechanism driver under the ML2 Neutron plugin, and 
it appears to be much more stable so far. From a feature perspective we haven't 
seen a huge drop in terms of what we actually use.
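For reference, pointing ML2 at the Linux bridge mechanism looks roughly like the following sketch of /etc/neutron/plugins/ml2/ml2_conf.ini. The option names are the standard ML2 / linuxbridge agent ones, but the specific values (VLAN range, interface name) are illustrative assumptions, not our production configuration:

```ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge

[ml2_type_vlan]
# Illustrative provider network and VLAN range, not a deployment value.
network_vlan_ranges = physnet1:100:200

[linux_bridge]
# Maps the provider network name to a physical NIC (assumed name).
physical_interface_mappings = physnet1:eth1
```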

Jason 



On Sep 27, 2014, at 11:41, Tim Bell <tim.b...@cern.ch> wrote:

>> -----Original Message-----
>> From: Dennis Jacobfeuerborn [mailto:denni...@conversis.de]
>> Sent: 27 September 2014 13:26
>> To: openstack@lists.openstack.org
>> Subject: Re: [Openstack] Rackspace abandons Open vSwitch ?
>> 
>>> On 27.09.2014 06:37, Jason Kölker wrote:
>>> On Sat, Sep 27, 2014 at 3:50 AM, Raghu Vadapalli <rvatspac...@gmail.com>
>> wrote:
>>>> As per this news article listed below, Rackspace is abandoning Open 
>>>> vSwitch.
>>>> Is this where everyone else going  in general ?
>>> 
>>> That conclusion is inaccurate. The entirety of the public cloud runs
>>> Open vSwitch for both public/ServiceNet connectivity as well as
>>> isolated tenant network features. The article is referring to the
>>> private cloud distribution no longer choosing to use the Neutron
>>> Open vSwitch plugin
>>> (https://github.com/openstack/neutron/tree/master/neutron/plugins/openvswitch)
>>> as it is being deprecated. The ML2 plugin replaces this and can use a
>>> variety of mechanisms, including Open vSwitch.
>>> 
>>> The article's conclusion that Open vSwitch is not ready for production
>>> and high-volume workloads is ludicrous. Versions 2.0+ perform very
>>> well, with multithreading in the vswitchd process and megaflows in the
>>> datapath. However, it is important to point out that datapath
>>> performance is very much related to the flows programmed: a poorly
>>> written flow set will result in bad performance. Tuning the flows so
>>> they are amenable to megaflow aggregation is the key to high throughput.
>> 
>> That's the theory, though, and the article seems to describe practical
>> problems Rackspace ran into with OVS, so it would have been nice to learn
>> what specifically went wrong.
>> 
>> What are the alternatives, though? As far as I know the regular Linux
>> bridge lacks most of the features of OVS, and these are the only two
>> options I've played with so far. Is there a third alternative out there
>> that they've switched to?
> 
> I am also not clear on what the best option is for a scalable open-source
> Neutron plug-in. The surveys regularly report that Open vSwitch is the
> most commonly used, but it is not clear whether it is the best one for
> production at scale. Are there any references for real-life usage of OVS
> at the level of 1000s of hypervisors?
> 
> Tim
> 
>> Regards,
>>  Dennis
>> 
>> 
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to     : openstack@lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> 
