On 1/22/14 4:10 PM, "Ben Pfaff" <b...@nicira.com> wrote:

>On Wed, Jan 22, 2014 at 09:04:48PM +0000, McGarvey, Kevin wrote:
>> 
>> 
>> On 1/22/14 3:23 PM, "Ben Pfaff" <b...@nicira.com> wrote:
>> 
>> >On Wed, Jan 22, 2014 at 08:17:05PM +0000, McGarvey, Kevin wrote:
>> >> 
>> >> 
>> >> On 1/22/14 12:44 PM, "Ben Pfaff" <b...@nicira.com> wrote:
>> >> 
>> >> >On Wed, Jan 22, 2014 at 09:39:14AM -0800, Ben Pfaff wrote:
>> >> >> On Wed, Jan 22, 2014 at 05:35:40PM +0000, McGarvey, Kevin wrote:
>> >> >> > 
>> >> >> > 
>> >> >> > On 1/21/14 6:17 PM, "Ben Pfaff" <b...@nicira.com> wrote:
>> >> >> > >I'd expect a dramatic drop in CPU consumption in that case.
>> >> >> > >There are a few special cases where the upgrade wouldn't help.
>> >> >> > >One is if in-band control is in use, another is if NetFlow is
>> >> >> > >turned on, a third is if LACP bonds with L4 port based hashing
>> >> >> > >are turned on, and there are probably a few others that don't
>> >> >> > >come to mind immediately.
>> >> >> > 
>> >> >> > I plan to rerun the test to rule out some mistake on my part.
>> >> >> > 
>> >> >> > Could you provide more information about the nature of the
>> >> >> > change made in 1.11 that improves performance for this type of
>> >> >> > traffic?  Is the kernel module able to forward UDP DNS packets
>> >> >> > without sending them to userspace, or was it an optimization of
>> >> >> > the userspace processing?  What roughly is the level of
>> >> >> > performance I should see?
>> >> >> 
>> >> >> In 1.11 and later, for simple OpenFlow tables (I don't think you
>> >> >> mentioned whether you are using a controller or which one), Open
>> >> >> vSwitch can set up only a single kernel flow that covers many
>> >> >> possible flows, for example all possible UDP destination ports,
>> >> >> rather than setting up an individual kernel flow for each UDP
>> >> >> packet.  When that works, it eliminates most of the
>> >> >> kernel/userspace traffic, improving performance.  Version 2.0 is
>> >> >> better at analyzing OpenFlow flow tables to see when this is
>> >> >> possible, so it can better take advantage of the ability.
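>> >> >> 
>> >> >> As a rough, made-up illustration (the exact notation varies by
>> >> >> version): 10,000 distinct DNS queries per second used to mean
>> >> >> 10,000 exact-match kernel flows differing only in the UDP source
>> >> >> port, e.g. udp(src=50001,dst=53), udp(src=50002,dst=53), and so
>> >> >> on, each costing a trip to userspace to set up.  A single
>> >> >> wildcarded flow that does not match on the UDP ports at all can
>> >> >> cover every one of those packets.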
>> >> >
>> >> >I see that I didn't answer your question about performance.
>> >> >
>> >> >When this optimization kicks in fully, I guess that the performance
>> >> >should be about the same as for traffic with long flows (like the
>> >> >netperf TCP_STREAM test, for example) in terms of packets per
>> >> >second.
>> >> 
>> >> Thanks.  This is encouraging.  The only question is why the
>> >> optimization isn't kicking in.
>> >> 
>> >> 
>> >> I repeated the test, and under a load of 10K DNS requests/responses
>> >> per second, ovs-vswitchd is using 82% of a core.
>> >> 
>> >> I wasn't sure whether in-band control was on or off by default, so I
>> >> disabled it with the command below and restarted openvswitch, but
>> >> the CPU consumption didn't change:
>> >> 
>> >> ovs-vsctl set bridge <bridge> other-config:disable-in-band=true
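>> >> 
>> >> (If I'm reading the ovs-vsctl manpage right, the setting can be
>> >> verified afterward with 'ovs-vsctl get bridge <bridge>
>> >> other-config', which should show disable-in-band=true.)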
>> >> 
>> >> I did not set up the configuration, but as far as I can tell
>> >> NetFlow is not turned on.  The output of
>> >> 'ovsdb-tool show-log | grep -i netflow' is empty.
>> >> 
>> >> There are no bonded interfaces.  The 2 NICs used for DNS traffic are
>> >> associated with separate bridges.
>> >> 
>> >> We are not using a controller.
>> >> 
>> >> In your response you mentioned that for simple OpenFlow tables Open
>> >> vSwitch can set up a single kernel flow that covers many possible
>> >> flows.  I think this is exactly what I need.  Do I need to add a
>> >> flow using ovs-ofctl?
>> >
>> >No.  With the settings you describe, it should kick in automatically.
>> >
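>> >To double-check NetFlow, 'ovs-vsctl list netflow' prints any NetFlow
>> >rows in the database; if it prints nothing, NetFlow is off.
>> >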
>> >Here is an experiment that might help.  Take one of the flows that
>> >"ovs-dpctl dump-flows" prints, then feed that flow back into
>> >"ovs-appctl ofproto/trace", and show us the results.  (You might have
>> >to spend a few minutes reading the ovs-vswitchd manpage to learn the
>> >ofproto/trace syntax, if you don't already know it.)
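>> >
>> >Roughly, it goes like this, assuming the usual single "ovs-system"
>> >datapath (yours could be named differently):
>> >
>> >  ovs-dpctl dump-flows | head -1
>> >  ovs-appctl ofproto/trace ovs-system "<flow key copied from the
>> >      dump-flows output, minus the packet/byte counters and actions>"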
>> 
>> Below is the ofproto/trace output for an inbound request to bridge
>> brsvr2.  One more piece of information is that the packets are going
>> through a load balancer.
>
>It looks very much to me like you are using an OVS kernel module that
>is too old to support this feature.  Are you using the kernel module
>that came with OVS 1.11, or a kernel module that came with your kernel
>(which kernel version), or some other module?  ("dmesg|grep Open" can
>help find out.)

Here's the dmesg output:

dmesg|grep Open
ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
openvswitch: Open vSwitch switching datapath

The dmesg output didn't seem very informative, so I ran modinfo:

modinfo openvswitch
filename:       /lib/modules/2.6.32-358.123.4.openstack.el6.x86_64/kernel/net/openvswitch/openvswitch.ko
license:        GPL
description:    Open vSwitch switching datapath
srcversion:     19E48B3ED642482269914B5
depends:        vxlan
vermagic:       2.6.32-358.123.4.openstack.el6.x86_64 SMP mod_unload modversions
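
(Two things stand out to me here: there is no 'version:' line, which a
module built from the OVS source tree would print, and the filename is
under kernel/net/openvswitch/, the in-tree location.  So this looks
like the module that shipped with the kernel.)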


The OVS kernel module came with the kernel, which is shown below.  I
upgraded to this kernel on the recommendation of one of our engineers
who works a lot with OpenStack.

2.6.32-358.123.4.openstack.el6.x86_64 #1 SMP Wed Oct 30 13:52:57 EDT 2013
x86_64 x86_64 x86_64 GNU/Linux
