On Jun 17, 2014, at 11:33 AM, Ethan Jackson <et...@nicira.com> wrote:

>> A good reason to offload the ofproto upcall function in polling mode is to 
>> allow a different CPU to do the time-consuming inexact rule matching while 
>> the polling thread maintains fast packet switching. At low data and packet 
>> rates, or on low-rate Ethernet interfaces (1 GbE and lower), this does not 
>> matter; however, at higher packet rates it becomes critical, since the 
>> input queue is easily overrun at 10 GbE rates by even moderate delays, 
>> especially with smaller packet sizes.
> 
> So it turns out that this time-consuming work is actually critical to
> the functioning of the switch.  We've found in our production
> deployments that the number one pain point is actually the number of
> PPS we can shove through the OpenFlow slow path.  This directly
> impacts your connections-per-second number, which is critical for
> certain types of applications.  With this patch, we get a roughly 10x
> improvement in performance through the slow path.  Plus we free up a

10x flow setup rate, nice!

> bunch of threads which we can re-use as pmd threads.  I suspect that,
> in aggregate, we're going to end up with better performance as a
> result of this than if we maintained the status quo.
> 

I guess a reasonable test case here would be a long-lived, high-throughput 
(elephant?) flow interleaved with a port scan in some realistic fashion (?). 
If we can still serve the long-lived, high-throughput flow while processing 
the upcalls in the same thread, we should be good to go. It may be that we 
need to start throttling/scheduling the upcalls when they would start 
disturbing the fast-path flows; a rough sketch of what I mean follows below.
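
For concreteness, here is a minimal, self-contained sketch of the kind of 
throttling I have in mind: cap the slow-path work done per poll iteration so 
the fast path keeps getting serviced. All names and the budget value are made 
up for illustration; this is not existing OVS code.

/* Hypothetical sketch, not OVS code: a polling loop that spends at most a
 * fixed budget on slow-path (upcall) work per iteration, leaving the rest
 * queued so packet forwarding is not stalled for long. */
#include <stdio.h>

#define UPCALL_BUDGET_PER_ITER 8

struct upcall_queue {
    int pending;                /* stand-in for a real queue of upcalls */
};

static void
process_fast_path(void)
{
    /* Megaflow-cache hits would be forwarded here. */
}

static void
handle_one_upcall(struct upcall_queue *q)
{
    /* Slow-path classification of one queued packet would happen here. */
    q->pending--;
}

static void
poll_iteration(struct upcall_queue *q)
{
    int handled = 0;

    process_fast_path();

    /* Spend at most UPCALL_BUDGET_PER_ITER on upcalls; the remainder
     * waits for the next iteration instead of delaying the fast path. */
    while (handled < UPCALL_BUDGET_PER_ITER && q->pending > 0) {
        handle_one_upcall(q);
        handled++;
    }
}

int
main(void)
{
    struct upcall_queue q = { .pending = 20 };

    while (q.pending > 0) {
        poll_iteration(&q);
        printf("upcalls still pending: %d\n", q.pending);
    }
    return 0;
}

Whether the budget should be a packet count or a time bound is an open 
question; the point is only that slow-path work is bounded per iteration.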

(snip)

> 
>> Directly calling the ofproto upcall functions, before the inexact rule 
>> lookup code has been highly optimized for lookup speed with a large number 
>> of rules, would make it more difficult to get the DPDK packet processing 
>> rate up, and also to test and verify fast packet processing rates.
> 
> I'm not sure I followed your point here.
> 

If upcall processing and/or megaflow lookup takes too long in any of the 
fast packet processing test cases, we might start seeing lower throughput 
than before? 
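
To put rough numbers on that concern (back-of-envelope only, assuming 
minimum-size frames and typical RX ring sizes): 10 GbE with 64-byte packets 
is about 14.88 Mpps (10^10 bits/s divided by the 672 bits a minimum-size 
frame occupies on the wire, including preamble and inter-frame gap), i.e. 
roughly 67 ns per packet. An upcall that keeps the polling thread busy for, 
say, 100 us therefore costs on the order of 1500 packet slots, so a few such 
stalls back to back can overflow a 512-4096 entry RX ring before the fast 
path gets serviced again.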

  Jarno
