On Feb 12, 2013, at 1:54, ext Ben Pfaff wrote:

> On Mon, Feb 11, 2013 at 04:46:17PM +0200, Jarno Rajahalme wrote:
>> Take ofproto-dpif upcall recv pooling down to the system call interface.
>> 
>> Signed-off-by: Jarno Rajahalme <jarno.rajaha...@nsn.com>
> 
> I tried this out with my test case.  I found that it yields a small
> performance loss of about 3% with flow tables that just contain a
> "normal" action and about the same with complicated flow tables that
> contain multiple levels of resubmit.
> 
> I'm surprised that it produces such dramatically better results for
> your test case.

So was I. I expected some benefit only after patch 02/11 (recvmmsg()).
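
For anyone unfamiliar with it, recvmmsg() lets one system call return a whole 
batch of datagrams, which is where the batching saving should come from. A 
minimal sketch of the idea on a generic datagram socket (this is not the patch 
code; MAX_BATCH and BUF_SIZE are numbers I made up for the example):

    /* Sketch only: receive up to MAX_BATCH datagrams with one syscall. */
    #define _GNU_SOURCE
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <string.h>

    #define MAX_BATCH 50
    #define BUF_SIZE  2048

    static int
    recv_batch(int sock)
    {
        static char bufs[MAX_BATCH][BUF_SIZE];
        struct mmsghdr msgs[MAX_BATCH];
        struct iovec iovs[MAX_BATCH];
        int i;

        memset(msgs, 0, sizeof msgs);
        for (i = 0; i < MAX_BATCH; i++) {
            iovs[i].iov_base = bufs[i];
            iovs[i].iov_len = BUF_SIZE;
            msgs[i].msg_hdr.msg_iov = &iovs[i];
            msgs[i].msg_hdr.msg_iovlen = 1;
        }

        /* Returns the number of datagrams received (msgs[i].msg_len gives
         * each length), or -1 with EAGAIN when nothing is pending. */
        return recvmmsg(sock, msgs, MAX_BATCH, MSG_DONTWAIT, NULL);
    }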

> 
> Our test cases are quite different, though.  Yours has a constant rate
> at the source, and measures the fraction of packets that makes it
> through.  Mine essentially never drops a packet because it only sends
> a new packet when the reply to a previous one has been received.  I am
> not certain which is a better model for actual network behavior.  Do
> you have any thoughts?


I designed my test case to show the difference in throughput on the upcall 
processing path under heavy load. In that case any CPU savings would show up as 
increased throughput. Your test case is probably more sensitive to latency, 
which I did not specifically look at.

Based on your results I would guess that only a few packets are coming up at a 
time, so the small set-up cost for batching is not amortized. In that case a 
small latency increase could show up in your test. One way to check this is to 
temporarily reduce the batch size (in ofproto-dpif) and see whether that makes 
any difference.
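
To illustrate the amortization point, the kind of loop I have in mind looks 
roughly like the sketch below (this is not the actual ofproto-dpif code; 
'struct upcall', read_upcall() and handle_upcall() are stand-ins). The batch 
set-up cost is paid once per pass no matter how many upcalls actually arrive, 
so with only one or two packets pending it is mostly overhead:

    #define BATCH_SIZE 50   /* Try 1 or 2 here to check latency sensitivity. */

    struct upcall { char data[2048]; };

    int read_upcall(int sock, struct upcall *);     /* 0 on success. */
    void handle_upcall(const struct upcall *);

    static void
    run_one_batch(int sock)
    {
        struct upcall batch[BATCH_SIZE];
        int i, n = 0;

        /* Drain up to BATCH_SIZE pending upcalls in one pass. */
        while (n < BATCH_SIZE && read_upcall(sock, &batch[n]) == 0) {
            n++;
        }

        /* Process whatever arrived; if only one or two entries are filled,
         * the fixed per-batch cost is not amortized over much. */
        for (i = 0; i < n; i++) {
            handle_upcall(&batch[i]);
        }
    }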

It may also be that there is some unnecessary overhead causing extra latency; 
I'll see if I can find anything.

  Jarno