On Thu, Jun 14, 2012 at 1:24 PM, ravi kerur <rke...@gmail.com> wrote:
> On Wed, Jun 13, 2012 at 7:58 PM, Jesse Gross <je...@nicira.com> wrote:
>> On Thu, Jun 14, 2012 at 4:51 AM, ravi kerur <rke...@gmail.com> wrote:
>>> Ok got it. At least I am sensing that OVS will/should be optimized for
>>> both core and edge cases. I have taken care of comments from Pravin
>>> and I think we are waiting for Ben's input on ttl handling?
>>
>> It's not a question of optimization.  If you implement something it
>> needs to be properly designed and work.  The things you are
>> implementing have different use cases so you need to think both about
>> those and the fact that OVS/OpenFlow are designed to enable new uses.
>>  You must come up with solutions that are flexible and extensible, not
>> just ones that address your current use case.
>
> <rk> what are those new use cases? at least I would like to understand that

They're new and unknown, so you can't know.  This is the single most
important point that I've been trying to get across: you need to
design things in a way that is flexible so that when something new
comes up we either have the building blocks there (ideal) or we can
extend what is there.  It's why I don't want to directly implement
what's in OpenFlow, even if that is the goal right now.  I'm certain
that new things will come out and I don't want a dozen different
implementations.

>>> There are additional things that need to be addressed as well
>>>
>>> 1. offload code review, it's currently generic enough and getting near
>>> line rate performance numbers.
>>
>> You can't just take what is done for vlans and copy that.  There is
>> far too much code that you're adding to OVS.  Did you read what I
>> wrote earlier about where to start?:
>
> <rk> How is adding far too much code to OVS related to offload? It is
> handled similarly to what the vlan code does for older kernels; in
> addition it takes care of copying and restoring skb->protocol, since
> skb_gso_segment relies on it, and handles checksum calculation for
> non-gso packets. I don't understand your comments; have you looked
> at the latest patch?

The vlan code that's there is backporting and emulation for various
quirks of vlans on different kernels.  Most of these don't apply to
MPLS because no version of Linux supports MPLS.  You can't start from
the backported version, though; you need to begin with the correct way
to do things, assuming you have the freedom to modify the upstream
kernel, because OVS is upstream now and that's the future.  Once you
have things working there, you can backport to older versions, but if
you do it in the opposite order you just end up with a mess.  Once
again, did you read what I wrote below?  I know that your code isn't
correct just by looking at the diff stat because you didn't modify the
file that I told you is the place to start.

>> Generally speaking the emulation code is handled by skb_gso_segment()
>> in dev.c in the kernel code outside of OVS.  This should mostly work
>> except that it needs to be able to detect that MPLS requires
>> emulation.  This will be the easiest part to get working and is the
>> best place to start.  However, in order for this code to work on any
>> kernel before your changes get integrated (i.e. Linux 3.6 at the
>> earliest) you'll have to emulate it in OVS as well, like we do for
>> vlans in vport-netdev.c.
>>
>>> 2. vlan/mpls tunneling in gre or ip, performance numbers are not good.
>>> However, using latest git code I tried sending vlan tagged packets on
>>> gre tunnel and performance is bad as well. I will have to look into it
>>> further because it's not yet clear to me why performance would be bad.
>>
>> I think if you get point #1 working correctly then tunneling
>> offloading should just work.
>
> <rk> As I mentioned earlier, can you elaborate on your earlier comments?

If you make skb_gso_segment handle MPLS then tunneling will just work
because it already calls that function to do the emulation.

>>
>>> Finally, are performance numbers for OVS published or documented?
>>
>> There would be too much variation between machines to be useful for
>> development.  Just run your test on master and compare.
>
> <rk> that's what I did for vlan + gre; the performance numbers looked
> bad, hence the question

I think you're testing on a 1G link; OVS should be able to handle
everything at 1G without a problem.  Why don't you just post what
you're doing and what you're seeing?
_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev
