On 5/16/16, 1:07 PM, "Ben Pfaff" <b...@ovn.org> wrote:

>Hi Bryan, I think that you understand how QoS works in NVP.  We're
>currently talking about how to implement QoS in OVN.  Can you help us
>understand the issues?
>
>...now back to the conversation already in progress:
>
>On Tue, May 10, 2016 at 05:04:06PM +0530, Babu Shanmugam wrote:
>> On Friday 06 May 2016 10:33 PM, Ben Pfaff wrote:
>> >But I'm still having trouble understanding the whole design here.
>> >Without this patch, OVN applies ingress policing to packets received
>> >from (typically) a VM.  This limits the rate at which the VM can
>> >introduce packets into the network, and thus acts as a direct (if
>> >primitive) way to limit the VM's bandwidth resource usage on the
>> >machine's physical interface.
>> >
>> >With this patch, OVN applies shaping to packets *sent* to (typically) a
>> >VM.  This limits the rate at which the VM can consume packets *from*
>> >the network.  This has no direct effect on the VM's consumption of
>> >bandwidth
>> >resources on the network, because the packets that are discarded have
>> >already consumed RX resources on the machine's physical interface and
>> >there is in fact no direct way to prevent remote machines from sending
>> >packets for the local machine to receive.  It might have an indirect
>> >effect on the VM's bandwidth consumption, since remote senders using
>> >(e.g.) TCP will notice that their packets were dropped and reduce their
>> >sending rate, but it's far less efficient at it than shaping packets
>> >going out to the network.
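
For concreteness, the ingress policing described above boils down to the
ingress_policing_rate and ingress_policing_burst columns of the Interface
table.  A minimal Python sketch, with a made-up interface name and rates
(not OVN's actual code):

    # Rough illustration only: rate-limit packets received from a VM by
    # policing its VIF.  "tap0" and the numbers are invented examples.
    import subprocess

    def police_vif_ingress(vif_iface, rate_kbps, burst_kb):
        subprocess.run(
            ["ovs-vsctl", "set", "interface", vif_iface,
             "ingress_policing_rate=%d" % rate_kbps,
             "ingress_policing_burst=%d" % burst_kb],
            check=True)

    police_vif_ingress("tap0", rate_kbps=10000, burst_kb=1000)
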
>> >
>> >The design I expected to see in OVN, eventually, was this:
>> >
>> >         - Each VM/VIF gets assigned a queue.  Packets received from
>> >           the VM are tagged with the queue using an OpenFlow
>> >           "set_queue" action (dunno if we have this as an OVN
>> >           logical action yet but it's easily added).
>> >
>> >         - OVN programs the machine's physical interface with a
>> >           linux-htb or linux-hfsc qdisc that grants some min-rate
>> >           to each queue.
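
As a rough sketch of how that design maps onto plain OVS commands (not
OVN code; the interface names, queue numbers, and rates are invented, and
a real implementation would update an existing QoS record incrementally
rather than recreating it):

    # Hypothetical sketch: one linux-htb queue per VIF on the physical
    # interface, plus a set_queue flow tagging that VIF's traffic.
    import subprocess

    def setup_htb_queues(phys_iface, min_rates):
        # min_rates maps queue id -> guaranteed min-rate in bit/s.
        cmd = ["ovs-vsctl", "--", "set", "port", phys_iface, "qos=@qos",
               "--", "--id=@qos", "create", "qos", "type=linux-htb"]
        cmd += ["queues:%d=@q%d" % (qid, qid) for qid in min_rates]
        for qid, rate in min_rates.items():
            cmd += ["--", "--id=@q%d" % qid, "create", "queue",
                    "other-config:min-rate=%d" % rate]
        subprocess.run(cmd, check=True)

    def tag_vif_with_queue(br, vif_ofport, queue_id):
        # The queue id set here survives bridge hops and tunnel
        # encapsulation, so the qdisc on the egress interface can use it.
        subprocess.run(
            ["ovs-ofctl", "add-flow", br,
             "in_port=%d,actions=set_queue:%d,normal"
             % (vif_ofport, queue_id)],
            check=True)

    # Example: two VIFs, each guaranteed ~100 Mbit/s on eth0.
    setup_htb_queues("eth0", {10: 100000000, 11: 100000000})
    tag_vif_with_queue("br-int", vif_ofport=5, queue_id=10)
    tag_vif_with_queue("br-int", vif_ofport=6, queue_id=11)
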
>>
>> From what I understand, to set up egress shaping for a VIF interface:
>> - We need a physical interface attached to br-int.
>
>It doesn't have to be attached to br-int, because queuing information is
>preserved over a hop from bridge to bridge and through encapsulation in
>tunnels, but OVN would have to configure queues on the interface
>regardless of what bridge it was in.
>
>> - The QoS and Queue tables have to be set up for the port entry that
>> corresponds to the physical interface.
>> - Packets received from the VIF are put in these queues using set_queue.
>> Is my understanding correct?
>
>Yes, I believe so.
>
>> Is there any way that HTB/HFSC queues can work without a physical
>> interface attached to br-int?  If not, are we going to mandate in some
>> way that a physical interface has to be attached to br-int?
>
>I don't think it's desirable or necessary to attach the physical
>interface to br-int, only to ensure that the queues are configured on
>it.
>
>Bryan, what does NVP do?  In particular, does it configure queues on a
>physical interface without caring what bridge it is attached to?

Hi folks,

Apologies for the delay; I had to page a lot of this back in, as I
haven't hacked on QoS in NVP in quite a while.

I would agree that forcing the physical interface to be on the
integration bridge purely for the purposes of QoS sounds undesirable and
unnecessary.  NVP just cares that there is a bridge with the same name as
the one specified by the tunnel port's tunnel_egress_iface value, and
that said bridge contains what we consider to be a PIF (which is where we
put the physical QoS objects).
  
Here's what we do on the NVP side, at a high level, for a simple
non-extender, no-NIC-bonding, overlay setup:

1. User creates a logical queue (with settings for min/max rate, dscp,
qos_marking, etc.).
2. User attaches said logical queue to a logical switch port (a VM on an
HV somewhere).
3. We then determine the physical tunnel port through which the logical
port's VIF traffic will egress the HV, e.g. "stt1234".
4. From said tunnel port, we pull "tunnel_egress_iface" from the status
column, which names the bridge the traffic will actually egress through,
e.g. "eth0".
5. We then look at all physical ports on the bridge with that name and
use some heuristics to determine which port(s) is/are an actual PIF,
e.g. "eth0".
6. Now we know the PIF the traffic will utilize, so we put a physical
queue collection on it.
7. Next we create a physical queue and put it in the collection, with
user-defined settings and a designated number X (computed via separate
logic).
8. Lastly, on the flow for the logical port's ingress we add an action,
set_queue:X.

The collection might have other queues in it for the other VIFs on the
chassis, and they might all use eth0, but each will get its own queue
(unless queue sharing is involved, which is a side discussion).
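
To make steps 3-6 a bit more concrete, here is a heavily simplified
Python sketch of the lookup from tunnel port to PIF (not NVP's actual
code; the PIF heuristic in particular is a placeholder).  Creating the
queue collection, the queue, and the set_queue:X flow on that PIF then
looks much like the earlier sketch:

    # Simplified illustration of steps 3-6: tunnel port -> egress bridge
    # name -> PIF that should carry the physical queue collection.
    import subprocess

    def vsctl(*args):
        return subprocess.run(["ovs-vsctl"] + list(args), check=True,
                              capture_output=True, text=True).stdout.strip()

    def egress_iface_for_tunnel(tunnel_port):
        # Step 4: status:tunnel_egress_iface of the tunnel port, e.g. "eth0".
        return vsctl("get", "interface", tunnel_port,
                     "status:tunnel_egress_iface").strip('"')

    def find_pif(bridge):
        # Step 5, placeholder heuristic: prefer a port named like the
        # bridge itself; the real heuristics are more involved.
        ports = vsctl("list-ports", bridge).splitlines()
        return bridge if bridge in ports else ports[0]

    pif = find_pif(egress_iface_for_tunnel("stt1234"))  # e.g. "eth0"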

Does that help?  Obviously this leaves out a mountain of intricate
details, but these are pretty much the main steps taken to go from
user-space configuration down to creating things on the HV.  Let me know
if you have any questions about this (or any other aspects of QoS in NVP
such as DSCP/marking, etc.) - I'm happy to help!

.:bryan

_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev
