On Wed, Apr 27, 2016 at 08:26:46PM +0530, Babu Shanmugam wrote:
> On Tuesday 26 April 2016 08:43 PM, Ben Pfaff wrote:
> >On Mon, Apr 25, 2016 at 04:41:40PM +0530, Babu Shanmugam wrote:
> >>On Friday 22 April 2016 10:51 PM, Ben Pfaff wrote:
> >>>On Fri, Apr 22, 2016 at 12:44:12PM +0530, bscha...@redhat.com wrote:
> >>>>From: Babu Shanmugam <bscha...@redhat.com>
> >>>>
> >>>>Following are done through this series
> >>>>1. Changed the old approach of policing the packets. It is now shaped
> >>>>    with queues. Changed the Logical_Port options for NB db
> >>>>2. Support of DSCP marking through options field in Logical_Port table
> >>>>
> >>>>Babu Shanmugam (2):
> >>>>   ovn: Replace the QOS policing parameters with the usage of QOS table
> >>>>   ovn: QOS DSCP markings for ports
> >>>Have you tested this?  There are at least two aspects that seem relevant
> >>>to testing.  First, propagating queuing through tunnels is somewhat
> >>>indirect and one needs to make sure that the QoS configuration actually
> >>>makes it to the physical device.  Second, HTB has a reputation for poor
> >>>quality for links above about 1 Gbps, which isn't very fast
> >>>anymore--that's why we also support HFSC.
> >>Ben, I have not tested these aspects.  The reason I used HTB is
> >>mainly that it supports a burst setting.  From the
> >>ovs-vswitchd.conf.db man page, HFSC does not seem to have an option
> >>for a burst setting.
> >>I could not understand how "propagating queuing through tunnels is
> >>somewhat indirect".  I can test it if you can give some more
> >>information on the problem.
> >Usually for shaping it only makes sense to configure it on the physical
> >NIC network device.  Does your series do that?  If you haven't tested
> >it, it's hard for me to imagine it working.
> >
> >Why is burst setting valuable?
> I tried testing the HTB rate parameters on a veth device.  It seems to
> work.  Thanks to iperf, we can see that it controls the traffic
> egressing the veth device.  Without the HTB queue, the traffic occupies
> the full bandwidth.

OK.

> I faced some problems while testing.  No matter what max-rate I set in
> the Queue table, I get a kernel warning "HTB: quantum of class 1FFFE is
> big. Consider r2q change".  I see that OVS uses r2q = 10.  I tried rates
> from as high as 20000 down to as low as 600, and it still gives me that
> message.  Because of this, the kernel always assigns a fixed quantum of
> 200000.  I am still debugging this.

Possibly, lib/netdev-linux.c should automatically choose an appropriate
r2q value.  Currently it always uses 10.

http://luxik.cdi.cz/~devik/qos/htb/manual/userg.htm has a little
information about r2q.
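For context, HTB computes each class's quantum as its rate in bytes per
second divided by r2q, and the kernel warns when the result falls
outside roughly 1000 to 200000 bytes.  A minimal sketch of how an
appropriate r2q could be derived from the largest configured rate (the
function name and the exact bounds are my own; the limits come from the
kernel's sch_htb.c):

```python
def choose_r2q(max_rate_bps, min_quantum=1000, max_quantum=200000):
    """Pick an r2q so that quantum = rate_bytes // r2q stays within
    the kernel's preferred [min_quantum, max_quantum] byte range."""
    rate_bytes = max_rate_bps // 8               # HTB works in bytes/sec
    # Smallest r2q that keeps the quantum at or below max_quantum.
    r2q = max(1, -(-rate_bytes // max_quantum))  # ceiling division
    # If even that pushes the quantum below min_quantum, relax it.
    if rate_bytes // r2q < min_quantum:
        r2q = max(1, rate_bytes // min_quantum)
    return r2q

# Example: a 10 Gbps class would need r2q = 6250 rather than the
# OVS default of 10, whose quantum of 125,000,000 bytes triggers
# the "quantum of class ... is big" warning.
print(choose_r2q(10_000_000_000))   # -> 6250
```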

> Openstack QOS policies demands burst setting as part of bandwidth rules.
> But, I am not sure how to get the burst working in HFSC.

OK.

http://unix.stackexchange.com/questions/96494/about-hfsc-parameters has
some information related to burst for hfsc.
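Roughly speaking, HFSC approximates a burst not with a token bucket but
with a two-piece service curve: an initial slope m1 for duration d, then
a steady-state slope m2.  Something like the following tc commands
(device name and rates purely illustrative) would let a class send at 20
Mbit/s for the first 10 ms of a backlog period before settling at 10
Mbit/s:

```shell
# Illustrative only: HFSC "burst" via a two-piece service curve.
tc qdisc add dev eth0 root handle 1: hfsc default 1
tc class add dev eth0 parent 1: classid 1:1 hfsc \
    sc m1 20mbit d 10ms m2 10mbit
```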

But I'm still having trouble understanding the whole design here.
Without this patch, OVN applies ingress policing to packets received
from (typically) a VM.  This limits the rate at which the VM can
introduce packets into the network, and thus acts as a direct (if
primitive) way to limit the VM's bandwidth resource usage on the
machine's physical interface.
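Concretely, the pre-patch behavior corresponds to OVS's per-interface
ingress policing columns, e.g. (interface name illustrative; rate is in
kbps, burst in kb):

```shell
# Rate-limit what the VM can inject into the network: police packets
# received from its VIF at 10 Mbit/s with a 1000 kb burst.
ovs-vsctl set interface tap0 ingress_policing_rate=10000
ovs-vsctl set interface tap0 ingress_policing_burst=1000
```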

With this patch, OVN applies shaping to packets *sent* to (typically) a
VM.  This limits the rate at which the VM can consume packets *from* the
network.  This has no direct effect on the VM's consumption of bandwidth
resources on the network, because the packets that are discarded have
already consumed RX resources on the machine's physical interface and
there is in fact no direct way to prevent remote machines from sending
packets for the local machine to receive.  It might have an indirect
effect on the VM's bandwidth consumption, since remote senders using
(e.g.) TCP will notice that their packets were dropped and reduce their
sending rate, but it's far less efficient at it than shaping packets
going out to the network.

The design I expected to see in OVN, eventually, was this:

        - Each VM/VIF gets assigned a queue.  Packets received from the
          VM are tagged with the queue using an OpenFlow "set_queue"
          action (dunno if we have this as an OVN logical action yet but
          it's easily added).

        - OVN programs the machine's physical interface with a linux-htb
          or linux-hfsc qdisc that grants some min-rate to each queue.
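As a rough sketch of what that design would translate to on a
hypervisor (bridge name, OpenFlow port number, queue ID, and rates are
all illustrative; "set_queue" is an existing OpenFlow action):

```shell
# Give the physical interface an HTB qdisc with one queue per VIF,
# each guaranteed a minimum rate (values illustrative).
ovs-vsctl set port eth0 qos=@qos -- \
  --id=@qos create qos type=linux-htb \
      other-config:max-rate=1000000000 queues:1=@q1 -- \
  --id=@q1 create queue other-config:min-rate=100000000

# Tag packets received from the VM's VIF (OpenFlow port 5 here) so
# they are shaped by queue 1 on their way out.
ovs-ofctl add-flow br-int "in_port=5 actions=set_queue:1,normal"
```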
_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev
