On Wed, May 25, 2016 at 03:35:44PM +0530, bscha...@redhat.com wrote:
> ovn-northd processes the list of Port_Bindings and hashes the list of
> queues per chassis. When it finds a port with qos_parameters and without
> a queue_id, it allocates a free queue for the chassis that this port belongs.
> The queue_id information is stored in the options field of Port_binding table.
> Adds an action set_queue to the ingress table 0 of the logical flows
> which will be translated to openflow set_queue by ovn-controller
>
> ovn-controller opens the netdev corresponding to the tunnel interface's
> status:tunnel_egress_iface value and configures a HTB qdisc on it. Then for
> each SB port_binding that has queue_id set, it allocates a queue with the
> qos_parameters of that port. It also frees up unused queues.
>
> This patch replaces the older approach of policing
>
> Signed-off-by: Babu Shanmugam <bscha...@redhat.com>
This is starting to look really good!  I'm becoming enthusiastic about it
now.  Nevertheless, I have some comments.

Please write comments like /* This is a comment. */ to match the existing
OVS style.

In parse_set_queue_action(), please make the error message more helpful by
adding details, maybe something like "Queue ID %d for set_queue is not in
valid range %d to %d."

QDISC_MIN_QUEUE_ID and QDISC_MAX_QUEUE_ID are declared in two different
places.  I'd suggest putting them into ovn/lib/actions.h.

In join_logical_ports(), the cast to uint32_t doesn't seem necessary.

In setup_qos(), this || should probably be &&:
    if (sb_info->max_rate == smap_get_int(&queue_details, "max-rate", 0) ||
        sb_info->burst == smap_get_int(&queue_details, "burst", 0)) {

setup_qos() could use NETDEV_QUEUE_FOR_EACH for a small amount of
convenience.

I'm a little worried about the way that this will re-set all of the queues
on every iteration through the ovn-controller main loop.  That could have
some performance drawbacks.  However, maybe it's not worth worrying about
it until it's a real problem in practice.

This patch makes ovn-controller configure queues directly on the egress
interface.  That is completely fine.  However, it means that if the egress
interface is on an OVS bridge (which is often reasonable, especially if
bonding is involved), then ovs-vswitchd will fight with ovn-controller for
control of the queues.  This is a long-time issue in ovs-vswitchd, and
there has been previous discussion:
http://openvswitch.org/pipermail/discuss/2015-May/017687.html
(Currently, I favor the solution proposed there of adding a new QoS type.)

I don't think it's a good assumption that all the tunnels share the same
egress interface.

Thanks,

Ben.

_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev