Justin Pettit wrote:
On Sep 14, 2015, at 12:57 AM, Miguel Angel Ajo <mangel...@redhat.com> wrote:

Egress from the VM/port point of view, or from the switch point of view?

I work on the switch, so it's always from its point of view.  :-)

I'm more used to the other point of view :), which causes me a lot of confusion when I talk to bridge guys ;D

On the original question, there's a big entry about configuring QoS in the FAQ:

        https://github.com/openvswitch/ovs/blob/master/FAQ.md
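
Roughly, the egress-shaping example there boils down to the following (the port
name, bridge name, queue number, and rates below are only placeholders):

        # Shape traffic that the switch transmits on eth1 (switch egress) to ~10 Mbps.
        ovs-vsctl -- set port eth1 qos=@newqos -- \
            --id=@newqos create qos type=linux-htb other-config:max-rate=10000000 queues=0=@q0 -- \
            --id=@q0 create queue other-config:max-rate=10000000

        # Optionally steer specific flows into that queue with OpenFlow.
        ovs-ofctl add-flow br0 "in_port=2,actions=set_queue:0,normal"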

Btw, Justin, to follow up on the question :) is it possible to use the QoS table to do
"egress" from the VM/port point of view? I only managed to do that with the
ingress policing [1]. Is there any way to use queues for switch ingress (port egress),
maybe by combining OpenFlow + the enqueue action?
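
What I managed to get working was roughly this (the interface name and numbers
are just the ones I was playing with):

        # Police traffic the switch receives from the VM (VM egress / switch ingress).
        # rate is in kbps and burst in kb, so this is ~10 Mbps with a 1 Mb burst.
        ovs-vsctl set interface tap0 ingress_policing_rate=10000
        ovs-vsctl set interface tap0 ingress_policing_burst=1000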

Unless it's changed recently, there's not really a good way to do ingress 
shaping in Linux.  You can use the IFB device to shape ingress traffic, but I 
think it has negative performance implications and some other limitations.  
I'm not aware of anyone using IFB with OVS, though.
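
For completeness, the usual Linux pattern is roughly the following (device names
and rates are placeholders, and I haven't tried this with OVS):

        # Redirect eth0's ingress traffic to an IFB device, then shape it there
        # with a normal egress qdisc.
        modprobe ifb numifbs=1
        ip link set dev ifb0 up
        tc qdisc add dev eth0 handle ffff: ingress
        tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 \
            action mirred egress redirect dev ifb0
        tc qdisc add dev ifb0 root tbf rate 10mbit burst 32kbit latency 400ms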

I'm just remembering a conversation I had with Thomas a few months ago, and then it made sense, but my inverted point of view made me think something had changed, because I was able to do VM-ingress shaping. Of course, what I was actually doing was bridge egress, and that's why it worked.

I also remember a possible workaround: plugging a veth into OVS, putting the other end in a namespace, and using routing to reflect any incoming packet, so you gain a point of control over the egress path of the packets.
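
Very roughly, the plumbing would be something like this (completely untested, the
names are placeholders, and the routing inside the namespace is hand-waved):

        # One veth end on the OVS bridge, the other inside a namespace that routes
        # packets back out, giving a point where normal egress shaping can be applied.
        ip netns add qosns
        ip link add veth-ovs type veth peer name veth-ns
        ovs-vsctl add-port br-int veth-ovs
        ip link set veth-ovs up
        ip link set veth-ns netns qosns
        ip netns exec qosns ip link set veth-ns up
        ip netns exec qosns sysctl -w net.ipv4.ip_forward=1
        # ...plus routes inside the namespace so traffic is reflected back out of
        # veth-ns, where an htb/tbf qdisc can shape it.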

I noticed that the ingress_policing_rate / _burst parameters on VM tap ports didn't drop any packets, contrary to what the documentation says. I may need to re-check that, but I didn't see any packet drop counters rising. Could it be that, since the sender is on the same host, it gets blocked? Or is that not possible and I just didn't look at the right counters? I understand that, from a remote host, once a packet has been received you either drop it, queue it, or use it, but locally, if the Linux stack is smart enough, it could block the in-host sender.
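
The counters I was looking at were roughly these (the interface name is a
placeholder; I may simply have checked the wrong ones):

        # Per-interface statistics as seen by OVS (rx/tx errors and drops).
        ovs-vsctl list interface tap0 | grep statistics

        # On the Linux datapath, ingress policing is (as far as I understand)
        # implemented as a tc police filter on the ingress qdisc, so its drops
        # should also show up here.
        tc -s filter show dev tap0 parent ffff: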

I'm looking at all this because we're interested in adding support for traffic classification in Neutron QoS, so we can apply ingress/egress shaping (or DSCP, etc. marking) based on packet matching.
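
For the DSCP part, the kind of thing I have in mind is a flow along these lines
(the bridge, match, and DSCP value are only an example):

        # Mark traffic to 10.0.0.0/24 with DSCP AF31 (ToS byte 104) and forward normally.
        ovs-ofctl add-flow br-int "priority=10,ip,nw_dst=10.0.0.0/24,actions=mod_nw_tos:104,normal"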




--Justin


