Generally, the IFB netdev is used to enforce QoS on egress for traffic that is
actually ingress: one can redirect ingressing traffic (or even egress, for that
matter) to an IFB netdev, where it hits the IFB's egress qdisc, on which the
egress QoS can be configured. Another use case is to consolidate traffic on an
IFB for QoS, i.e. ingress traffic from several different netdevs redirected to
one IFB netdev for combined QoS. My use case is similar: I want traffic
arriving on several ports, including an OVS-enslaved port, to be redirected to
an IFB for QoS. In that case, I believe, enslaving the IFB into OVS will not
work for me. If OVS supports some sort of ingress redirection, that might help!
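For reference, the consolidated-IFB setup described above is usually built with
tc's mirred action. A minimal sketch, assuming hypothetical port names eth0,
eth1, and an OVS-enslaved veth1 (adjust to your setup, and note this needs root
and the ifb module):

```shell
# Create and bring up the IFB device.
ip link add ifb0 type ifb
ip link set ifb0 up

# Redirect all ingress traffic from each source port to ifb0.
for dev in eth0 eth1 veth1; do
    tc qdisc add dev "$dev" handle ffff: ingress
    tc filter add dev "$dev" parent ffff: protocol all matchall \
        action mirred egress redirect dev ifb0
done

# Combined QoS on ifb0's egress qdisc, e.g. an HTB rate limit.
tc qdisc add dev ifb0 root handle 1: htb default 10
tc class add dev ifb0 parent 1: classid 1:10 htb rate 100mbit
```

The redirected traffic then traverses ifb0's egress qdisc before continuing on
its original path, so one shaping hierarchy covers all three ports.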

Thanks & Regards,
RS

On Mon, Jan 15, 2024 at 10:23 PM Ilya Maximets <i.maxim...@ovn.org> wrote:

> On 1/15/24 21:05, Reshma Sreekumar via discuss wrote:
> > I see, thanks for the explanation. My use case is to redirect traffic
> > from an OVS-enslaved port to an IFB netdevice via the ingress qdisc.
> > Given this mode of operation, maybe it's better if I opt for some other
> > method of redirection to the IFB netdevice?
>
> I never worked with IFB interfaces, so I'm not 100% sure this will work,
> but you could remove your port from OVS and add your IFB interface instead.
> The egress qdisc on the IFB will then be in use; you can protect it from
> being removed by using the linux-noop QoS type.
>
> I suppose the traffic will enter OVS after the IFB.  So, you'll have
> veth1 (ingress qdisc)  -->  ifb (egress qdisc)  -->  OVS.  That should be
> fine if you're going to redirect all the traffic to the ifb.
>
> Let us know if that works.
>
> Best regards, Ilya Maximets.
>
_______________________________________________
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss