On 10/29/22 18:42, Alex Weasel via discuss wrote:
> Additionally, I am seeing messages like these in the OVS log when a new port is attached:
> 2022-10-29T16:39:41.844Z|20936|netdev_linux|DBG|unknown qdisc "noqueue"
> 2022-10-29T16:39:41.990Z|20988|jsonrpc|DBG|unix:/run/openvswitch/db.sock:
> received notification, method="update3",
> params=[["monid","Open_vSwitch"],"00000000-0000-0000-0000-000000000000",{"QoS":{"f5c98735-3fe9-433a-ae1e-2dbf7348a5eb":{"insert":{"queues":["map",[[0,["uuid","8f06a0de-63ea-4a19-8b96-9d6ea8bb4a57"]]]],"other_config":["map",[["max-rate","34359738367"]]],"type":"linux-htb"}}},"Open_vSwitch":{"71530140-ff7c-4793-8476-d1ca36517c85":{"modify":{"next_cfg":182}}}}]
> 

Hi.  Thanks for the report!
Though all the messages above are just debug messages and do not by
themselves indicate an actual problem, I think you're facing the same
issue as described here:
  https://bugzilla.redhat.com/2138339

The workaround would be to manually remove the qdisc from the
interface before setting up the QoS in OVS.  But since it is
created by libvirt, it might be hard to do.  If it is possible to
tell libvirt not to set a custom noqueue qdisc on the interface,
that will also work.
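For reference, a minimal sketch of that workaround (the interface name
is taken from your report; the max-rate values are illustrative
assumptions, not your actual configuration):

```shell
# Drop the root qdisc that libvirt installed on the tap interface,
# so that OVS can install its own linux-htb qdisc afterwards.
tc qdisc del dev tapf3690fb2-dd root

# Then (re)apply the QoS configuration in OVS, e.g.:
ovs-vsctl set port tapf3690fb2-dd qos=@qos -- \
  --id=@qos create qos type=linux-htb \
      other-config:max-rate=102400000 queues:0=@q0 -- \
  --id=@q0 create queue other-config:max-rate=102400000
```

Note that libvirt may re-install its qdisc when the domain is
restarted, so this has to be repeated per interface.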

I'll work on a patch to fix the problem from the OVS side.
The change will be similar to what was proposed in the bugzilla,
with some adjustments so that deletion does not fail on ENOENT,
otherwise OVS would fail to set up QoS on ports that do not have
any custom qdisc.
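As a rough illustration of that adjustment (this is a sketch, not the
actual OVS code; the helper name is made up), the idea is to treat a
missing qdisc as a successful deletion:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Illustrative helper: when removing the existing root qdisc before
 * installing a new one, a missing qdisc (ENOENT) must not be treated
 * as a failure, since ports without any custom qdisc simply have
 * nothing to delete. */
static bool
qdisc_delete_succeeded(int error)
{
    return error == 0 || error == ENOENT;
}
```

Any other error (e.g. EPERM) would still be reported, so real failures
are not silently ignored.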

Best regards, Ilya Maximets.

> 
> Sat, 29 Oct 2022 at 19:21, Alex Weasel <weaselcloud.a...@gmail.com>:
>>
>> Hello mates,
>> This is another day I'm spending trying to resolve an issue related to ingress QoS.
>>
>> What you did that made the problem appear.
>> We use OpenStack Neutron + OVS to provide connectivity for our
>> instances, and we have to add QoS, which is handled by OVS. We create
>> a QoS policy with ingress/egress rules of type bandwidth-limit, but
>> the ingress QoS isn't working at all. Comparing with our normally
>> working cluster, I noticed a difference in the Linux interface
>> options. In the working environment, all of our instances' interfaces
>> have the following options:
>> tap15614faf-92: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb
>> master ovs-system
>> # tc qdisc show dev tap15614faf-92
>> qdisc htb 1: root refcnt 2 r2q 10 default 0x1 direct_packets_stat 0
>> direct_qlen 1000
>> qdisc ingress ffff: parent ffff:fff1 ----------------
>>
>> # tc class show dev tap15614faf-92
>> class htb 1:1 parent 1:fffe prio 0 rate 12Kbit ceil 102400Kbit burst
>> 1563b cburst 1561b
>> class htb 1:fffe root rate 102400Kbit ceil 102400Kbit burst 1497b cburst 
>> 1497b
>>
>> But the other cluster has the noqueue qdisc assigned to the
>> instance interfaces:
>> tapf3690fb2-dd: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> noqueue master ovs-system
>> # tc qdisc show dev tapf3690fb2-dd
>> qdisc noqueue 8004: root refcnt 2
>> qdisc ingress ffff: parent ffff:fff1 ----------------
>>
>> # tc class show dev tapf3690fb2-dd (output is empty)
>> Digging deeper, we started looking through the logs for errors and
>> found some. Whenever we try to edit or create a QoS policy, or
>> interfaces with that QoS policy, we see these odd log messages:
>> 2022-10-27T13:09:22.273Z|00077|netdev_linux|DBG|unknown qdisc "noqueue"
>> 2022-10-27T13:17:11.626Z|00078|netdev_linux|DBG|unknown qdisc "noqueue"
>> 2022-10-27T13:20:07.190Z|00080|netdev_linux|DBG|unknown qdisc "noqueue"
>> I suspect that this is somehow the root of the problem of why the
>> ingress rules aren't working.
>>
>>
>> What you expected to happen.
>> Expected behavior is that the ingress rules are actually created without errors.
>>
>> What actually happened.
>> Ingress QoS rules aren't working and aren't even added as tc qdiscs/classes.
>>
>> Some handy details about the environment:
>> # ovs-vswitchd --version
>> ovs-vswitchd (Open vSwitch) 2.17.2
>> # cat /proc/version
>> Linux version 5.4.0-131-generic (also tried 5.4.0-120-generic as on
>> working environment) (buildd@lcy02-amd64-108) (gcc version 9.4.0
>> (Ubuntu 9.4.0-1ubuntu1~20.04.1)) #147-Ubuntu SMP Fri Oct 14 17:07:22
>> UTC 2022
>> The ovsdb content is available in a gist:
>> https://gist.github.com/frct1/4b26949b97fc88d86471f05b51b68a97
>> # ovs-dpctl show
>> system@ovs-system:
>>   lookups: hit:6731384 missed:162168 lost:16
>>   flows: 233
>>   masks: hit:50191740 total:18 hit/pkt:7.28
>>   port 0: ovs-system (internal)
>>   port 1: br-ex (internal)
>>   port 2: eno49
>>   port 3: br-int (internal)
>>   port 4: br-tun (internal)
>>   port 5: tapd405d07c-e3 (internal)
>>   port 6: tapf3690fb2-dd
>> # ovs-ofctl show br-int
>> OFPT_FEATURES_REPLY (xid=0x2): dpid:00001e808363c444
>> n_tables:254, n_buffers:0
>> capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
>> actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan
>> mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src
>> mod_tp_dst
>>  1(int-br-ex): addr:d6:66:17:71:72:ae
>>      config:     0
>>      state:      0
>>      speed: 0 Mbps now, 0 Mbps max
>>  2(patch-tun): addr:96:ed:2b:0e:1a:9f
>>      config:     0
>>      state:      0
>>      speed: 0 Mbps now, 0 Mbps max
>>  3(tapd405d07c-e3): addr:00:00:00:00:00:00
>>      config:     PORT_DOWN
>>      state:      LINK_DOWN
>>      speed: 0 Mbps now, 0 Mbps max
>>  7(tapf3690fb2-dd): addr:fe:16:3e:7d:e8:bc
>>      config:     0
>>      state:      0
>>      current:    10MB-FD COPPER
>>      speed: 10 Mbps now, 0 Mbps max
>>  LOCAL(br-int): addr:1e:80:83:63:c4:44
>>      config:     PORT_DOWN
>>      state:      LINK_DOWN
>>      speed: 0 Mbps now, 0 Mbps max
>> OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
>>
>> # ovs-ofctl show br-ex
>> OFPT_FEATURES_REPLY (xid=0x2): dpid:00003ca82a2450b8
>> n_tables:254, n_buffers:0
>> capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
>> actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan
>> mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src
>> mod_tp_dst
>>  1(eno49): addr:3c:a8:2a:24:50:b8
>>      config:     0
>>      state:      0
>>      current:    10GB-FD
>>      advertised: 10GB-FD FIBER
>>      supported:  1GB-FD 10GB-FD FIBER AUTO_PAUSE AUTO_PAUSE_ASYM
>>      speed: 10000 Mbps now, 10000 Mbps max
>>  2(phy-br-ex): addr:2e:b2:b5:ec:b7:95
>>      config:     0
>>      state:      0
>>      speed: 0 Mbps now, 0 Mbps max
>>  LOCAL(br-ex): addr:3c:a8:2a:24:50:b8
>>      config:     0
>>      state:      0
>>      speed: 0 Mbps now, 0 Mbps max
>> OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
>>
>> Thank you very much for any possible suggestions or workarounds!
> _______________________________________________
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
