Public bug reported:

Hello.

I am using OVN in my deployment. I get "ovs_rcu(urcu11)|WARN|blocked xxxx
ms waiting for handler1 to quiesce" warnings when applying a QoS policy to a network.

How to reproduce this bug:

Create two instances: one using a VLAN IP and the other using Geneve (with
a floating IP), both in the same subnet.

Run iperf -s on the first instance and run iperf -c <first instance IP>
-t 360 on the second to connect to it.
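
For example (10.0.0.11 below is only a placeholder for the first instance's IP):

iperf -s
iperf -c 10.0.0.11 -t 360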

Create a network QoS policy:
openstack network qos policy create --project 5d895a6d62d241e495f9e29c3b04644b bw-limiter
openstack network qos rule create --type bandwidth-limit --max-kbps 3000 --max-burst-kbits 2400 --egress bw-limiter
openstack network qos rule create --type bandwidth-limit --max-kbps 3000 --max-burst-kbits 2400 --ingress bw-limiter
openstack network set --qos-policy bw-limiter mynetwork
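
The policy and its rules can be verified before continuing, for example:

openstack network qos policy show bw-limiter
openstack network qos rule list bw-limiter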

Perform a hard reboot of the second instance, then check the logs on the
compute node hosting it:

tail -f /var/log/kolla/openvswitch/ovs-vswitchd.log
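
For reference, the hard reboot above can be issued from the CLI (the instance name is a placeholder):

openstack server reboot --hard <second-instance>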

You will see logs like:

2025-04-30T07:53:42.726Z|00002|ovs_rcu(urcu11)|WARN|blocked 2000 ms waiting for handler1 to quiesce
2025-04-30T07:53:43.037Z|00132|ovs_rcu|WARN|blocked 2000 ms waiting for handler1 to quiesce
2025-04-30T07:53:44.726Z|00003|ovs_rcu(urcu11)|WARN|blocked 4000 ms waiting for handler1 to quiesce
...
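
While the warnings are being printed, it may also help to capture what QoS configuration has actually been programmed into OVS on that compute node. A possible check, assuming br-int is the integration bridge (run inside the OVS container on a Kolla deployment):

ovs-vsctl list qos
ovs-vsctl list queue
ovs-ofctl -O OpenFlow13 dump-meters br-int
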
Environment:

Ubuntu 24.04

OpenStack 2025.1

Deployment tool: Kolla-Ansible (2025 master branch)

** Affects: neutron
     Importance: Undecided
         Status: New
