[Yahoo-eng-team] [Bug 2084538] [NEW] FIP port has an IPv6 fixed_ip

2024-10-15 Thread Liu Xie
Public bug reported:

A port whose device_owner is network:floatingip gets an IPv6 fixed_ip.
This is an old issue, but it had not drawn attention until now.

Steps to reproduce:
1. Create an external network:
openstack network create --provider-network-type vlan --provider-physical-network physnet1 --external pu

2. Create an IPv4 subnet:
openstack subnet create --network pu --subnet-range 172.133.10.0/24 pu

3. Create an IPv6 subnet:
openstack subnet create --network pu --ip-version 6 --subnet-range 2011::/64 --dhcp --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful pu

4. Create a floating IP on the external network:
neutron floatingip-create pu

Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2024-10-15T03:05:20Z                 |
| description         |                                      |
| dns_domain          |                                      |
| dns_name            |                                      |
| extra_fields        | {}                                   |
| fixed_ip_address    |                                      |
| floating_ip_address | 172.133.10.43                        |
| floating_network_id | 63b02c2d-007b-4096-8923-f63a8c993b7b |
| id                  | 9fb736b4-a0c3-429c-a59b-f82450d496e4 |
| port_details        |                                      |
| port_id             |                                      |
| project_id          | d2ee4ab6e0344563812cabb655aeb9c3     |
| qos_policy_id       |                                      |
| revision_number     | 0                                    |
| router_id           |                                      |
| status              | DOWN                                 |
| tags                |                                      |
| tenant_id           | d2ee4ab6e0344563812cabb655aeb9c3     |
| updated_at          | 2024-10-15T03:05:20Z                 |
+---------------------+--------------------------------------+

5. We can see that the floating IP port has an IPv6 address:

neutron port-show 77e4caa2-2953-48d2-bbfe-9d658de42a35
+-----------------------+--------------------------------------------------------------------------------------+
| Field                 | Value                                                                                |
+-----------------------+--------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                 |
| allowed_address_pairs |                                                                                      |
| binding:host_id       |                                                                                      |
| binding:profile       | {}                                                                                   |
| binding:vif_details   | {}                                                                                   |
| binding:vif_type      | unbound                                                                              |
| binding:vnic_type     | normal                                                                               |
| created_at            | 2024-10-15T03:05:20Z                                                                 |
| description           |                                                                                      |
| device_id             | 9fb736b4-a0c3-429c-a59b-f82450d496e4                                                 |
| device_owner          | network:floatingip                                                                   |
| extra_dhcp_opts       |                                                                                      |
| fixed_ips             | {"subnet_id": "5a70bc02-7017-4aa6-a879-4cb4a9b19ccf", "ip_address": "172.133.10.43"} |
|                       | {"subnet_id": "6853c78f-8083-40c5-ac8a-0a76a9a1261d", "ip_address": "2011::a9"}      |
| id                    | 77e4caa2-2953-48d2-bbfe-9d658de42a35                                                 |
| mac_address           | fa:16:3e:5f:9a:48                                                                    |
| name                  |                                                                                      |
| network_id            | 63b02c2d-007b-4096-8923-f63a8c993b7b                                                 |
| port_security_enabled | False                                                                                |
| project_id            | d2ee4ab6e0344563812cabb655aeb9c3                                                     |
| qos_network_policy_id |                                                                                      |
+-----------------------+--------------------------------------------------------------------------------------+
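A sketch of the expected behaviour (a hypothetical helper, not Neutron's actual IPAM code): when building fixed_ips for a port whose device_owner is network:floatingip, only the network's IPv4 subnets should be candidates, since a floating IP is an IPv4 construct.

```python
def ipv4_subnets_for_fip(subnets):
    """Return only IPv4 subnets as fixed-IP candidates for a FIP port.

    Illustrative helper: subnets are dicts with at least an
    'ip_version' key, as in the Neutron subnet API representation.
    """
    return [s for s in subnets if s["ip_version"] == 4]
```

With the network above, the 2011::/64 subnet would simply be filtered out before allocation.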

[Yahoo-eng-team] [Bug 1939374] [NEW] [OVN] Support baremetal type vnic

2021-08-09 Thread Liu Xie
Public bug reported:

As the title describes, we could support the baremetal vnic type in the
same way as the sriov vnic type.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1939374

Title:
  [OVN] Support baremetal type vnic

Status in neutron:
  New

Bug description:
  As the title describes, we could support the baremetal vnic type in
  the same way as the sriov vnic type.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1939374/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1946589] [NEW] [OVN] localport might not be updated when creating multiple subnets for its network

2021-10-10 Thread Liu Xie
Public bug reported:

When a subnet is created for a network, ovn_client updates the
external_ids of the network's metadata port. We focus on two fields of
the localport: 'neutron:cidrs' and 'mac', because those fields contain
the port's fixed_ips.

However, in scenarios such as batch-creating multiple subnets for one
network, the port's 'neutron:cidrs' and 'mac' might not be updated.


metadata port info:
neutron port-show 15f0d39b-445b-4b19-a32f-6db8136871de -c device_owner -c fixed_ips
+--------------+------------------------------------------------------------------------------------+
| Field        | Value                                                                              |
+--------------+------------------------------------------------------------------------------------+
| device_owner | network:distributed                                                                |
| fixed_ips    | {"subnet_id": "373254b2-6791-4fe2-8038-30e91a9e9c8d", "ip_address": "192.168.0.2"} |
|              | {"subnet_id": "d0d871af-158e-4f45-8af2-92f2058521a3", "ip_address": "192.168.1.2"} |
|              | {"subnet_id": "eeac857f-ab0e-438f-9fa6-2ae0cd3de41a", "ip_address": "192.168.2.2"} |
+--------------+------------------------------------------------------------------------------------+

localport port info:
_uuid   : 2e17ffa7-f501-49e5-97ce-9a8731e60699
chassis : []
datapath: cfebe6fc-52fc-43ec-a25b-73d30abe4d00
encap   : []
external_ids: {"neutron:cidrs"="192.168.2.2/24", 
"neutron:device_id"=ovnmeta-d39ddf74-9542-4ebf-9d9b-a44d3c11d1fc, 
"neutron:device_owner"="network:distributed", 
"neutron:network_name"=neutron-d39ddf74-9542-4ebf-9d9b-a44d3c11d1fc, 
"neutron:port_name"="", "neutron:project_id"=fc5ea82972ce42499ddc18bc4733eaab, 
"neutron:revision_number"="4", "neutron:security_group_ids"=""}
gateway_chassis : []
ha_chassis_group: []
logical_port: "15f0d39b-445b-4b19-a32f-6db8136871de"
mac : ["fa:16:3e:e2:60:18 192.168.2.2"]
nat_addresses   : []
options : {requested-chassis=""}
parent_port : []
tag : []
tunnel_key  : 1
type: localport
up  : false
virtual_parent  : []
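For comparison, the value that 'neutron:cidrs' should hold can be recomputed from the port's fixed_ips; in the output above it contains only 192.168.2.2/24 instead of all three addresses. An illustrative recomputation (not ovn_client's actual code; the expected format is one "ip/prefixlen" entry per fixed IP):

```python
def expected_localport_cidrs(fixed_ips, subnets):
    """Rebuild the 'neutron:cidrs' external_id value from fixed_ips.

    Each fixed IP contributes "ip/prefixlen", taking the prefix length
    from its subnet's CIDR; entries are space-separated.
    """
    by_id = {s["id"]: s for s in subnets}
    entries = []
    for ip in fixed_ips:
        prefixlen = by_id[ip["subnet_id"]]["cidr"].split("/")[1]
        entries.append(f"{ip['ip_address']}/{prefixlen}")
    return " ".join(entries)
```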

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- [OVN] localport might not be updated its  external_ids when create multiple 
subnets for its network
+ [OVN] localport might not be updated when create multiple subnets for its 
network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1946589

Title:
  [OVN] localport might not be updated when creating multiple subnets
  for its network

Status in neutron:
  New

Bug description:
  When a subnet is created for a network, ovn_client updates the
  external_ids of the network's metadata port. We focus on two fields of
  the localport: 'neutron:cidrs' and 'mac', because those fields contain
  the port's fixed_ips.

  However, in scenarios such as batch-creating multiple subnets for one
  network, the port's 'neutron:cidrs' and 'mac' might not be updated.


  metadata port info:
  neutron port-show 15f0d39b-445b-4b19-a32f-6db8136871de -c device_owner -c fixed_ips
  +--------------+------------------------------------------------------------------------------------+
  | Field        | Value                                                                              |
  +--------------+------------------------------------------------------------------------------------+
  | device_owner | network:distributed                                                                |
  | fixed_ips    | {"subnet_id": "373254b2-6791-4fe2-8038-30e91a9e9c8d", "ip_address": "192.168.0.2"} |
  |              | {"subnet_id": "d0d871af-158e-4f45-8af2-92f2058521a3", "ip_address": "192.168.1.2"} |
  |              | {"subnet_id": "eeac857f-ab0e-438f-9fa6-2ae0cd3de41a", "ip_address": "192.168.2.2"} |
  +--------------+------------------------------------------------------------------------------------+

  localport port info:
  _uuid   : 2e17ffa7-f501-49e5-97ce-9a8731e60699
  chassis : []
  datapath: cfebe6fc-52fc-43ec-a25b-73d30abe4d00
  encap   : []
  external_ids: {"neutron:cidrs"="192.168.2.2/24", 
"neutron:device_id"=ovnmeta-d39ddf74-9542-4ebf-9d9b-a44d3c11d1fc, 
"neutron:device_owner"="network:distributed", 
"neutron:network_name"=neutron-d39ddf74-9542-4ebf-9d9b-a44d3c11d1fc, 
"neutron:port_name"="", "neutron:project_id"=fc5ea82972ce42499ddc18bc4733eaab, 
"neutron:revision_number"="4", "neutron:security_group_ids"=""}
  gateway_chassis : []
  ha_chassis_group: []
  logical_port: "15f0d39b-445b-4b19-a32f-6db8136871de"
  mac : ["fa:16:3e:e2:60:18 192.168.2.2"]
  nat_addresses   : []
  options : {requested-chassis=""}
  parent_port : []
  

[Yahoo-eng-team] [Bug 1946713] [NEW] [ovn]Network's availability_zones is empty

2021-10-11 Thread Liu Xie
Public bug reported:

The OVN driver did not support availability zones at first, because OVN
distributes its DHCP service across chassis.

With patch [1], Neutron now supports Network Availability Zones in
ML2/OVN, but the network's availability_zones field is still empty.

neutron net-show ce361094-dcfa-42e7-a4b9-b4adf2223341 -c availability_zone_hints -c availability_zones
+-------------------------+------------+
| Field                   | Value      |
+-------------------------+------------+
| availability_zone_hints | default-az |
| availability_zones      |            |
+-------------------------+------------+

[1]https://review.opendev.org/c/openstack/neutron/+/762550/

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  Ovn driver is not support availability_zones earlier because ovn
  distribute dhcp server.
  
  Through this patch [1], neutron already support for Network Availability
  Zones in ML2/OVN. But network's availability_zones is also empty.
  
  neutron net-show ce361094-dcfa-42e7-a4b9-b4adf2223341 -c 
availability_zone_hints -c availability_zones
  +-++
  | Field   | Value  |
  +-++
  | availability_zone_hints | default-az |
  | availability_zones  ||
  +-++
+ 
+ [1]https://review.opendev.org/c/openstack/neutron/+/762550/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1946713

Title:
  [ovn]Network's availability_zones is empty

Status in neutron:
  New

Bug description:
  The OVN driver did not support availability zones at first, because
  OVN distributes its DHCP service across chassis.

  With patch [1], Neutron now supports Network Availability Zones in
  ML2/OVN, but the network's availability_zones field is still empty.

  neutron net-show ce361094-dcfa-42e7-a4b9-b4adf2223341 -c availability_zone_hints -c availability_zones
  +-------------------------+------------+
  | Field                   | Value      |
  +-------------------------+------------+
  | availability_zone_hints | default-az |
  | availability_zones      |            |
  +-------------------------+------------+

  [1]https://review.opendev.org/c/openstack/neutron/+/762550/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1946713/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1946764] [NEW] [OVN] Any dhcp options which are string type should be escaped

2021-10-12 Thread Liu Xie
Public bug reported:


2021-10-12T14:11:45Z|290912|lflow|WARN|error parsing actions "reg0[3] = 
put_dhcp_opts(offerip = 192.168.100.243, bootfile_name = 
https://127.0.0.1/boot.ipxe, classless_static_route = 
{169.254.169.254/32,192.168.100.2, 0.0.0.0/0,192.168.100.1}, dns_server = 
{10.222.0.3}, lease_time = 43200, mtu = 1442, netmask = 255.255.255.0, 
path_prefix = /var/lib/ark/tftpboot, router = 192.168.100.1, server_id = 
192.168.100.1, tftp_server = 192.168.101.10, tftp_server_address = 
192.168.101.10); next;": Syntax error at `https' expecting constant.
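The parse failure happens because string-typed option values such as bootfile_name are emitted unquoted, so ovn-northd chokes on tokens like `https`. A sketch of the kind of escaping needed (an illustrative helper, not the actual Neutron OVN driver code): wrap string values in double quotes, escaping embedded backslashes and quotes first.

```python
def escape_ovn_string_opt(value):
    """Quote a string-typed DHCP option value for put_dhcp_opts.

    Double-quoting makes ovn-northd parse the value as a string
    constant instead of a bare token.
    """
    escaped = value.replace("\\", "\\\\").replace('"', '\\"')
    return f'"{escaped}"'
```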

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1946764

Title:
  [OVN] Any dhcp options which are string type should be escaped

Status in neutron:
  New

Bug description:
  
  2021-10-12T14:11:45Z|290912|lflow|WARN|error parsing actions "reg0[3] = 
put_dhcp_opts(offerip = 192.168.100.243, bootfile_name = 
https://127.0.0.1/boot.ipxe, classless_static_route = 
{169.254.169.254/32,192.168.100.2, 0.0.0.0/0,192.168.100.1}, dns_server = 
{10.222.0.3}, lease_time = 43200, mtu = 1442, netmask = 255.255.255.0, 
path_prefix = /var/lib/ark/tftpboot, router = 192.168.100.1, server_id = 
192.168.100.1, tftp_server = 192.168.101.10, tftp_server_address = 
192.168.101.10); next;": Syntax error at `https' expecting constant.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1946764/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1948447] [NEW] [sriov] The version of pyroute2 should be upgraded in requirements.txt

2021-10-22 Thread Liu Xie
Public bug reported:

Per bug [1], we should upgrade the minimum version of pyroute2 to 0.6.5
in requirements.txt.

[1]https://bugs.launchpad.net/ubuntu/+source/pyroute2/+bug/1904730
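The corresponding requirements change would be a one-line bump; the version comes from the report, while the exact pin style shown here is only illustrative:

```
pyroute2>=0.6.5  # netlink library used by the SR-IOV agent
```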

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: sriov

** Summary changed:

- The version of pyroute2 should be upgrade in requirement.txt
+ [sriov]The version of pyroute2 should be upgrade in requirement.txt

** Tags added: sriov

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1948447

Title:
  [sriov] The version of pyroute2 should be upgraded in requirements.txt

Status in neutron:
  New

Bug description:
  Per bug [1], we should upgrade the minimum version of pyroute2 to
  0.6.5 in requirements.txt.

  [1]https://bugs.launchpad.net/ubuntu/+source/pyroute2/+bug/1904730

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1948447/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1954771] [NEW] [OVN]MAC of lrp has not been updated in MAC_Binding when it re-associated

2021-12-14 Thread Liu Xie
Public bug reported:

With patch [1], we avoid pre-populating flows for router-to-router
communication.

If we re-associate a router interface with the router, its MAC is
changed in the Neutron DB while its fixed_ip is not; but we found that
the MAC of the lrp is not updated in MAC_Binding, so the VM cannot
reach the public network.

[1]https://review.opendev.org/c/openstack/neutron/+/814421

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- [OVN]mac of lrp has not been updated in MAC_Binding when it re-associated
+ [OVN]MAC of lrp has not been updated in MAC_Binding when it re-associated

** Description changed:

  With this patch[1], we avoid pre-populating flows for router to router
  communication.
  
  If we re-associated router interface with router, its mac has been
- changed in neutron db, but we found that mac of lrp has not been updated
- in MAC_Binding, vm also could not visit public network.
+ changed in neutron db and fixed_ip not, but we found that mac of lrp has
+ not been updated in MAC_Binding, vm also could not visit public network.
  
  [1]https://review.opendev.org/c/openstack/neutron/+/814421

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1954771

Title:
  [OVN]MAC of lrp has not been updated in MAC_Binding when it re-
  associated

Status in neutron:
  New

Bug description:
  With patch [1], we avoid pre-populating flows for router-to-router
  communication.

  If we re-associate a router interface with the router, its MAC is
  changed in the Neutron DB while its fixed_ip is not; but we found that
  the MAC of the lrp is not updated in MAC_Binding, so the VM cannot
  reach the public network.

  [1]https://review.opendev.org/c/openstack/neutron/+/814421

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1954771/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1958355] [NEW] [OVN][SRIOV] Not support create port which is type of external_port backed geneve network

2022-01-19 Thread Liu Xie
Public bug reported:

The external_port type is only supported when the network is of VLAN
type, not Geneve. So we should reject the request when an SR-IOV NIC
port is created on a tunnel network.
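Such a check could run at port-create time. A minimal sketch under assumed names (both `TUNNEL_TYPES` and the function are illustrative, not Neutron's actual validation hook):

```python
TUNNEL_TYPES = {"geneve", "gre", "vxlan"}

def check_external_port_network(network_type):
    """Reject OVN external (e.g. SR-IOV backed) ports on tunnel networks.

    External ports need a provider network such as VLAN, so any tunnel
    network type raises an error instead of silently creating a port
    that cannot work.
    """
    if network_type in TUNNEL_TYPES:
        raise ValueError(
            f"external ports are not supported on {network_type} networks")
    return True
```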

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1958355

Title:
  [OVN][SRIOV] Not support create port which is type of external_port
  backed geneve network

Status in neutron:
  New

Bug description:
  The external_port type is only supported when the network is of VLAN
  type, not Geneve. So we should reject the request when an SR-IOV NIC
  port is created on a tunnel network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1958355/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1968269] [NEW] [router qos] qos binding is not cleared after removing the gateway

2022-04-08 Thread Liu Xie
Public bug reported:

As the title describes, after removing the gateway from a router, we can see that the QoS binding remains.
And if we then delete this QoS policy, we hit a QosPolicyInUse error.
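The missing cleanup can be sketched as follows (an illustrative data model, not Neutron's actual DB layer): when a router's gateway is removed, the gateway-IP QoS bindings for that router should be dropped so the policy is no longer reported as in use.

```python
def drop_gateway_qos_bindings(router_id, bindings):
    """Remove gateway QoS bindings owned by the given router.

    bindings is a list of dicts with a 'router_id' key; the returned
    list is what should remain after the gateway is removed.
    """
    return [b for b in bindings if b["router_id"] != router_id]
```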

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1968269

Title:
  [router qos] qos binding is not cleared after removing the gateway

Status in neutron:
  New

Bug description:
  As the title describes, after removing the gateway from a router, we can see that the QoS binding remains.
  And if we then delete this QoS policy, we hit a QosPolicyInUse error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1968269/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1928164] Re: [OVN] Ovn-controller does not update the flows table when localport tap device is rebuilt

2023-04-24 Thread Liu Xie
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1928164

Title:
  [OVN] Ovn-controller does not update the flows table when the
  localport tap device is rebuilt

Status in neutron:
  Invalid

Bug description:
  After all VMs using network A on a hypervisor are deleted, network A's tap device is also deleted.
  After a VM is re-created on this hypervisor using network A, the tap device is rebuilt.
  At this point, the flow table has not been updated and the VM's traffic cannot reach the localport. It can be restored by restarting ovn-controller.

  ovn version as:
  # ovn-controller --version
  ovn-controller 21.03.0
  Open vSwitch Library 2.15.90
  OpenFlow versions 0x6:0x6
  SB DB Schema 20.16.1

  Trace by ovn-trace is normal:

  ()[root@ovn-ovsdb-sb-1 /]# ovn-trace --summary 
5f79485f-682c-434a-8202-f6658fa30076 'inport == 
"643e3bc7-0b44-4929-8c4d-ec63f19097f8" && eth.src == fa:16:3e:55:4a:8f && 
ip4.src == 192.168.222.168 &&  eth.dst == fa:16:3e:4a:d6:bc &&  ip4.dst == 
169.254.169.254 && ip.ttl == 32'
  # 
ip,reg14=0x16,vlan_tci=0x,dl_src=fa:16:3e:55:4a:8f,dl_dst=fa:16:3e:4a:d6:bc,nw_src=192.168.222.168,nw_dst=169.254.169.254,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=32
  ingress(dp="jufeng", inport="instance-h0NdYw_jufeng_3fe45e35") {
  next;
  next;
  reg0[0] = 1;
  next;
  ct_next;
  ct_next(ct_state=est|trk /* default (use --ct to customize) */) {
  reg0[8] = 1;
  reg0[10] = 1;
  next;
  next;
  outport = "8b01e3";
  output;
  egress(dp="jufeng", inport="instance-h0NdYw_jufeng_3fe45e35", 
outport="8b01e3") {
  reg0[0] = 1;
  next;
  ct_next;
  ct_next(ct_state=est|trk /* default (use --ct to customize) */) {
  reg0[8] = 1;
  reg0[10] = 1;
  next;
  output;
  /* output to "8b01e3", type "localport" */;
  };
  };
  };
  };

  
  Trace by flows is not normal:

  # ovs-appctl ofproto/trace br-int 
in_port=33,tcp,dl_src=fa:16:3e:55:4a:8f,dl_dst=fa:16:3e:4a:d6:bc,nw_src=192.168.222.168,nw_dst=169.254.169.254,tp_dst=80
  Flow: 
tcp,in_port=33,vlan_tci=0x,dl_src=fa:16:3e:55:4a:8f,dl_dst=fa:16:3e:4a:d6:bc,nw_src=192.168.222.168,nw_dst=169.254.169.254,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0

  bridge("br-int")
  
   0. in_port=33, priority 100, cookie 0xebbdd9f7
  set_field:0x19->reg13
  set_field:0x5->reg11
  set_field:0x3->reg12
  set_field:0x3->metadata
  set_field:0x16->reg14
  resubmit(,8)
   8. reg14=0x16,metadata=0x3,dl_src=fa:16:3e:55:4a:8f, priority 50, cookie 
0x1a73fa10
  resubmit(,9)
   9. 
ip,reg14=0x16,metadata=0x3,dl_src=fa:16:3e:55:4a:8f,nw_src=192.168.222.168, 
priority 90, cookie 0x2690070f
  resubmit(,10)
  10. metadata=0x3, priority 0, cookie 0x4f77990b
  resubmit(,11)
  11. metadata=0x3, priority 0, cookie 0xd7e42894
  resubmit(,12)
  12. metadata=0x3, priority 0, cookie 0xa5400341
  resubmit(,13)
  13. ip,metadata=0x3, priority 100, cookie 0x510177c2
  set_field:0x1/0x1->xxreg0
  resubmit(,14)
  14. metadata=0x3, priority 0, cookie 0x5505c270
  resubmit(,15)
  15. ip,reg0=0x1/0x1,metadata=0x3, priority 100, cookie 0xf2eaa3a5
  ct(table=16,zone=NXM_NX_REG13[0..15])
  drop
   -> A clone of the packet is forked to recirculate. The forked pipeline 
will be resumed at table 16.
   -> Sets the packet to an untracked state, and clears all the conntrack 
fields.

  Final flow: 
tcp,reg0=0x1,reg11=0x5,reg12=0x3,reg13=0x19,reg14=0x16,metadata=0x3,in_port=33,vlan_tci=0x,dl_src=fa:16:3e:55:4a:8f,dl_dst=fa:16:3e:4a:d6:bc,nw_src=192.168.222.168,nw_dst=169.254.169.254,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0
  Megaflow: 
recirc_id=0,eth,tcp,in_port=33,vlan_tci=0x/0x1fff,dl_src=fa:16:3e:55:4a:8f,dl_dst=fa:16:3e:4a:d6:bc,nw_src=192.168.222.168,nw_dst=128.0.0.0/2,nw_frag=no
  Datapath actions: ct(zone=25),recirc(0xe8)

  
===
  recirc(0xe8) - resume conntrack with default ct_state=trk|new (use --ct-next 
to customize)
  
===

  Flow:
  
recirc_id=0xe8,ct_state=new|trk,ct_zone=25,eth,tcp,reg0=0x1,reg11=0x5,reg12=0x3,reg13=0x19,reg14=0x16,metadata=0x3,in_port=33,vlan_tci=0x,dl_src=fa:16:3e:55:4a:8f,dl_dst=fa:16:3e:4a:d6:bc,nw_src=192.168.222.168,nw_dst=169.254.169.254,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0

  bridge("br-int")
  
  thaw
  Resuming from table 16
  16. ct_state=+new-est+trk,metadata=0x3, priority 7, cookie 0x6d37a2c
  
set_field:0x800

[Yahoo-eng-team] [Bug 2025055] [NEW] [rfe][ml2] Add a new API that supports cloning a specified security group

2023-06-26 Thread Liu Xie
Public bug reported:

Hi everyone:
  We want to define a new API that supports cloning a specified security_group.
  Consider the following case:
  If a user wants to create a new security_group with the same rules as an existing security_group, they have to repeat the same actions to create each rule.

  That is expensive, so we want to define a new API that creates a new
security_group and automatically copies the rules from the specified
security_group.

The API would look like:
PUT  /v2.0/security-groups/{security_group_id}/clone

{
"security_group": {
"name": "newsecgroup",
"description": "cloning security group from test",
"stateful": true
}
}

Does anyone have other ideas?
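Server-side, the clone operation mostly amounts to copying rule payloads. A sketch under assumed dict-shaped rules (the field names follow the Neutron security group rule API; the helper itself is hypothetical):

```python
def clone_rule_payloads(src_rules, new_sg_id):
    """Build rule-create payloads copied from an existing security group.

    Per-rule identifiers and ownership fields are dropped, and each
    copied rule is repointed at the new security group.
    """
    skip = {"id", "security_group_id", "project_id", "tenant_id"}
    cloned = []
    for rule in src_rules:
        payload = {k: v for k, v in rule.items() if k not in skip}
        payload["security_group_id"] = new_sg_id
        cloned.append(payload)
    return cloned
```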

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ml2

** Description changed:

  Hi everyone:
-   We want to define a new api that supports cloning a specified 
security_group.
-   Consider the following case: 
-   If the user wants to create a new security_group with the same rules as a 
created security_group, he should do some duplicate actions to create rules.
+   We want to define a new api that supports cloning a specified 
security_group.
+   Consider the following case:
+   If the user wants to create a new security_group with the same rules as a 
created security_group, he should do some duplicate actions to create rules.
  
- It looks expensive, so that we want to define a new API that supports
+   It looks expensive, so that we want to define a new API that supports
  create a new security_group and automatically copy the rules from the
  specified security_group.
  
  API likes:
  PUT  /v2.0/security-groups/{security_group_id}/clone
  
  {
- "security_group": {
- "name": "newsecgroup",
- "description": "cloning security group from test",
- "stateful": true
- }
+ "security_group": {
+ "name": "newsecgroup",
+ "description": "cloning security group from test",
+ "stateful": true
+ }
  }
  
  Does anyone have other ideas?

** Summary changed:

- [ml2] Add a new API that supports cloning a specified security group
+ [rfe][ml2] Add a new API that supports cloning a specified security group

** Tags added: ml2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025055

Title:
  [rfe][ml2] Add a new API that supports cloning a specified security
  group

Status in neutron:
  New

Bug description:
  Hi everyone:
    We want to define a new API that supports cloning a specified security_group.
    Consider the following case:
    If a user wants to create a new security_group with the same rules as an existing security_group, they have to repeat the same actions to create each rule.

    That is expensive, so we want to define a new API that creates a new
  security_group and automatically copies the rules from the specified
  security_group.

  The API would look like:
  PUT  /v2.0/security-groups/{security_group_id}/clone

  {
  "security_group": {
  "name": "newsecgroup",
  "description": "cloning security group from test",
  "stateful": true
  }
  }

  Does anyone have other ideas?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2025055/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2028660] [NEW] [fwaas][rfe] support list type of port_range for firewall rule

2023-07-25 Thread Liu Xie
Public bug reported:

In some cases, customers want to specify a list of ports for 'source_port' (or
'destination_port') when creating a firewall rule.
Currently the firewall_rule API cannot meet this demand, so we want to extend
the API to support it.

For example:
"firewall_rule":{
"name": "test_rule",
"protocol": "tcp",
"source_port": ["22","23","30:80"]
}

Does anyone have other ideas?
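Input validation for the proposed attribute could normalize mixed single ports and ranges. A sketch (a hypothetical parser, not FWaaS code; entries use the same "start:end" notation as the example payload):

```python
def parse_port_entries(entries):
    """Normalize entries like "22" or "30:80" into (start, end) tuples.

    Validates that each port is in 1-65535 and that ranges are ordered.
    """
    result = []
    for entry in entries:
        start, _, end = entry.partition(":")
        start, end = int(start), int(end or start)
        if not (0 < start <= end <= 65535):
            raise ValueError(f"invalid port range: {entry}")
        result.append((start, end))
    return result
```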

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2028660

Title:
  [fwaas][rfe] support list type of port_range for firewall rule

Status in neutron:
  New

Bug description:
  In some cases, customers want to specify a list of ports for
  'source_port' (or 'destination_port') when creating a firewall rule.
  Currently the firewall_rule API cannot meet this demand, so we want to
  extend the API to support it.

  For example:
  "firewall_rule":{
  "name": "test_rule",
  "protocol": "tcp",
  "source_port": ["22","23","30:80"]
  }

  Does anyone have other ideas?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2028660/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2040242] [NEW] [ip allocation_pools] Why force first ip < (subnet.first + 1) if version of subnet is ipv6

2023-10-24 Thread Liu Xie
Public bug reported:

As we know, an IPv6 address can end with '0', like 2001::.

But when we create such an allocation pool in Neutron, we get an error like the
following:
neutron net-create net-v6
neutron subnet-create --ip-version 6 --allocation-pool start=2001::,end=2001::2 net-v6 2001::/64
The allocation pool 2001::-2001::2 spans beyond the subnet cidr 2001::/64.
Neutron server returns request_ids: ['req-9a6569ed-52d7-4c3f-ad7e-8986a041a347']

The error comes from the function 'validate_allocation_pools':

        else:  # IPv6 case
            subnet_first_ip = netaddr.IPAddress(subnet.first + 1)
            subnet_last_ip = netaddr.IPAddress(subnet.last)

        LOG.debug("Performing IP validity checks on allocation pools")
        ip_sets = []
        for ip_pool in ip_pools:
            start_ip = netaddr.IPAddress(ip_pool.first, ip_pool.version)
            end_ip = netaddr.IPAddress(ip_pool.last, ip_pool.version)
            if (start_ip.version != subnet.version or
                    end_ip.version != subnet.version):
                LOG.info("Specified IP addresses do not match "
                         "the subnet IP version")
                raise exc.InvalidAllocationPool(pool=ip_pool)
            if start_ip < subnet_first_ip or end_ip > subnet_last_ip:
                LOG.info("Found pool larger than subnet "
                         "CIDR:%(start)s - %(end)s",
                         {'start': start_ip, 'end': end_ip})
                raise exc.OutOfBoundsAllocationPool(
                    pool=ip_pool,
                    subnet_cidr=subnet_cidr)

Why does Neutron IPAM reject a pool whose first IP is less than
(subnet.first + 1) when the subnet version is IPv6?
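The effect of the quoted check can be reproduced with the stdlib ipaddress module (an illustrative re-implementation for the IPv6 branch, not the Neutron code itself): a pool that starts at the subnet's network address, like 2001::, is flagged as out of bounds because the comparison uses subnet.first + 1.

```python
import ipaddress

def pool_out_of_bounds_v6(cidr, start, end):
    """Mimic the IPv6 branch of validate_allocation_pools.

    Returns True when the pool would be rejected: its start is below
    the subnet's first address + 1, or its end is above the last
    address of the subnet.
    """
    net = ipaddress.ip_network(cidr)
    first_allowed = net.network_address + 1
    last_allowed = net[-1]
    s, e = ipaddress.ip_address(start), ipaddress.ip_address(end)
    return s < first_allowed or e > last_allowed
```

So the pool from the report, 2001:: to 2001::2, is rejected solely because it includes 2001:: itself.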

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2040242

Title:
  [ip allocation_pools] Why  force first ip < (subnet.first + 1) if
  version of subnet is ipv6

Status in neutron:
  New

Bug description:
  As we know, an IPv6 address can end with '0', like 2001::.

  But when we create such an allocation pool in Neutron, we get an error
  like the following:
  neutron net-create net-v6
  neutron subnet-create --ip-version 6 --allocation-pool start=2001::,end=2001::2 net-v6 2001::/64
  The allocation pool 2001::-2001::2 spans beyond the subnet cidr 2001::/64.
  Neutron server returns request_ids: ['req-9a6569ed-52d7-4c3f-ad7e-8986a041a347']

  The error comes from the function 'validate_allocation_pools':

          else:  # IPv6 case
              subnet_first_ip = netaddr.IPAddress(subnet.first + 1)
              subnet_last_ip = netaddr.IPAddress(subnet.last)

          LOG.debug("Performing IP validity checks on allocation pools")
          ip_sets = []
          for ip_pool in ip_pools:
              start_ip = netaddr.IPAddress(ip_pool.first, ip_pool.version)
              end_ip = netaddr.IPAddress(ip_pool.last, ip_pool.version)
              if (start_ip.version != subnet.version or
                      end_ip.version != subnet.version):
                  LOG.info("Specified IP addresses do not match "
                           "the subnet IP version")
                  raise exc.InvalidAllocationPool(pool=ip_pool)
              if start_ip < subnet_first_ip or end_ip > subnet_last_ip:
                  LOG.info("Found pool larger than subnet "
                           "CIDR:%(start)s - %(end)s",
                           {'start': start_ip, 'end': end_ip})
                  raise exc.OutOfBoundsAllocationPool(
                      pool=ip_pool,
                      subnet_cidr=subnet_cidr)

  Why does Neutron IPAM reject a pool whose first IP is less than
  (subnet.first + 1) when the subnet version is IPv6?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2040242/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2040457] [NEW] [allocation_pools] Need to regenerate pools from cidr and gateway-ip after updating a subnet with empty 'allocation_pools'

2023-10-25 Thread Liu Xie
Public bug reported:

We found that allocation_pools is an empty list after updating a subnet with an
empty 'allocation_pools':
neutron subnet-show f84d8c24-251c-4a28-83a0-6c7f147c3da1
+------------------+------------------+
| Field            | Value            |
+------------------+------------------+
| allocation_pools |                  |
| cidr             | 192.168.123.0/24 |
+------------------+------------------+

In my opinion, if we clear the allocation_pools of a subnet, new
allocation_pools should be regenerated from the subnet's cidr and
gateway_ip.
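The requested behaviour can be sketched with the stdlib (a hypothetical helper; Neutron's real default-pool generator also splits the pool in two when the gateway sits in the middle of the range):

```python
import ipaddress

def regenerate_pool(cidr, gateway_ip):
    """Rebuild a default allocation pool from an IPv4 CIDR.

    Uses all usable addresses in the CIDR, shrinking the pool by one
    address when the gateway sits at either end of the range.
    """
    net = ipaddress.ip_network(cidr)
    first, last = net.network_address + 1, net.broadcast_address - 1
    gw = ipaddress.ip_address(gateway_ip)
    if gw == first:
        first += 1
    elif gw == last:
        last -= 1
    return str(first), str(last)
```

For the subnet above, with the usual gateway 192.168.123.1, this would regenerate the pool 192.168.123.2 - 192.168.123.254.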

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2040457

Title:
  [allocation_pools] Need to regenerate pools from cidr and gateway-ip
  after updating a subnet with empty 'allocation_pools'

Status in neutron:
  New

Bug description:
  We found that allocation_pools is empty list after update subnet with empty 
'allocation_pools':
   neutron subnet-show f84d8c24-251c-4a28-83a0-6c7f147c3da1
  +---+--+
  | Field | Value|
  +---+--+
  | allocation_pools  |  |
  | cidr  | 192.168.123.0/24 |

  In my opinion, if we clear the allocation_pools of a subnet, Neutron
  should regenerate the allocation_pools from the cidr and gateway-ip
  of the subnet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2040457/+subscriptions




[Yahoo-eng-team] [Bug 2043759] [NEW] [ovn] no need rpc notifier if no l2 agent

2023-11-16 Thread Liu Xie
Public bug reported:

As the title describes, there is no need to run the RPC notifier when there
are no L2 agents, i.e. when using the ml2/ovn driver.
Maybe we can provide a boolean flag like 'disable_rpc_notifier', defaulting
to 'False', which can be set to 'True' to disable the RPC notifier when
using the ml2/ovn driver.
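A minimal sketch of how such a flag could gate notifier construction (the
option name 'disable_rpc_notifier' is only the one proposed above, i.e.
hypothetical, and stdlib configparser stands in for oslo.config):

```python
import configparser

def build_rpc_notifier(conf: configparser.ConfigParser):
    # Hypothetical option: when disable_rpc_notifier is true (ml2/ovn with
    # no L2 agents), skip constructing the agent RPC notifier entirely.
    if conf.getboolean("DEFAULT", "disable_rpc_notifier", fallback=False):
        return None
    return "rpc-notifier"  # stand-in for the real agent notifier object

conf = configparser.ConfigParser()
conf.read_string("[DEFAULT]\ndisable_rpc_notifier = true\n")
notifier = build_rpc_notifier(conf)  # None: no RPC notifier is started
```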

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2043759

Title:
  [ovn] no need rpc notifier if no l2 agent

Status in neutron:
  New

Bug description:
  As the title describes, there is no need to run the RPC notifier when
  there are no L2 agents, i.e. when using the ml2/ovn driver.
  Maybe we can provide a boolean flag like 'disable_rpc_notifier', defaulting
  to 'False', which can be set to 'True' to disable the RPC notifier when
  using the ml2/ovn driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2043759/+subscriptions




[Yahoo-eng-team] [Bug 2045237] [NEW] [bulk] there is no cleanup operation if bulk port creation fails

2023-11-30 Thread Liu Xie
Public bug reported:

When we used the bulk API to create ports on one subnet (e.g.
'935fe38e-743f-45e5-a646-4ebbbf16ade7'), some errors occurred.
We used SQL queries to find the IP addresses that differ between
ipallocations and ipamallocations:

ip_address in ipallocations but not in ipamallocations:

MariaDB [neutron]> select count(*) from ipallocations where 
subnet_id='935fe38e-743f-45e5-a646-4ebbbf16ade7' && ip_address  not in (select 
ip_address from ipamallocations where ipam_subnet_id in (select id from  
ipamsubnets where neutron_subnet_id='935fe38e-743f-45e5-a646-4ebbbf16ade7')) ;
+--+
| count(*) |
+--+
|  873 |
+--+
1 row in set (0.01 sec)

ip_address in ipamallocations but not in ipallocations:

MariaDB [neutron]> select count(*)  from ipamallocations where ipam_subnet_id 
in (select id from  ipamsubnets where 
neutron_subnet_id='935fe38e-743f-45e5-a646-4ebbbf16ade7') && ip_address not in 
(select ip_address from ipallocations where 
subnet_id='935fe38e-743f-45e5-a646-4ebbbf16ade7');
+--+
| count(*) |
+--+
|   63 |
+--+
1 row in set (0.01 sec)

It seems that stale resources remain when bulk port creation fails; we
cannot find any operation that cleans up the partially created ports.
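The consistency check from the queries above can be reproduced in miniature
(simplified schema and made-up data; sqlite3 stands in for MariaDB):

```python
import sqlite3

# Tiny stand-ins for the Neutron tables referenced by the queries above.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE ipallocations (subnet_id TEXT, ip_address TEXT);
CREATE TABLE ipamallocations (ipam_subnet_id TEXT, ip_address TEXT);
CREATE TABLE ipamsubnets (id TEXT, neutron_subnet_id TEXT);
INSERT INTO ipamsubnets VALUES ('ipam-1', 'subnet-1');
INSERT INTO ipallocations VALUES ('subnet-1', '10.0.0.5'), ('subnet-1', '10.0.0.6');
INSERT INTO ipamallocations VALUES ('ipam-1', '10.0.0.6');
""")

# IPs present in ipallocations but missing from ipamallocations,
# the first of the two orphan queries shown above.
orphans = db.execute("""
SELECT ip_address FROM ipallocations
WHERE subnet_id = 'subnet-1'
  AND ip_address NOT IN (
    SELECT ip_address FROM ipamallocations
    WHERE ipam_subnet_id IN (
      SELECT id FROM ipamsubnets WHERE neutron_subnet_id = 'subnet-1'))
""").fetchall()
# orphans == [('10.0.0.5',)]
```

Swapping the two tables in the query gives the second direction of the check.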

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2045237

Title:
  [bulk] there is no cleanup operation if bulk port creation fails

Status in neutron:
  New

Bug description:
  When we used the bulk API to create ports on one subnet (e.g.
  '935fe38e-743f-45e5-a646-4ebbbf16ade7'), some errors occurred.
  We used SQL queries to find the IP addresses that differ between
  ipallocations and ipamallocations:

  ip_address in ipallocations but not in ipamallocations:

  MariaDB [neutron]> select count(*) from ipallocations where 
subnet_id='935fe38e-743f-45e5-a646-4ebbbf16ade7' && ip_address  not in (select 
ip_address from ipamallocations where ipam_subnet_id in (select id from  
ipamsubnets where neutron_subnet_id='935fe38e-743f-45e5-a646-4ebbbf16ade7')) ;
  +--+
  | count(*) |
  +--+
  |  873 |
  +--+
  1 row in set (0.01 sec)

  ip_address in ipamallocations but not in ipallocations:

  MariaDB [neutron]> select count(*)  from ipamallocations where ipam_subnet_id 
in (select id from  ipamsubnets where 
neutron_subnet_id='935fe38e-743f-45e5-a646-4ebbbf16ade7') && ip_address not in 
(select ip_address from ipallocations where 
subnet_id='935fe38e-743f-45e5-a646-4ebbbf16ade7');
  +--+
  | count(*) |
  +--+
  |   63 |
  +--+
  1 row in set (0.01 sec)

  It seems that stale resources remain when bulk port creation fails; we
  cannot find any operation that cleans up the partially created ports.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2045237/+subscriptions




[Yahoo-eng-team] [Bug 2061350] [NEW] [ovn] The 'ovn_client' of the L3 plugin is occasionally NoneType

2024-04-14 Thread Liu Xie
Public bug reported:

I don't know how to reproduce it, but we have encountered it twice. The
errors are as follows:

2024-04-01 10:53:46.185 252 INFO neutron.wsgi 
[req-3ed42a43-8895-4ad3-b675-af89c6d47631 9862b8ef431c43489ada1bb12f97be59 
c2c9335342874b758aa6af2166fca837 - default default] 192.168.10.5 "GET 
/v2.0/ports?device_id=ae622d1a-416f-4676-b340-be0af8e0626a HTTP/1.1" status: 
200  len: 7306 time: 0.0576138
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource 
[req-e0dcd168-ad4e-43d8-b88c-47393077aa64 b255e17db7c5499c8c8e45e363ae3b47 
12edc48812c94bc3a861bc8bafcf8fa6 - default default] index failed: No details.: 
AttributeError: 'NoneType' object has no attribute 'get_lrouter'
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python3.6/site-packages/neutron/api/v2/resource.py", line 98, 
in resource
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python3.6/site-packages/neutron_lib/db/api.py", line 139, in 
wrapped
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python3.6/site-packages/oslo_utils/excutils.py", line 227, in 
__exit__
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource 
self.force_reraise()
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python3.6/site-packages/oslo_utils/excutils.py", line 200, in 
force_reraise
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource raise self.value
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python3.6/site-packages/neutron_lib/db/api.py", line 135, in 
wrapped
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python3.6/site-packages/oslo_db/api.py", line 154, in wrapper
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python3.6/site-packages/oslo_utils/excutils.py", line 227, in 
__exit__
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource 
self.force_reraise()
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python3.6/site-packages/oslo_utils/excutils.py", line 200, in 
force_reraise
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource raise self.value
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python3.6/site-packages/oslo_db/api.py", line 142, in wrapper
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python3.6/site-packages/neutron_lib/db/api.py", line 183, in 
wrapped
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource LOG.debug("Retry 
wrapper got retriable exception: %s", e)
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python3.6/site-packages/oslo_utils/excutils.py", line 227, in 
__exit__
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource 
self.force_reraise()
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python3.6/site-packages/oslo_utils/excutils.py", line 200, in 
force_reraise
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource raise self.value
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python3.6/site-packages/neutron_lib/db/api.py", line 179, in 
wrapped
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource return 
f(*dup_args, **dup_kwargs)
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python3.6/site-packages/neutron/api/v2/base.py", line 369, in 
index
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource return 
self._items(request, True, parent_id)
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python3.6/site-packages/neutron/api/v2/base.py", line 304, in 
_items
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource obj_list = 
obj_getter(request.context, **kwargs)
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python3.6/site-packages/neutron_lib/db/api.py", line 217, in 
wrapped
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource return 
method(*args, **kwargs)
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python3.6/site-packages/neutron_lib/db/api.py", line 139, in 
wrapped
2024-04-01 10:53:46.184 245 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
2024-04-01 10:5

[Yahoo-eng-team] [Bug 2062511] [NEW] [ovn] ovn metadata agent occasionally reported down when SB connections error out

2024-04-19 Thread Liu Xie
Public bug reported:

If the metadata agent fails twice to write to the OVN SB within
agent_down_time, an alert is triggered indicating that the agent is down,
even though the SB is only snapshotting and recovers quickly afterwards.

This happens because "SbGlobalUpdateEvent" is event-driven and does not
retry after "_update_chassis" fails.
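One possible direction is to retry the transient SB write a few times before
giving up. A hedged sketch of such a retry wrapper (the helper and the flaky
callable are illustrative, not Neutron code):

```python
import time

def call_with_retries(fn, attempts=3, delay=0.0):
    """Sketch: retry a transient OVN SB write (e.g. an _update_chassis-style
    call) a few times instead of failing once and staying 'down'."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise  # exhausted retries: surface the real failure
            time.sleep(delay)

calls = []
def flaky():
    # Fails twice (SB snapshotting), then succeeds, like the scenario above.
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("SB snapshotting")
    return "ok"

result = call_with_retries(flaky)  # "ok" after two transient failures
```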

** Affects: neutron
 Importance: Undecided
     Assignee: Liu Xie (liushy)
 Status: New


** Tags: ovn

** Changed in: neutron
 Assignee: (unassigned) => Liu Xie (liushy)

** Description changed:

  If the metadata agent write twice failures to the OVN SB within the
  agent_down_time, an alert will be triggered indicating that the agent is
  down. Although the SB is snapshoting and quickly recovers thereafter.
  
  Because the "SbGlobalUpdateEvent" is Event driven and it would not retry
- after write SB falied.
+ after "_update_chassis" falied.

** Tags added: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2062511

Title:
  [ovn] ovn metadata agent occasionally reported down when SB connections
  error out

Status in neutron:
  New

Bug description:
  If the metadata agent fails twice to write to the OVN SB within
  agent_down_time, an alert is triggered indicating that the agent is down,
  even though the SB is only snapshotting and recovers quickly afterwards.

  This happens because "SbGlobalUpdateEvent" is event-driven and does not
  retry after "_update_chassis" fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2062511/+subscriptions




[Yahoo-eng-team] [Bug 2062536] [NEW] [ovn] agent_health_check does not work for ovn agents

2024-04-19 Thread Liu Xie
Public bug reported:


When "debug" is set to True, some logs show "found 0 active agents" after
the agent health check runs.
It seems that the agent_health_check mechanism does not work for OVN agents.

** Affects: neutron
 Importance: Undecided
     Assignee: Liu Xie (liushy)
 Status: New


** Tags: ovn

** Changed in: neutron
 Assignee: (unassigned) => Liu Xie (liushy)

** Tags added: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2062536

Title:
  [ovn] agent_health_check does not work for ovn agents

Status in neutron:
  New

Bug description:
  
  When "debug" is set to True, some logs show "found 0 active agents" after
  the agent health check runs.
  It seems that the agent_health_check mechanism does not work for OVN
  agents.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2062536/+subscriptions




[Yahoo-eng-team] [Bug 2077596] [NEW] [rfe][fwaas] Add normalized_cidr column to firewall rules

2024-08-21 Thread Liu Xie
Public bug reported:

If we use an invalid CIDR as the source_ip_address, such as
2:3dc2:c893:514a:966b:7969:42b0:00900/108, the firewall rule is still
created successfully. The main reason is that netaddr silently
normalizes this address.

The command is like:

openstack  firewall group rule create --ip-version 6 --source-ip-address
2:3dc2:c893:514a:966b:7969:42b0:00900/108

netaddr would format the CIDR address, and debugging shows:

>>> import netaddr
>>> ii=netaddr.IPNetwork('2:3dc2:c893:514a:966b:7969:42b0:00900/108')
>>> ii
IPNetwork('2:3dc2:c893:514a:966b:7969:42b0:900/108')
>>> ii.version
6

I found a similar issue for security groups, which has a good solution
to fix it[1]. Therefore, I think a fix is also needed for firewall
group rules.

[1]https://bugs.launchpad.net/neutron/+bug/1869129
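For comparison, the stdlib ipaddress module rejects the malformed form
outright instead of normalizing it; a validation sketch along those lines
(hypothetical helper, not the actual proposed fix):

```python
import ipaddress

def is_valid_cidr(cidr: str) -> bool:
    # The stdlib parser raises ValueError for malformed groups such as
    # '00900' (five hex digits), which netaddr silently normalizes.
    # strict=False only relaxes host bits, not the textual format.
    try:
        ipaddress.ip_network(cidr, strict=False)
    except ValueError:
        return False
    return True

# The malformed form from this report is rejected; the normalized one parses:
bad = is_valid_cidr("2:3dc2:c893:514a:966b:7969:42b0:00900/108")   # False
good = is_valid_cidr("2:3dc2:c893:514a:966b:7969:42b0:900/108")    # True
```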

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

** Description changed:

  If we use an invalid CIDR as the source_ip_address, such as
  2:3dc2:c893:514a:966b:7969:42b0:00900/108, it can still be successfully
  submitted after creating a firewall rule. The main reason is that
  netaddr formats this address.
  
  The command is like:
  
  openstack  firewall group rule create --ip-version 6 --source-ip-address
  2:3dc2:c893:514a:966b:7969:42b0:00900/108
  
  netaddr would format the CIDR address, and debugging shows:
  
  >>> import netaddr
  >>> ii=netaddr.IPNetwork('2:3dc2:c893:514a:966b:7969:42b0:00900/108')
  >>> ii
  IPNetwork('2:3dc2:c893:514a:966b:7969:42b0:900/108')
  >>> ii.version
  6
  
  I found a similar issue for security groups, which has a good solution
- to fix it . Therefore, I think a fix is also needed for firewall group
- rules.
+ to fix it[1] . Therefore, I think a fix is also needed for firewall
+ group rules.
  
  [1]https://bugs.launchpad.net/neutron/+bug/1869129

** Tags added: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2077596

Title:
  [rfe][fwaas] Add normalized_cidr column to firewall rules

Status in neutron:
  New

Bug description:
  If we use an invalid CIDR as the source_ip_address, such as
  2:3dc2:c893:514a:966b:7969:42b0:00900/108, the firewall rule is still
  created successfully. The main reason is that netaddr silently
  normalizes this address.

  The command is like:

  openstack  firewall group rule create --ip-version 6 --source-ip-
  address 2:3dc2:c893:514a:966b:7969:42b0:00900/108

  netaddr would format the CIDR address, and debugging shows:

  >>> import netaddr
  >>> ii=netaddr.IPNetwork('2:3dc2:c893:514a:966b:7969:42b0:00900/108')
  >>> ii
  IPNetwork('2:3dc2:c893:514a:966b:7969:42b0:900/108')
  >>> ii.version
  6

  I found a similar issue for security groups, which has a good solution
  to fix it[1]. Therefore, I think a fix is also needed for firewall
  group rules.

  [1]https://bugs.launchpad.net/neutron/+bug/1869129

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2077596/+subscriptions




[Yahoo-eng-team] [Bug 1971958] [NEW] [RFE][fwaas][OVN]support l3 firewall for ovn driver

2022-05-06 Thread Liu Xie
Public bug reported:

As the neutron-fwaas project is under maintenance again, and OVN has become
one of the main drivers for Neutron, maybe we could implement the L3
firewall for the OVN driver.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

** Tags added: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1971958

Title:
  [RFE][fwaas][OVN]support l3 firewall for ovn driver

Status in neutron:
  New

Bug description:
  As the neutron-fwaas project is under maintenance again, and OVN has
  become one of the main drivers for Neutron, maybe we could implement the
  L3 firewall for the OVN driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1971958/+subscriptions




[Yahoo-eng-team] [Bug 1975658] [NEW] [ovn]list_availability_zones is occasionally empty at large scale

2022-05-24 Thread Liu Xie
Public bug reported:

Sometimes we find that listing availability zones returns an empty list in a
large-scale environment.
It may be an ovsdbapp issue, but an empty AZ list causes L3 gateways to be
re-scheduled to wrong candidates when ovsdb_monitor processes a ChassisEvent.

I have a workaround[1] for the L3 re-scheduling issue.

[1] At _get_availability_zones_from_router_port:
there is no need to fetch the AZ list via the ml2/ovn mech_driver when
scheduling the gateway; it can be read from the router's external_ids instead:

def _get_availability_zones_from_router_port(self, lrp_name):
    """Return the availability zones hints for the router port.

    Return a list of availability zones hints associated with the
    router that the router port belongs to.
    """
    context = n_context.get_admin_context()
    lrp = self._ovn.get_lrouter_port(lrp_name)
    router = self.get_router(
        context, lrp.external_ids[ovn_const.OVN_ROUTER_NAME_EXT_ID_KEY])
    az_hints = utils.get_az_hints(router)
    return az_hints

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

** Tags added: ovn

** Summary changed:

- [ovn]ist_availability_zones is occasionally empty at large scale 
+ [ovn]list_availability_zones is occasionally empty at large scale

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1975658

Title:
  [ovn]list_availability_zones is occasionally empty at large scale

Status in neutron:
  New

Bug description:
  Sometimes, we found that list az is empty at large scale environment.
  Maybe it is a matter about ovsdbapp. But it would cause l3 re-schedule to 
wrong candidates if az list is empty when ovsdb_monitor process ChassisEvent. 
   
  I have a workaround[1] to fix the matter that l3 re-schedule.

  [1]At func _get_availability_zones_from_router_port:
  no need to get az list use ml2/ovn mech_driver when schedule gw, only get 
from external_ids of router: 

  def _get_availability_zones_from_router_port(self, lrp_name):
  """Return the availability zones hints for the router port.

  Return a list of availability zones hints associated with the
  router that the router port belongs to.
  """
  context = n_context.get_admin_context()
  lrp = self._ovn.get_lrouter_port(lrp_name)
  router = self.get_router(
  context, lrp.external_ids[ovn_const.OVN_ROUTER_NAME_EXT_ID_KEY])
  az_hints = utils.get_az_hints(router)
  return az_hints

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1975658/+subscriptions




[Yahoo-eng-team] [Bug 1975658] Re: [ovn]list_availability_zones is occasionally empty at large scale

2022-05-24 Thread Liu Xie
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1975658

Title:
  [ovn]list_availability_zones is occasionally empty at large scale

Status in neutron:
  Invalid

Bug description:
  Sometimes we found that listing availability zones returned an empty list
  in a large-scale environment.
  It may be an ovsdbapp issue, but an empty AZ list causes L3 gateways to be
  re-scheduled to wrong candidates when ovsdb_monitor processes a
  ChassisEvent.

  I have a workaround[1] for the L3 re-scheduling issue.

  [1] At _get_availability_zones_from_router_port:
  there is no need to fetch the AZ list via the ml2/ovn mech_driver when
  scheduling the gateway; it can be read from the router's external_ids
  instead:

  def _get_availability_zones_from_router_port(self, lrp_name):
      """Return the availability zones hints for the router port.

      Return a list of availability zones hints associated with the
      router that the router port belongs to.
      """
      context = n_context.get_admin_context()
      lrp = self._ovn.get_lrouter_port(lrp_name)
      router = self.get_router(
          context, lrp.external_ids[ovn_const.OVN_ROUTER_NAME_EXT_ID_KEY])
      az_hints = utils.get_az_hints(router)
      return az_hints

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1975658/+subscriptions




[Yahoo-eng-team] [Bug 1982287] [NEW] [ovn] Support address group for ovn driver

2022-07-19 Thread Liu Xie
Public bug reported:

As the title describes, we can use an OVN 'address set' to support the
address group feature.

** Affects: neutron
 Importance: Undecided
 Assignee: Liu Xie (liushy)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Liu Xie (liushy)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1982287

Title:
  [ovn] Support address group for ovn driver

Status in neutron:
  New

Bug description:
  As the title describes, we can use an OVN 'address set' to support the
  address group feature.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1982287/+subscriptions




[Yahoo-eng-team] [Bug 1986906] [NEW] [rfe][fwaas]support standard_attrs for firewall_group

2022-08-17 Thread Liu Xie
Public bug reported:

Currently, some users want to know when a specific firewall group was
created (or updated), but showing the 'created_at' and 'updated_at' fields
is not supported.

So we want to support standard_attrs for firewall_group.
Anyone has other ideas?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1986906

Title:
  [rfe][fwaas]support standard_attrs for firewall_group

Status in neutron:
  New

Bug description:
  Currently, some users want to know when a specific firewall group was
  created (or updated), but showing the 'created_at' and 'updated_at'
  fields is not supported.

  So we want to support standard_attrs for firewall_group.
  Anyone has other ideas?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1986906/+subscriptions




[Yahoo-eng-team] [Bug 1996677] [NEW] [OVN] support update fixed_ips of metadata port

2022-11-15 Thread Liu Xie
Public bug reported:

In some scenarios, customers want to modify the fixed_ips of the metadata
port. We can work around it with the following steps:

1. First, update the fixed_ips of the metadata port:
neutron port-update --fixed-ip 
subnet_id=e130a5c7-6f47-4c76-b245-cf05369f2161,ip_address=192.168.111.16 
460dffa9-e25a-437d-8252-ae9c5185aaab

2. Then, trigger a subnet update:
neutron subnet-update --enable-dhcp e130a5c7-6f47-4c76-b245-cf05369f2161

3. Finally, restart neutron-ovn-metadata-agent.


I think supporting updates to the fixed_ips of the metadata port is a good
requirement. Maybe we can implement it in neutron-ovn-metadata-agent by
watching the UPDATE event of port_bindings and then calling update_datapath
when the row is related to the metadata port.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1996677

Title:
  [OVN] support update fixed_ips of metadata port

Status in neutron:
  New

Bug description:
  In some scenarios, customers want to modify the fixed_ips of the metadata
  port. We can work around it with the following steps:

  1. First, update the fixed_ips of the metadata port:
  neutron port-update --fixed-ip 
subnet_id=e130a5c7-6f47-4c76-b245-cf05369f2161,ip_address=192.168.111.16 
460dffa9-e25a-437d-8252-ae9c5185aaab

  2. Then, trigger a subnet update:
  neutron subnet-update --enable-dhcp e130a5c7-6f47-4c76-b245-cf05369f2161

  3. Finally, restart neutron-ovn-metadata-agent.

  
  I think supporting updates to the fixed_ips of the metadata port is a good
  requirement. Maybe we can implement it in neutron-ovn-metadata-agent by
  watching the UPDATE event of port_bindings and then calling
  update_datapath when the row is related to the metadata port.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1996677/+subscriptions




[Yahoo-eng-team] [Bug 1999209] [NEW] [ovn] dnat_and_snat is not updated if fixed_ips of an internal port are updated

2022-12-08 Thread Liu Xie
Public bug reported:

As the title describes, if we update the fixed_ips of an internal port that
has an associated floating IP, the dnat_and_snat entry in OVN is not
updated.

** Affects: neutron
 Importance: Undecided
 Assignee: Liu Xie (liushy)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Liu Xie (liushy)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999209

Title:
  [ovn] dnat_and_snat is not updated if fixed_ips of an internal port are
  updated

Status in neutron:
  New

Bug description:
  As the title describes, if we update the fixed_ips of an internal port
  that has an associated floating IP, the dnat_and_snat entry in OVN is not
  updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1999209/+subscriptions




[Yahoo-eng-team] [Bug 2012332] [NEW] [rfe] Add one api support CRUD allowed_address_pairs

2023-03-20 Thread Liu Xie
Public bug reported:

Currently, many customers run Kubernetes on their VMs, and there is a need
to add or remove allowed_address_pairs. However, frequently updating the
whole Neutron port is time-consuming. Can we provide an API that supports
CRUD operations on allowed_address_pairs alone?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2012332

Title:
  [rfe] Add one api support CRUD allowed_address_pairs

Status in neutron:
  New

Bug description:
  Currently, many customers run Kubernetes on their VMs, and there is a
  need to add or remove allowed_address_pairs. However, frequently updating
  the whole Neutron port is time-consuming. Can we provide an API that
  supports CRUD operations on allowed_address_pairs alone?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2012332/+subscriptions




[Yahoo-eng-team] [Bug 2016504] [NEW] [rfe] Support specifying fixed_ip_address for the DHCP or metadata port

2023-04-17 Thread Liu Xie
Public bug reported:

Currently, the IP address of the DHCP port is automatically assigned the
first available IP and cannot be specified when creating a subnet.
Therefore, can we provide an API that supports specifying the DHCP
address when creating a subnet?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2016504

Title:
  [rfe] Support specifying fixed_ip_address for the DHCP or metadata port

Status in neutron:
  New

Bug description:
  Currently, the IP address of the DHCP port is automatically assigned
  the first available IP and cannot be specified when creating a subnet.
  Therefore, can we provide an API that supports specifying the DHCP
  address when creating a subnet?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2016504/+subscriptions




[Yahoo-eng-team] [Bug 2090913] [NEW] [ovn]Allow multiple IPv6 ports on router from same network on ml2/ovn

2024-12-03 Thread Liu Xie
Public bug reported:

As we know, ovn_l3 implements the DVR mode router.
For the DVR router interface, there is a patch[1] that allows multiple IPv6 
ports on a router from the same network (ml2/ovs+dvr).
I believe this is also needed for the OVN plugin.

Does anyone have any opinions on this?

[1]https://review.opendev.org/c/openstack/neutron/+/870079

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2090913

Title:
  [ovn]Allow multiple IPv6 ports on router from same network on ml2/ovn

Status in neutron:
  New

Bug description:
  As we know, ovn_l3 implements the DVR mode router.
  For the DVR router interface, there is a patch[1] that allows multiple IPv6 
ports on a router from the same network (ml2/ovs+dvr).
  I believe this is also needed for the OVN plugin.

  Does anyone have any opinions on this?

  [1]https://review.opendev.org/c/openstack/neutron/+/870079

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2090913/+subscriptions

