[Yahoo-eng-team] [Bug 2078425] [NEW] ovn/ml2 selfservice (isolated network) dns-nameserver set in subnet, can't resolve domain

2024-08-30 Thread Evt.Li
Public bug reported:

Hi,

I ran into a somewhat strange problem. I currently use OVN/ML2 and have
configured geneve + flat to organize my network.

Following the guidelines, I created two networks, internal and external: the
internal one for communication between the VMs, and the external one for
creating floating IPs to assign to the VMs so they can reach the public
network.

Here are the openstack commands I used (DHCP is enabled by default):
### internal network & subnet
openstack network create internal --provider-network-type geneve
openstack subnet create internal-subnet --network internal \
--subnet-range 192.168.200.0/24 --gateway 192.168.200.1 \
--dns-nameserver 8.8.8.8

### external network & subnet, flat, physnet1 is mapping
openstack network create \
--provider-physical-network physnet1 \
--provider-network-type flat --external external
openstack subnet create external-subnet \
--network external --subnet-range 192.168.5.0/24 \
--allocation-pool start=192.168.5.200,end=192.168.5.220 \
--gateway 192.168.5.1 --dns-nameserver 9.9.9.9 --no-dhcp

### of course, a router is a must
openstack router create router-001
openstack router add subnet router-001 internal-subnet
openstack router add gateway router-001 external

### bridges & ports; the compute node looks the same
root@master-01 ~(keystone)# ovs-vsctl list-br
br-external
br-int
root@master-01 ~(keystone)# ovs-vsctl list-ports br-external
eno3
patch-provnet-76e95ef2-261e-4d51-b89e-48ddb26d0bcf-to-br-int
root@master-01 ~(keystone)#

### the bridge mapping is here; different physical NICs are used to separate 
management traffic from VM data traffic
{hostname=master-01, ovn-bridge-mappings="physnet1:br-external", 
ovn-cms-options=enable-chassis-as-gw, ovn-encap-ip="10.20.0.10", 
ovn-encap-type=geneve, ovn-remote="tcp:10.10.0.10:6642", 
rundir="/var/run/openvswitch", system-id="121c0c62-b0fb-4441-bf10-14d429b4bcd8"}
...

### sb information
ovn-sbctl show
Chassis "121c0c62-b0fb-4441-bf10-14d429b4bcd8"
hostname: master-01
Encap geneve
ip: "10.20.0.10"
options: {csum="true"}
Port_Binding cr-lrp-1641f948-ac28-422a-99a4-e056372812cd
Chassis "c8bd82a0-a930-48ad-a3f0-48ce81713dfc"
hostname: compute-01
Encap geneve
ip: "10.20.0.20"
options: {csum="true"}
Port_Binding "fced94e8-c0f7-43ba-be90-3d93983c4b4a"
Port_Binding "6102cb9b-e42f-4f37-abe1-d0765f4b6c4e"

### nb information
ovn-nbctl show
switch 5751184d-39ef-4a10-bb72-9fc5285caa7e 
(neutron-7c28892d-5864-403a-8a8d-03180770db60) (aka internal)
port 6102cb9b-e42f-4f37-abe1-d0765f4b6c4e
addresses: ["fa:16:3e:02:8e:f4 192.168.200.153"]
port 6cec6fa7-1a80-4b20-b96d-bd79d528ad64
type: localport
addresses: ["fa:16:3e:94:ff:e1 192.168.200.2"]
port beb8f8d3-7d18-4d02-b786-9927b7ad2d17
addresses: ["fa:16:3e:85:52:47 192.168.200.214"]
port 2e495938-68da-44e4-b2c0-20031f8faa59
type: router
router-port: lrp-2e495938-68da-44e4-b2c0-20031f8faa59
port fced94e8-c0f7-43ba-be90-3d93983c4b4a
addresses: ["fa:16:3e:df:1e:63 192.168.200.44"]
port 9ba1c2e0-8035-422b-a632-8f8fb7f217a3
addresses: ["fa:16:3e:b4:7d:14 192.168.200.179"]
switch c91f2dd7-9fbb-4f75-9b05-20db0d5173c7 
(neutron-8a7b3145-393a-46fc-a019-c0676671db40) (aka external)
port 1641f948-ac28-422a-99a4-e056372812cd
type: router
router-port: lrp-1641f948-ac28-422a-99a4-e056372812cd
port provnet-76e95ef2-261e-4d51-b89e-48ddb26d0bcf
type: localnet
addresses: ["unknown"]
port 1163cb86-7241-492a-a8a6-f6cd51f661f0
type: localport
addresses: ["fa:16:3e:15:06:23"]
router 63145aba-6fac-4cbd-bfb8-a7adb9daa361 
(neutron-7fa5c121-0385-4fa8-ba24-cb6988c87aaa) (aka router-001)
port lrp-1641f948-ac28-422a-99a4-e056372812cd
mac: "fa:16:3e:11:2c:9a"
networks: ["192.168.5.209/24"]
gateway chassis: [121c0c62-b0fb-4441-bf10-14d429b4bcd8 
c8bd82a0-a930-48ad-a3f0-48ce81713dfc]
port lrp-2e495938-68da-44e4-b2c0-20031f8faa59
mac: "fa:16:3e:fd:f8:67"
networks: ["192.168.200.1/24"]
nat 00f4a3b9-35cc-41b0-b2dc-dc6fb174c313
external ip: "192.168.5.216"
logical ip: "192.168.200.44"
type: "dnat_and_snat"
nat 16a4b585-9e25-41f6-9e9c-9923f2ffc236
external ip: "192.168.5.201"
logical ip: "192.168.200.153"
type: "dnat_and_snat"

### dhcp information
ovn-nbctl dhcp-options-get-options 88e0b506-23b9-4925-9f82-aa329370a7bb
server_mac=fa:16:3e:44:c0:57
router=192.168.200.1
server_id=192.168.200.1
mtu=1442
classless_static_route={169.254.169.254/32,192.168.200.2, 
0.0.0.0/0,192.168.200.1}
lease_time=43200
dns_server={8.8.8.8}
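
As a quick way to narrow this down from inside the guest, here is a rough
diagnostic sketch (not part of the original report; it assumes the third-party
dnspython package is installed in the VM). It separates "the resolver was never
configured in the guest" from "8.8.8.8 is simply unreachable from the isolated
network":

import socket

import dns.resolver  # third-party: pip install dnspython

# 1) Does the resolver that DHCP/cloud-init configured in the guest work?
try:
    addr = socket.getaddrinfo("www.openstack.org", 443)[0][4][0]
    print("system resolver OK:", addr)
except socket.gaierror as exc:
    print("system resolver failed:", exc)

# 2) Force the same lookup through 8.8.8.8, bypassing local resolver config.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["8.8.8.8"]
resolver.lifetime = 3
try:
    answers = resolver.resolve("www.openstack.org", "A")
    print("direct query via 8.8.8.8 OK:", [a.to_text() for a in answers])
except Exception as exc:
    # A timeout here points at routing/SNAT from the geneve network to the
    # outside world, not at the subnet's dns-nameserver / DHCP option.
    print("direct query via 8.8.8.8 failed:", exc)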

### boot a VM with a NIC attached to the internal network
openstack server create --flavor m1.small --image Ubuntu-24-04 --security-group 
secgroup-001 --nic net-id=internal --key-name mykey ubuntu2404-001

### only icmp and ssh ports are open in the security group, 5201 is for iperf3, 
it'

[Yahoo-eng-team] [Bug 2078432] [NEW] Port_hardware_offload_type API extension is reported as available but attribute is not set for ports

2024-08-30 Thread Slawek Kaplonski
Public bug reported:

This API extension is implemented as an ML2 plugin extension, but the API
extension alias is also added directly to the _supported_extension_aliases
list in the ML2 plugin. Because of that, even if the ML2 extension is not
actually loaded, the API extension is reported as available. As a result, the
'hardware_offload_type' attribute sent from the client is accepted by neutron
but is not saved in the DB at all (see the illustrative sketch after the
output below):


$ openstack port create --network private --extra-property 
type=str,name=hardware_offload_type,value=switchdev test-port-hw-offload 


+-----------------------+----------------------+
| Field                 | Value                |
+-----------------------+----------------------+
| admin_state_up        | UP                   |
| allowed_address_pairs |                      |
| binding_host_id       |                      |
| binding_profile       |                      |
| binding_vif_details   |                      |
| binding_vif_type      | unbound              |
| binding_vnic_type     | normal               |
| created_at            | 2024-08-30T09:15:55Z |
| data_plane_status     | None                 |
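
For illustration only, a rough sketch of the reporting problem described above
(simplified stand-in names, not the actual neutron ML2 plugin code): when the
alias is listed unconditionally, the API advertises it even though no loaded
extension driver will ever persist the attribute; reporting it only when the
matching driver is loaded keeps the API and the DB in sync.

# Illustrative sketch only; class and attribute names are simplified stand-ins.
HW_OFFLOAD_ALIAS = 'port-hardware-offload-type'

class Ml2PluginSketch:
    # Problem pattern: the alias is hard-coded here, so it is advertised even
    # when the extension driver that would handle it is not configured.
    _supported_extension_aliases = ['port-security', HW_OFFLOAD_ALIAS]

    def __init__(self, loaded_extension_drivers):
        self._loaded = set(loaded_extension_drivers)

    @property
    def supported_extension_aliases(self):
        # Fix pattern: report the alias only when the matching ML2 extension
        # driver is actually loaded.
        aliases = [a for a in self._supported_extension_aliases
                   if a != HW_OFFLOAD_ALIAS]
        if 'port_hardware_offload_type' in self._loaded:
            aliases.append(HW_OFFLOAD_ALIAS)
        return aliases


print(Ml2PluginSketch([]).supported_extension_aliases)
# ['port-security']  -- driver not loaded, alias no longer advertised
print(Ml2PluginSketch(['port_hardware_offload_type']).supported_extension_aliases)
# ['port-security', 'port-hardware-offload-type']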
  

[Yahoo-eng-team] [Bug 2078434] [NEW] Creating port with hardware_offload_type attribute set fails with error 500

2024-08-30 Thread Slawek Kaplonski
Public bug reported:

When the port_hardware_offload_type ML2 extension is enabled and a port with
the hardware_offload_type attribute is created, it may fail with error 500 if
no binding:profile field is provided (it is then of type 'Sentinel'). The
error is:

...
ERROR neutron.pecan_wsgi.hooks.translation     with excutils.save_and_reraise_exception():
ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/data/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 227, in __exit__
ERROR neutron.pecan_wsgi.hooks.translation     self.force_reraise()
ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/data/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
ERROR neutron.pecan_wsgi.hooks.translation     raise self.value
ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 1132, in _call_on_ext_drivers
ERROR neutron.pecan_wsgi.hooks.translation     getattr(driver.obj, method_name)(plugin_context, data, result)
ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/neutron/neutron/plugins/ml2/extensions/port_hardware_offload_type.py", line 44, in process_create_port
ERROR neutron.pecan_wsgi.hooks.translation     self._process_create_port(context, data, result)
ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/neutron/neutron/db/port_hardware_offload_type_db.py", line 41, in _process_create_port
ERROR neutron.pecan_wsgi.hooks.translation     capabilities = pb_profile.get('capabilities', [])
ERROR neutron.pecan_wsgi.hooks.translation AttributeError: 'Sentinel' object has no attribute 'get'

We catch the TypeError exception there, but we should also catch
AttributeError in the same way.
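
A minimal sketch of the suggested handling, based on the _process_create_port
frame in the traceback above (a simplified helper, not the exact neutron code):

# Sketch only: binding:profile may be a dict, an unexpected non-dict value, or
# the Sentinel marker neutron uses for "attribute not supplied".
def get_profile_capabilities(port_data):
    pb_profile = port_data.get('binding:profile')
    try:
        return pb_profile.get('capabilities', [])
    except (TypeError, AttributeError):
        # TypeError was already caught for non-dict values; AttributeError
        # additionally covers the Sentinel object, which has no .get() method.
        return []


print(get_profile_capabilities({'binding:profile': {'capabilities': ['switchdev']}}))
print(get_profile_capabilities({}))  # no profile supplied -> []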

** Affects: neutron
 Importance: High
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2078434

Title:
  Creating port with hardware_offload_type attribute set fails with
  error 500

Status in neutron:
  Confirmed

Bug description:
  When the port_hardware_offload_type ml2 extension is enabled and po

[Yahoo-eng-team] [Bug 2078476] [NEW] rbd_store_chunk_size defaults to 8M not 4M

2024-08-30 Thread Piotr Parczewski
Public bug reported:

Versions affected: from current master to at least Yoga.

The documentation
(https://docs.openstack.org/glance/2024.1/configuration/configuring.html#configuring-the-rbd-storage-backend)
states that rbd_store_chunk_size defaults to 4M, while in reality it is 8M.
This could have been 'only' a documentation bug, but there are two concerns
here:

1) Was it the original intention to have an 8M chunk size (which differs from
Ceph's default of 4M), or was it an inadvertent effect of other changes?

2) Cinder defaults to rbd_store_chunk_size=4M. Volumes created from Glance
images inherit a chunk size of 8M (due to snapshotting), which could have
unpredictable performance consequences. It feels like this scenario should at
least be documented, if not avoided.

Steps to reproduce:
- deploy Glance with the RBD backend enabled and the default config;
- query the stores information for the configured chunk size
(/v2/info/stores/detail); see the sketch after these steps.
Optional:
- have an image created in the Ceph pool and validate its chunk size with the
rbd info command.
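
A quick way to script the check above (a sketch, not from the original report;
the endpoint and token are placeholders, and the exact response layout is not
assumed, so the rbd store entry is just dumped):

# Sketch: query Glance's store detail API and print what it reports for the
# RBD store. Endpoint and token are placeholders.
import json
import requests

GLANCE = "http://controller:9292"   # placeholder Glance endpoint
TOKEN = "gAAAA..."                  # placeholder keystone token

resp = requests.get(
    f"{GLANCE}/v2/info/stores/detail",
    headers={"X-Auth-Token": TOKEN},
    timeout=10,
)
resp.raise_for_status()

for store in resp.json().get("stores", []):
    if store.get("type") == "rbd":
        # Compare the reported chunk size against the documented 4M default
        # (4M == 4194304 bytes, 8M == 8388608 bytes).
        print(json.dumps(store, indent=2))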

** Affects: glance
 Importance: Undecided
 Status: New

** Summary changed:

- rbd_store_chunk_size defaults to 8192M not 4096M
+ rbd_store_chunk_size defaults to M not 4M

** Summary changed:

- rbd_store_chunk_size defaults to M not 4M
+ rbd_store_chunk_size defaults to 8M not 4M

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/2078476

Title:
  rbd_store_chunk_size defaults to 8M not 4M

Status in Glance:
  New

Bug description:
  Versions affected: from current master to at least Yoga.

  The documentation
  
(https://docs.openstack.org/glance/2024.1/configuration/configuring.html#configuring-
  the-rbd-storage-backend) states that the default rbd_store_chunk_size
  defaults to 4M while in reality it's 8M. This could have been 'only' a
  documentation bug, but there are two concerns here:

  1) Was it the original intention to have 8M chunk size (which is
  different from Ceph's defaults = 4M) or was it an inadvertent effect
  of other changes?

  2) Cinder defaults to rbd_store_chunk_size=4M. Having volumes created
  from Glance images results in an inherited chunk size of 8M (due to
  snapshotting) and could have unpredicted performance consequences. It
  feels like this scenario should at least be documented, if not
  avoided.

  Steps to reproduce:
  - deploy Glance with RBD backend enabled and default config;
  - query stores information for the configured chunk size 
(/v2/info/stores/detail)
  Optional:
  - have an image created in Ceph pool and validate its chunk size with rbd 
info command.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/2078476/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2075147] Re: "neutron-tempest-plugin-api-ovn-wsgi" not working with TLS

2024-08-30 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/925376
Committed: 
https://opendev.org/openstack/neutron/commit/76f343c5868556f12f9ee74b7ef2291cf5e2ff85
Submitter: "Zuul (22348)"
Branch: master

commit 76f343c5868556f12f9ee74b7ef2291cf5e2ff85
Author: Rodolfo Alonso Hernandez 
Date:   Wed Jul 31 10:53:14 2024 +

Monkey patch the system libraries before calling them

The Neutron API with WSGI module, and specifically when using ML2/OVN,
was importing some system libraries before patching them. That was
leading to a recursion error, as reported in the related LP bug.
By calling ``eventlet_utils.monkey_patch()`` at the very beginning
of the WSGI entry point [1], this issue is fixed.

[1] WSGI entry point:
  $ cat /etc/neutron/neutron-api-uwsgi.ini
  ...
  module = neutron.wsgi.api:application

Closes-Bug: #2075147
Change-Id: If2aa37b2a510a85172da833ca20564810817d246
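
For illustration, the general pattern the fix relies on, patching before
anything else is imported (a generic sketch using eventlet directly, not the
actual neutron module; neutron wraps the call in eventlet_utils.monkey_patch()):

# Sketch of a WSGI entry module that monkey patches first. The real entry
# point referenced by the uwsgi config above is neutron.wsgi.api:application;
# the trivial application below is only a placeholder to keep this runnable.
import eventlet

# Must run before ssl/socket/threading are imported anywhere else in the
# process; modules imported earlier keep references to the unpatched stdlib,
# which is what led to the recursion error described above.
eventlet.monkey_patch()


def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'patched before any other import\n']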


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2075147

Title:
  "neutron-tempest-plugin-api-ovn-wsgi" not working with TLS

Status in neutron:
  Fix Released

Bug description:
  The Neutron CI job "neutron-tempest-plugin-api-ovn-wsgi" is not
  working because TLS is enabled. There is an issue in the SSL library
  that throws a recursive exception.

  Snippet https://paste.opendev.org/show/briEIdk5z5SwYg25axnf/

  Log:
  
https://987c691fdc28f24679c7-001d480fc44810e6cf7b18a72293f87e.ssl.cf1.rackcdn.com/periodic/opendev.org/openstack/neutron/master/neutron-
  tempest-plugin-api-ovn-wsgi/8e01634/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2075147/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1929805] Re: Can't remove records in 'Create Record Set' form in DNS dashboard

2024-08-30 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/793420
Committed: 
https://opendev.org/openstack/horizon/commit/3b222c85c1e07ad0f55da93460520e1a07713a54
Submitter: "Zuul (22348)"
Branch: master

commit 3b222c85c1e07ad0f55da93460520e1a07713a54
Author: Vadym Markov 
Date:   Wed May 26 16:01:49 2021 +0300

CSS fix makes "Delete item" button active

Currently, used in designate-dashboard at DNS Zones - Create Record Set
modal window

Closes-Bug: #1929805
Change-Id: Ibcc97927df4256298a5c8d5e9834efa9ee498291


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1929805

Title:
  Can't remove records in 'Create Record Set' form in DNS dashboard

Status in Designate Dashboard:
  New
Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Reproduced on devstack with master, but seems that any setup with
  Designate since Mitaka is affected.

  Steps to reproduce:

  1. Go to Project/DNS/Zones page 
  2. Create a Zone
  3. Click on ‘Create Record Set’ button at the right of the Zone record
  4. Try to fill in several 'Record' fields in the 'Records' section of the
form, then try to delete the data in a field with the 'x' button

  Expected behavior:
  Record deleted

  Actual behavior:
  'x' button is inactive

  It is a bug in the CSS used by the array widget in Horizon, but currently
  this array widget is used only in designate-dashboard

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate-dashboard/+bug/1929805/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2078434] Re: Creating port with hardware_offload_type attribute set fails with error 500

2024-08-30 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/927577
Committed: 
https://opendev.org/openstack/neutron/commit/fbb7c9ae3d672796b72b796c53f89865ea6b3763
Submitter: "Zuul (22348)"
Branch: master

commit fbb7c9ae3d672796b72b796c53f89865ea6b3763
Author: Slawek Kaplonski 
Date:   Fri Aug 30 11:50:55 2024 +0200

Fix port_hardware_offload_type ML2 extension

This patch fixes 2 issues related to that port_hardware_offload_type
extension:

1. API extension is now not supported by the ML2 plugin directly so if
   ml2 extension is not loaded Neutron will not report that API
   extension is available,
2. Fix error 500 when creating port with hardware_offload_type
   attribute set but when binding:profile is not set (is of type
   Sentinel).

Closes-bug: #2078432
Closes-bug: #2078434
Change-Id: Ib0038dd39d8d210104ee8a70e4519124f09292da


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2078434

Title:
  Creating port with hardware_offload_type attribute set fails with
  error 500

Status in neutron:
  Fix Released

Bug description:
  When the port_hardware_offload_type ML2 extension is enabled and a port with
  the hardware_offload_type attribute is created, it may fail with error 500 if
  no binding:profile field is provided (it is then of type 'Sentinel'). The
  error is:

  ...
  ERROR neutron.pecan_wsgi.hooks.translation     with excutils.save_and_reraise_exception():
  ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/data/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 227, in __exit__
  ERROR neutron.pecan_wsgi.hooks.translation     self.force_reraise()
  ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/data/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
  ERROR neutron.pecan_wsgi.hooks.translation     raise self.value
  ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 1132, in _call_on_ext_drivers
  ERROR neutron.pecan_wsgi.hooks.translation     getattr(driver.obj, method_name)(plugin_context, data, result)
  ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/neutron/neutron/plugins/ml2/extensions/port_hardware_offload_type.py", line 44, in process_create_port
  ERROR neutron.pecan_wsgi.hooks.translation     self._process_create_port(context, data, result)

[Yahoo-eng-team] [Bug 2078432] Re: Port_hardware_offload_type API extension is reported as available but attribute is not set for ports

2024-08-30 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/927577
Committed: 
https://opendev.org/openstack/neutron/commit/fbb7c9ae3d672796b72b796c53f89865ea6b3763
Submitter: "Zuul (22348)"
Branch: master

commit fbb7c9ae3d672796b72b796c53f89865ea6b3763
Author: Slawek Kaplonski 
Date:   Fri Aug 30 11:50:55 2024 +0200

Fix port_hardware_offload_type ML2 extension

This patch fixes 2 issues related to that port_hardware_offload_type
extension:

1. API extension is now not supported by the ML2 plugin directly so if
   ml2 extension is not loaded Neutron will not report that API
   extension is available,
2. Fix error 500 when creating port with hardware_offload_type
   attribute set but when binding:profile is not set (is of type
   Sentinel).

Closes-bug: #2078432
Closes-bug: #2078434
Change-Id: Ib0038dd39d8d210104ee8a70e4519124f09292da


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2078432

Title:
  Port_hardware_offload_type API extension is reported as available but
  attribute is not set for ports

Status in neutron:
  Fix Released

Bug description:
  This API extension is implemented as an ML2 plugin extension, but the API
  extension alias is also added directly to the _supported_extension_aliases
  list in the ML2 plugin. Because of that, even if the ML2 extension is not
  actually loaded, the API extension is reported as available. As a result,
  the 'hardware_offload_type' attribute sent from the client is accepted by
  neutron but is not saved in the DB at all:

  
  $ openstack port create --network private --extra-property 
type=str,name=hardware_offload_type,value=switchdev test-port-hw-offload 

  
  
  +-----------------------+----------------------+
  | Field                 | Value                |
  +-----------------------+----------------------+
  | admin_state_up        | UP                   |
  | allowed_address_pairs |                      |
  | binding_host_id       |                      |
  | binding_profile       |                      |
  | binding_vif_details   |                      |
  | binding_vif_type

[Yahoo-eng-team] [Bug 2078518] [NEW] neutron designate scenario job failing with new RBAC

2024-08-30 Thread Ghanshyam Mann
Public bug reported:

Oslo.policy 4.4.0 enabled the new RBAC defaults by default. This does not
change any config on the neutron side, because neutron had already enabled the
new defaults, but it did enable the new Designate RBAC defaults. That is
causing the neutron-tempest-plugin-designate-scenario job to fail.

It is failing here
- https://review.opendev.org/c/openstack/neutron/+/926085

And this is a debugging change
- https://review.opendev.org/c/openstack/neutron/+/926945/7

I can see from the log that the admin designate client is getting the error.
As the log below shows, it is designate_admin that gets an error while
creating the recordset in Designate:

Aug 09 19:08:30.539307 np0038166723 neutron-server[86674]: ERROR
neutron_lib.callbacks.manager
designate_admin.recordsets.create(in_addr_zone_name,

https://zuul.opendev.org/t/openstack/build/7a18c093d50242ebbea666d92c671945/log/controller/logs/screen-
q-svc.txt#7665

https://github.com/openstack/neutron/blob/b847d89ac1f922362945ad610c9787bc28f37457/neutron/services/externaldns/drivers/designate/driver.py#L92

which is caused by the GET Zone returning 403 in designateclient

https://zuul.opendev.org/t/openstack/build/7a18c093d50242ebbea666d92c671945/log/controller/logs/screen-q-svc.txt#7674
I compared the Designate zone RBAC defaults to see whether a change there is causing it:

Old policy: admin or owner
New policy: admin or project reader

https://github.com/openstack/designate/blob/50f686fcffd007506e0cd88788a668d4f57febc3/designate/common/policies/zone.py
The only difference in the policy is that, when the caller is not admin, the
role is also checked: member and reader only need access within the project.
But here neutron tries to access Designate with the admin role only.

I tried to query Designate with "'all_projects': True" in the admin
designate client request, but it still fails

https://zuul.opendev.org/t/openstack/build/25be97774e3a4d72a39eb6b2d2bed4a0/log/controller/logs/screen-
q-svc.txt#7716
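
For reference, the shape of the admin client call being debugged (a sketch
with placeholder credentials and zone names; keyword arguments such as
all_projects follow python-designateclient's v2 client and should be
double-checked against the installed version):

# Sketch only: build an admin Designate client with all_projects enabled and
# reproduce the failing calls. Credentials, endpoints and zone names are
# placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session as ks_session
from designateclient.v2 import client as designate_client

auth = v3.Password(
    auth_url="http://controller:5000/v3",
    username="neutron", password="secret",
    project_name="service",
    user_domain_id="default", project_domain_id="default",
)
sess = ks_session.Session(auth=auth)

# all_projects is sent to Designate as the X-Auth-All-Projects header.
admin = designate_client.Client(session=sess, all_projects=True)

# The GET zone below is the call that returned 403 under the new defaults.
zone = admin.zones.get("200.168.192.in-addr.arpa.")
admin.recordsets.create(zone["id"],
                        "44.200.168.192.in-addr.arpa.", "PTR",
                        ["vm1.example.org."])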

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2078518

Title:
  neutron designate scenario job failing with new RBAC

Status in neutron:
  New

Bug description:
  Oslo.policy 4.4.0 enabled the new RBAC defaults by default, which does
  not change any config on the neutron side because neutron already
  enabled the new defaults, but it enabled the designated new RBAC. That
  is causing the neutron-tempest-plugin-designate-scenario job failing.

  It is failing here
  - https://review.opendev.org/c/openstack/neutron/+/926085

  And this is a debugging change
  - https://review.opendev.org/c/openstack/neutron/+/926945/7

  I see from the log that the admin designate client is getting the
  error. If you see the below log, its designate_admin is getting an
  error while creating the recordset in the designate

  Aug 09 19:08:30.539307 np0038166723 neutron-server[86674]: ERROR
  neutron_lib.callbacks.manager
  designate_admin.recordsets.create(in_addr_zone_name,

  
https://zuul.opendev.org/t/openstack/build/7a18c093d50242ebbea666d92c671945/log/controller/logs/screen-
  q-svc.txt#7665

  
https://github.com/openstack/neutron/blob/b847d89ac1f922362945ad610c9787bc28f37457/neutron/services/externaldns/drivers/designate/driver.py#L92

  which is caused by the GET Zone returning 403 in designateclient

  
https://zuul.opendev.org/t/openstack/build/7a18c093d50242ebbea666d92c671945/log/controller/logs/screen-q-svc.txt#7674
  I compared the designate Zone RBAC default if any change in that causing it:

  Old policy: admin or owner
  New policy: admin or project reader

  
https://github.com/openstack/designate/blob/50f686fcffd007506e0cd88788a668d4f57febc3/designate/common/policies/zone.py
  The only difference in the policy is that, when the caller is not admin, the
  role is also checked: member and reader only need access within the project.
  But here neutron tries to access Designate with the admin role only.

  I tried to query designate with "'all_projects': True" in admin
  designate client request but still it fail

  
https://zuul.opendev.org/t/openstack/build/25be97774e3a4d72a39eb6b2d2bed4a0/log/controller/logs/screen-
  q-svc.txt#7716

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2078518/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2078518] Re: neutron designate scenario job failing with new RBAC

2024-08-30 Thread Ghanshyam Mann
** Also affects: designate
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2078518

Title:
  neutron designate scenario job failing with new RBAC

Status in Designate:
  New
Status in neutron:
  New

Bug description:
  Oslo.policy 4.4.0 enabled the new RBAC defaults by default, which does
  not change any config on the neutron side because neutron already
  enabled the new defaults, but it enabled the designated new RBAC. That
  is causing the neutron-tempest-plugin-designate-scenario job failing.

  It is failing here
  - https://review.opendev.org/c/openstack/neutron/+/926085

  And this is a debugging change
  - https://review.opendev.org/c/openstack/neutron/+/926945/7

  I see from the log that the admin designate client is getting the
  error. If you see the below log, its designate_admin is getting an
  error while creating the recordset in the designate

  Aug 09 19:08:30.539307 np0038166723 neutron-server[86674]: ERROR
  neutron_lib.callbacks.manager
  designate_admin.recordsets.create(in_addr_zone_name,

  
https://zuul.opendev.org/t/openstack/build/7a18c093d50242ebbea666d92c671945/log/controller/logs/screen-
  q-svc.txt#7665

  
https://github.com/openstack/neutron/blob/b847d89ac1f922362945ad610c9787bc28f37457/neutron/services/externaldns/drivers/designate/driver.py#L92

  which is caused by the GET Zone returning 403 in designateclient

  
https://zuul.opendev.org/t/openstack/build/7a18c093d50242ebbea666d92c671945/log/controller/logs/screen-q-svc.txt#7674
  I compared the designate Zone RBAC default if any change in that causing it:

  Old policy: admin or owner
  New policy: admin or project reader

  
https://github.com/openstack/designate/blob/50f686fcffd007506e0cd88788a668d4f57febc3/designate/common/policies/zone.py
  The only difference in the policy is that, when the caller is not admin, the
  role is also checked: member and reader only need access within the project.
  But here neutron tries to access Designate with the admin role only.

  I tried to query designate with "'all_projects': True" in admin
  designate client request but still it fail

  
https://zuul.opendev.org/t/openstack/build/25be97774e3a4d72a39eb6b2d2bed4a0/log/controller/logs/screen-
  q-svc.txt#7716

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/2078518/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp