[Yahoo-eng-team] [Bug 2079996] [NEW] [OVN] OVN metadata agent check to restart the HAProxy container

2024-09-09 Thread Rodolfo Alonso
Public bug reported:

Since [1], we restart the HAProxy process of each network (datapath) in
order to "honor any potential changes in their configuration." [2].

This process could slow down the OVN Metadata agent restart and could
interfere with a VM boot-up if the HAProxy process is restarted in the
middle of it.

This bug proposes an optimization that checks the IPv6 support of the
running HAProxy process to decide whether it needs to be restarted.

[1]https://github.com/openstack/neutron/commit/d9c8731af36d4eb53d9266733fec24659f2dc5a8
[2]https://github.com/openstack/neutron/commit/d9c8731af36d4eb53d9266733fec24659f2dc5a8#diff-95903c989a1d043a90abe006cedd7ec20bd7a36855c3219cd74580cfa125c82fR349-R351
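
A minimal sketch of what such a check could look like, assuming the agent
can read the rendered HAProxy configuration of each network; the function
name, the config-path argument and the IPv6 bind pattern are illustrative
assumptions, not the actual Neutron implementation:

    import re


    def haproxy_restart_needed(cfg_path, ipv6_expected):
        """Return True only if the rendered HAProxy config must change.

        Idea of the optimization: inspect the configuration the running
        HAProxy process was started with and skip the restart when its
        IPv6 support already matches what the agent would render now.
        """
        try:
            with open(cfg_path) as cfg:
                rendered = cfg.read()
        except OSError:
            return True  # no config on disk, the proxy must be (re)spawned
        # A bind line containing an IPv6 address (e.g. "bind fe80::a9fe:a9fe...")
        # means IPv6 support was already rendered into the running config.
        has_ipv6_bind = bool(re.search(r'(?m)^\s*bind\s+\S*::', rendered))
        return has_ipv6_bind != ipv6_expected

With a check like this, the agent would only restart the HAProxy
instances whose rendered configuration no longer matches the expected
IPv6 support, instead of restarting every datapath's proxy.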

** Affects: neutron
 Importance: Low
 Status: New

** Changed in: neutron
   Importance: Undecided => Low

** Description changed:

  Since [1], we restart the HAProxy process of each network (datapath) in
  order to "honor any potential changes in their configuration." [2].
  
  This process could slow down the OVN Metadata agent restart and could
  potentially interfere with a VM boot-up if the HAProxy process is
  restarted in the middle.
  
  This bug proposes an optimization that checks the IPv6 support of the
- HAProxy to decide to restart it or not.
+ HAProxy running process to decide to restart it or not.
  
  
[1]https://github.com/openstack/neutron/commit/d9c8731af36d4eb53d9266733fec24659f2dc5a8
  
[2]https://github.com/openstack/neutron/commit/d9c8731af36d4eb53d9266733fec24659f2dc5a8#diff-95903c989a1d043a90abe006cedd7ec20bd7a36855c3219cd74580cfa125c82fR349-R351

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2079996

Title:
  [OVN] OVN metadata agent check to restart the HAProxy container

Status in neutron:
  New

Bug description:
  Since [1], we restart the HAProxy process of each network (datapath)
  in order to "honor any potential changes in their configuration." [2].

  This process could slow down the OVN Metadata agent restart and could
  interfere with a VM boot-up if the HAProxy process is restarted in the
  middle of it.

  This bug proposes an optimization that checks the IPv6 support of the
  running HAProxy process to decide whether it needs to be restarted.

  
[1]https://github.com/openstack/neutron/commit/d9c8731af36d4eb53d9266733fec24659f2dc5a8
  
[2]https://github.com/openstack/neutron/commit/d9c8731af36d4eb53d9266733fec24659f2dc5a8#diff-95903c989a1d043a90abe006cedd7ec20bd7a36855c3219cd74580cfa125c82fR349-R351

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2079996/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2078476] Re: rbd_store_chunk_size defaults to 8M not 4M

2024-09-09 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/glance/+/927844
Committed: https://opendev.org/openstack/glance/commit/39e407e9ffe956d40a261905ab98c13b5455e27d
Submitter: "Zuul (22348)"
Branch: master

commit 39e407e9ffe956d40a261905ab98c13b5455e27d
Author: Cyril Roelandt 
Date:   Tue Sep 3 17:25:54 2024 +0200

Documentation: fix default value for rbd_store_chunk_size

Closes-Bug: #2078476
Change-Id: I3b83e57eebf306c4de28fd58589522970e62cf42


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/2078476

Title:
  rbd_store_chunk_size defaults to 8M not 4M

Status in Glance:
  Fix Released

Bug description:
  Versions affected: from current master to at least Antelope.

  The documentation
  (https://docs.openstack.org/glance/2024.1/configuration/configuring.html#configuring-the-rbd-storage-backend)
  states that rbd_store_chunk_size defaults to 4M while in reality it is
  8M. This could have been 'only' a documentation bug, but there are two
  concerns here:

  1) Was it the original intention to have 8M chunk size (which is
  different from Ceph's defaults = 4M) or was it an inadvertent effect
  of other changes?

  2) Cinder defaults to rbd_store_chunk_size=4M. Having volumes created
  from Glance images results in an inherited chunk size of 8M (due to
  snapshotting) and could have unpredictable performance consequences. It
  feels like this scenario should at least be documented, if not
  avoided.

  Steps to reproduce:
  - deploy Glance with the RBD backend enabled and default config;
  - query the stores information for the configured chunk size
    (/v2/info/stores/detail), as sketched below.
  Optional:
  - have an image created in the Ceph pool and validate its chunk size
    with the rbd info command.
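
  A minimal sketch of the store query above, assuming a devstack-style
  Glance endpoint and a pre-fetched token exported as OS_TOKEN (both are
  placeholders for the deployment under test); the response field names
  ("stores", "properties", "chunk_size") are my reading of the detail
  endpoint and may differ by release:

    import os

    import requests

    # Placeholder endpoint and token; adjust for your deployment.
    GLANCE_URL = os.environ.get("GLANCE_URL", "http://controller:9292")

    resp = requests.get(
        GLANCE_URL + "/v2/info/stores/detail",
        headers={"X-Auth-Token": os.environ["OS_TOKEN"]},
    )
    resp.raise_for_status()
    for store in resp.json().get("stores", []):
        if store.get("type") == "rbd":
            # chunk_size is reported in MiB; with a default config this
            # prints 8, not the 4 stated in the documentation.
            print(store["id"], store.get("properties", {}).get("chunk_size"))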

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/2078476/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2080072] [NEW] Failed to delete vpnaas ipsec-site-connections with 502 error, ORM session: SQL execution without transaction in progress

2024-09-09 Thread Ihar Hrachyshka
Public bug reported:

This was triggered in gate here:
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_5e9/928461/4/check/neutron-tempest-plugin-vpnaas/5e965fe/testr_results.html

The test traceback:

ft1.1: tearDownClass (neutron_tempest_plugin.vpnaas.scenario.test_vpnaas.Vpnaas6in6)
testtools.testresult.real._StringException:
Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/test.py", line 246, in tearDownClass
raise value.with_traceback(trace)
  File "/opt/stack/tempest/tempest/test.py", line 210, in tearDownClass
teardown()
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/vpnaas/api/base_vpnaas.py",
 line 51, in resource_cleanup
cls._try_delete_resource(
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/api/base.py",
 line 332, in _try_delete_resource
delete_callable(*args, **kwargs)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/services/network/json/network_client.py",
 line 112, in _delete
resp, body = self.delete(uri)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 359, in 
delete
return self.request('DELETE', url, extra_headers, headers, body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 762, in 
request
self._error_checker(resp, resp_body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 856, in 
_error_checker
raise exceptions.UnexpectedContentType(str(resp.status),
tempest.lib.exceptions.UnexpectedContentType: Unexpected content type provided
Details: 502

The request is:

2024-09-09 16:55:44.459 89493 INFO tempest.lib.common.rest_client [-] Request 
(Vpnaas6in6:tearDownClass): 502 DELETE 
https://10.209.0.221/networking/v2.0/vpn/ipsec-site-connections/f5ce2f15-6b6d-4323-8c79-efeab2c06ad6
 300.098s
2024-09-09 16:55:44.460 89493 DEBUG tempest.lib.common.rest_client [-] Request 
- Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 
'X-Auth-Token': ''}
Body: None
Response - Headers: {'date': 'Mon, 09 Sep 2024 16:50:44 GMT', 'server': 
'Apache/2.4.52 (Ubuntu)', 'content-length': '420', 'connection': 'close', 
'content-type': 'text/html; charset=iso-8859-1', 'status': '502', 
'content-location': 
'https://10.209.0.221/networking/v2.0/vpn/ipsec-site-connections/f5ce2f15-6b6d-4323-8c79-efeab2c06ad6'}
Body: Apache "502 Proxy Error" page: "The proxy server received an invalid
response from an upstream server. The proxy server could not handle the
request. Reason: Error reading from remote server. Apache/2.4.52 (Ubuntu)
Server at 10.209.0.221 Port 443" _log_request_full
/opt/stack/tempest/tempest/lib/common/rest_client.py:484

The neutron log is filled with these warnings for the duration of the
(eventually failed) request - 5 minutes:

Sep 09 16:50:44.375166 np0038439257 devstack@neutron-api.service[60639]: 
WARNING neutron.objects.base [None req-a3a367f8-aeb8-4767-a96b-69b1c05a6a38 
tempest-Vpnaas6in6-389378637 tempest-Vpnaas6in6-389378637-project-member] ORM 
session: SQL execution without transaction in progress, traceback:
Sep 09 16:50:44.375166 np0038439257 devstack@neutron-api.service[60639]:   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/paste/urlmap.py", line 211, 
in __call__
Sep 09 16:50:44.375166 np0038439257 devstack@neutron-api.service[60639]: 
return app(environ, start_response)
Sep 09 16:50:44.375166 np0038439257 devstack@neutron-api.service[60639]:   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/webob/dec.py", line 129, in 
__call__
Sep 09 16:50:44.375166 np0038439257 devstack@neutron-api.service[60639]: 
resp = self.call_func(req, *args, **kw)
Sep 09 16:50:44.375166 np0038439257 devstack@neutron-api.service[60639]:   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/webob/dec.py", line 193, in 
call_func
Sep 09 16:50:44.375166 np0038439257 devstack@neutron-api.service[60639]: 
return self.func(req, *args, **kwargs)
Sep 09 16:50:44.375166 np0038439257 devstack@neutron-api.service[60639]:   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/oslo_middleware/base.py", 
line 124, in __call__
Sep 09 16:50:44.375166 np0038439257 devstack@neutron-api.service[60639]: 
response = req.get_response(self.application)
Sep 09 16:50:44.375166 np0038439257 devstack@neutron-api.service[60639]:   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/webob/request.py", line 
1313, in send
Sep 09 16:50:44.375166 np0038439257 devstack@neutron-api.service[60639]: 
status, headers, app_iter = self.call_application(
Sep 09 16:50:44.375166 np0038439257 devstack@neutron-api.service[60639]:   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/webob/request.py", line 
1278, in call_application
Sep 09 16:50:44.375166 np0038439257 devstack@neutron-api.service[60639]: 
app_iter = application(self.environ, start_r
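
The repeated warning above is emitted by Neutron's object layer whenever
SQL is executed on an ORM session without an explicit transaction in
progress. A minimal sketch of the enginefacade pattern that avoids the
warning, with a hypothetical model argument standing in for the vpnaas
connection model (not the actual vpnaas code):

    from neutron_lib.db import api as db_api


    def get_connection(context, model, conn_id):
        # Running the query inside an explicit reader transaction avoids the
        # "ORM session: SQL execution without transaction in progress" warning.
        with db_api.CONTEXT_READER.using(context):
            return (context.session.query(model)
                    .filter_by(id=conn_id)
                    .one_or_none())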

[Yahoo-eng-team] [Bug 2075349] Re: JSONDecodeError when OIDCRedirectURI is the same as the Keystone OIDC auth endpoint

2024-09-09 Thread Takashi Kajinami
** Also affects: puppet-keystone
   Importance: Undecided
   Status: New

** Changed in: puppet-keystone
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Changed in: puppet-keystone
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2075349

Title:
  JSONDecodeError when OIDCRedirectURI is the same as the Keystone OIDC
  auth endpoint

Status in OpenStack Keystone OIDC Integration Charm:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in puppet-keystone:
  In Progress

Bug description:
  This bug is about test failures for jammy-caracal, jammy-bobcat, and
  jammy-antelope in cherry-pick commits from this change:

  https://review.opendev.org/c/openstack/charm-keystone-openidc/+/922049

  That change fixed some bugs in the Keystone OpenIDC charm and added
  some additional configuration options to help with proxies.

  The tests all fail with a JSONDecodeError during the Zaza tests for
  the Keystone OpenIDC charm. Here is an example of the error:

  Expecting value: line 1 column 1 (char 0)
  Traceback (most recent call last):
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/requests/models.py", line 
974, in json
  return complexjson.loads(self.text, **kwargs)
    File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
  return _default_decoder.decode(s)
    File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
  obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
  raise JSONDecodeError("Expecting value", s, err.value) from None
  json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
    File "/home/jadon/py3-venv/lib/python3.10/site-packages/cliff/app.py", line 
414, in run_subcommand
  self.prepare_to_run_command(cmd)
    File "/home/jadon/py3-venv/lib/python3.10/site-packages/osc_lib/shell.py", 
line 516, in prepare_to_run_command
  self.client_manager.auth_ref
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/osc_lib/clientmanager.py", 
line 208, in auth_ref
  self._auth_ref = self.auth.get_auth_ref(self.session)
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/keystoneauth1/identity/v3/federation.py",
 line 62, in get_auth_ref
  auth_ref = self.get_unscoped_auth_ref(session)
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/keystoneauth1/identity/v3/oidc.py",
 line 293, in get_unscoped_auth_ref
  return access.create(resp=response)
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/keystoneauth1/access/access.py",
 line 36, in create
  body = resp.json()
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/requests/models.py", line 
978, in json
  raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
  requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
  clean_up ListServer: Expecting value: line 1 column 1 (char 0)
  END return value: 1

  According to the debug output, the failure happens during the OIDC
  authentication flow. Testing with the OpenStack CLI shows the failure
  happening right after this request:

  REQ: curl -g -i --insecure -X POST 
https://10.70.143.111:5000/v3/OS-FEDERATION/identity_providers/keycloak/protocols/openid/auth
 -H "Authorization: 
{SHA256}45dbb29ea555e0bd24995cbb1481c8ac66c2d03383bc0c335be977d0daaf6959" -H 
"User-Agent: openstacksdk/3.3.0 keystoneauth1/5.7.0 python-requests/2.32.3 
CPython/3.10.12"
  Starting new HTTPS connection (1): 10.70.143.111:5000
  RESP: [200] Connection: Keep-Alive Content-Length: 0 Date: Tue, 30 Jul 2024 
19:28:17 GMT Keep-Alive: timeout=75, max=1000 Server: Apache/2.4.52 (Ubuntu)
  RESP BODY: Omitted, Content-Type is set to None. Only text/plain, 
application/json responses have their bodies logged.

  This request is unusual in that it is a POST request with no request
  body, and the response is empty. The empty response causes the
  JSONDecodeError: the keystoneauth package expects the request for a
  Keystone token to return a JSON document, and an empty string is not a
  valid JSON document.
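
  A minimal reproduction of that failure mode; the Response object is
  built by hand here purely for illustration (_content is a private
  attribute, set only to simulate the empty body):

    import requests

    resp = requests.Response()
    resp.status_code = 200
    resp._content = b""  # empty body, like the misconfigured endpoint returns

    try:
        resp.json()
    except requests.exceptions.JSONDecodeError as exc:
        # Prints: Expecting value: line 1 column 1 (char 0)
        print("JSONDecodeError:", exc)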

  This strange behavior happens due to a misconfiguration in the
  mod_auth_openidc Apache configuration. I looked up how Kolla-Ansible
  configures OpenIDC in Keystone, and I noticed that they used a
  different value for the OIDCRedirectURI in the mod_auth_openidc Apache
  configuration than the Keystone OpenIDC charm. The value of
  OIDCRedirectURI is supposed to be a fake URI that does not map to any
  real URI in the protected service. The fake URI should be protected by
  mod_auth_openidc in Apache's configuration. Whe