Just as @seb128 said in
https://bugs.launchpad.net/ubuntu/+source/network-manager-openvpn/+bug/1993634/comments/5,
the GUI part is what is currently missing.
--
Apparently Ceph has started to produce Quincy packages for Jammy.
-> https://download.ceph.com/debian-quincy/dists/jammy/
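For anyone who wants to test those packages before the cloud archive catches up, a hedged sketch of pointing apt at that repository (release-key import and pinning are deliberately left out; verify against the Ceph installation docs):
```
# Add the upstream Quincy repository for Jammy (sketch only; key handling omitted).
echo "deb https://download.ceph.com/debian-quincy/ jammy main" \
  | sudo tee /etc/apt/sources.list.d/ceph-quincy.list
sudo apt update
```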
--
https://bugs.launchpad.net/bugs/2043336
Title:
[SRU] ceph 17.2
While this is fixed for Victoria with 17.3, there are no packages
provided by the Cloud Archive yet -
https://bugs.launchpad.net/cloud-archive/+bug/1947518/comments/27
** Also affects: ubuntu
Importance: Undecided
Status: New
** Package changed: ubuntu => cloud-archive
--
As I just ran into this very issue, may I ask when a new package for
Victoria is going to be released?
According to
https://openstack-ci-reports.ubuntu.com/reports/cloud-archive/victoria_versions.html,
17.3.0 is still only in proposed and updates are still at
17.2.1-0ubuntu1~cloud2, which does NOT
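For reference, a hedged sketch of checking which pocket currently serves which version (the package name and tooling are just examples):
```
# Show the candidate version and the pocket it comes from.
apt-cache policy ceph-common
# The cloud archive's proposed pocket has to be enabled explicitly, e.g.:
sudo add-apt-repository cloud-archive:victoria-proposed
```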
Thanks for picking up on this, Lucas!
--
https://bugs.launchpad.net/bugs/1958629
Title:
Deprecation warnings about Proc.new
Lucas Kanashiro, any chance this could be fixed for Focal?
--
This is still very broken even with the promoted fix of
https://review.opendev.org/c/openstack/python-openstackclient/+/792950/
and running with the VERY latest and greatest:
```
openstack --os-volume-api-version 3.60 limits show --absolute --project $SOMEPROJECT
```
returns the same data for all projects.
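To make the reproduction explicit, a hedged sketch (the project IDs are placeholders; with the bug present, both calls print identical absolute limits regardless of project):
```
# Compare absolute limits of two different projects; identical output
# despite different quotas/usage demonstrates the reported behaviour.
openstack --os-volume-api-version 3.60 limits show --absolute --project "$PROJECT_A"
openstack --os-volume-api-version 3.60 limits show --absolute --project "$PROJECT_B"
```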
Public bug reported:
Upon running e.g. Puppet's r10k, which makes use of ruby-faraday
(version 0.15.4-3), a few lines of deprecation warnings are thrown on
every invocation:
# /usr/bin/r10k deploy environment -p
/usr/lib/ruby/vendor_ruby/faraday/options.rb:166: warning: Capturing the given
blo
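For context, this is the standard Ruby 2.7 deprecation for Proc.new capturing the enclosing method's block; a minimal sketch to reproduce it outside of faraday (the one-liner is illustrative, not faraday's actual code):
```
# On Ruby 2.7, calling Proc.new without a block (to capture the block passed
# to the enclosing method) triggers this deprecation; faraday 0.x does that
# in options.rb.
ruby -W:deprecated -e 'def capture; Proc.new; end; capture { }'
# expected warning (wording per Ruby 2.7):
# -e:1: warning: Capturing the given block using Proc.new is deprecated; use `&block` instead
```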
I suppose it's
https://git.launchpad.net/~ubuntu-openstack-dev/ubuntu/+source/glance/commit/?id=39ce4e7eafc33ef2ddc61c338230b4afe1eeb79b
--
https://bugs.launchpad.net/bugs/1955022
Thanks Corey for picking this up so quickly. Do you mind sharing a link to
where/what was changed?
Will there be any backporting for packages of current OpenStack releases, if I
may ask?
--
** Summary changed:
- Init script for glance-api ignores any additional config files
+ Init script for glance-api ignores additional config files
--
https://bugs.launchpad.net/bugs/1955022
Public bug reported:
The init script at /etc/init.d/glance-api (likely created by the
openstack-pkg-tools) uses the settings
```
PROJECT_NAME=glance
NAME=${PROJECT_NAME}-api
CONFIG_FILE=/etc/${PROJECT_NAME}/${NAME}.conf
```
to later add the daemon argument ``--config-file /etc/glance/glance-api.conf``
This
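For illustration, a hedged sketch of what the generated start line effectively passes versus what would be needed to honour additional files (the second file name is a placeholder; oslo.config-based services accept repeated --config-file options):
```
# What the init script effectively runs (single, hard-coded config file):
/usr/bin/glance-api --config-file /etc/glance/glance-api.conf

# What would be needed to pick up an additional file (name is illustrative):
/usr/bin/glance-api \
  --config-file /etc/glance/glance-api.conf \
  --config-file /etc/glance/glance-api-local.conf
```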
Thanks Sebastien for getting back to me.
Tools like arping are used as helpers within other tools or scripts. A very
important point of integration is the return code: something returning non-zero
is considered to have failed.
It's simply not an option to sell people on adding multiple lines of code
Public bug reported:
When running a gratuitous / unsolicited ARP via "arping -U" on Ubuntu Focal,
a return code of 1 is given instead of always 0 (as there is no ARP reply
expected).
Focal:
--- cut ---
# arping -U -c1 -I eth0 127.0.0.1; echo "ReturnCode: $?"
ARPING 127.0.0.1 from 127.0.0.1 eth0
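Until the exit code is made consistent, a hedged workaround sketch for scripts that only need the gratuitous ARP side effect (interface and address are placeholders):
```
# iputils arping exits non-zero when no reply is received, which is expected
# for an unsolicited ARP; ignore the status in this specific case.
arping -U -c1 -I eth0 "$LOCAL_IP" || true
```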
This post to the ML seems related / hitting the same issue:
http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025292.html
--
https://bugs.launchpad.net/bugs/1940976
@johnsom thanks a bunch for getting back to me.
No, I am not using a coordinator. As discussed on IRC, Designate
apparently requires a DLM even when running just as a single instance to
coordinate multiple updates to a single zone?
As I wrote in https://review.opendev.org/c/openstack/designate/+/7
@Slawek are you also pushing the new packages to the Ubuntu Cloud Archive?
I.e.,
https://openstack-ci-reports.ubuntu.com/reports/cloud-archive/ussuri_versions.html
does not show any 16.4.1 as of yet.
--
@johnsom Could you elaborate on what I should check for exactly? I tested
this while having reduced our set of three designate instances to just
one to rule out any side-effects in this regard.
Are you talking about setting up a coordinator as described here?
https://docs.openstack.org/designate/late
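For reference, a hedged sketch of what enabling a coordination backend for Designate roughly looks like (the etcd endpoint and the list of services to restart are placeholders; see the Designate documentation linked above):
```
# Point Designate's tooz-based coordination at a DLM backend (endpoint illustrative).
sudo tee -a /etc/designate/designate.conf <<'EOF'
[coordination]
backend_url = etcd3+http://192.0.2.10:2379
EOF
sudo systemctl restart designate-central designate-worker designate-producer
```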
** Also affects: cloud-archive
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1940976
Title:
Race condition in zone serial generation on concurrent changes
** Also affects: designate (Ubuntu)
Importance: Undecided
Status: New
--
@hopem thanks for your nice reply and the complete overview of the
situation.
I do understand the issue with exception handling and propagation between
privsep and the reader.
As one cannot catch all exceptions or erroneous conditions that systems might
reach, a major improvement would be to con
Thanks all for really digging into the issue!
Quite honestly, reverting that one commit might have fixed the observed issue.
But having a potential ~3 second delay in the code path should not have this
impact at all.
What I am trying to say is that there might be a whole other issue with timing
** Also affects: python-msgpack (Ubuntu)
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1937261
Title:
python3-msgpack package broken due to outdated cytho
One more update on the actual cause of the missing cmsg shared object of
msgpack in the Ubuntu Cloud Archive package for Ussuri on Ubuntu Bionic
...
Even with Cython available, the build simply fails, but that is gracefully
ignored:
--- cut ---
[...]
running build_py
I dug a little deeper into the issue ... apparently the python3-msgpack
package provided by Ubuntu Cloud Archive does not contain the cmsg
extension - that's why msgpack is using its pure-Python, but slow,
fallback.
python3-msgpack=0.5.6 from Ubuntu Bionic contains:
--- cut --
# dpkg -L python3
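A quick hedged way to verify at runtime which implementation is actually in use (module layout per upstream msgpack-python; adjust to the installed version):
```
# With the C extension present, Packer comes from a compiled module;
# with the pure-Python fallback it comes from msgpack.fallback.
python3 -c 'import msgpack; print(msgpack.version, msgpack.Packer.__module__)'
# The compiled extension, if shipped, shows up as a .so in the package contents:
dpkg -L python3-msgpack | grep '\.so$' || echo "no compiled extension shipped"
```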
Public bug reported:
After a successful upgrade of the control plane from Train -> Ussuri on
Ubuntu Bionic, we upgraded a first compute / network node and
immediately ran into issues with Neutron:
We noticed that Neutron is extremely slow in setting up and wiring the
network ports, so slow it wo
Corey, Jared, I believe your analysis is running a little in the wrong
direction here:
1) We run OpenStack TRAIN (15) and also experienced the described
issues. So there cannot be any relation to the database schema upgrades.
2) We did experience the issue even before the recent upgrade and we bel
Yes, Billy Olsen, we are a little in doubt about this
(https://bugs.launchpad.net/neutron/+bug/1927868/comments/18) as well.
We have been observing such non-functioning gateways on our Train
installation occasionally, also before this patch / update.
Usually a "clear gateway" and a recreation via
We run OpenStack Train on Ubuntu Bionic and observe similar issues with
L3-HA routers after having updated from 15.3.2 -> 15.3.3.
Currently we are still collecting evidence, but in the case of an affected
router we have already observed that the gateway
interfaces on all nodes running a
This issue also affects OpenStack Train - see my proposed backport at:
https://bugs.launchpad.net/tripleo/+bug/1865754/comments/33
--
https://bugs.launchpad.net/bugs/1865754