+1 makes sense. Thanks for doing this validation @chris.macnaughton
--
https://bugs.launchpad.net/bugs/1883112
Title:
rbd-target-api crashes with python TypeError
I've filed https://bugs.launchpad.net/charm-mysql-router/+bug/1973177 to
track this separately.
--
https://bugs.launchpad.net/bugs/1907250
Title:
[focal] charm becomes blocked with workload
One of the causes of a charm going into a "Failed to connect to MySQL"
state is that a connection to the database failed when the db-router
charm attempted to restart the db-router service. Currently the charm
will only retry the connection in response to a single return code from
the MySQL server. The return
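The shape of the wider retry being suggested, as a hedged Python sketch;
the error-code set, names and timings below are illustrative assumptions,
not the charm's actual code:

import time

import mysql.connector
from mysql.connector import errorcode

# Retry on connection-level failures only (2003: cannot connect,
# 2013: lost connection); anything else is surfaced immediately.
TRANSIENT = {errorcode.CR_CONN_HOST_ERROR, errorcode.CR_SERVER_LOST}

def connect_with_retry(attempts=5, delay=10, **dsn):
    for _ in range(attempts):
        try:
            return mysql.connector.connect(**dsn)
        except mysql.connector.Error as err:
            if err.errno not in TRANSIENT:
                raise
            time.sleep(delay)
    raise TimeoutError("database did not become reachable")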
Public bug reported:
[Impact]
* ceph-iscsi on Focal talking to a Pacific or later Ceph cluster
* rbd-target-api service fails to start if there is a blocklist
entry for the unit.
* When the rbd-target-api service starts it checks if any of the
ip addresses on the machine it is running on
*** This bug is a duplicate of bug 1883112 ***
https://bugs.launchpad.net/bugs/1883112
** This bug has been marked a duplicate of bug 1883112
rbd-target-api crashes with python TypeError
--
Public bug reported:
While testing with OpenStack, guests failed to launch and these denied
messages were logged:
[ 8307.089627] audit: type=1400 audit(1649684291.592:109):
apparmor="DENIED" operation="mknod" profile="swtpm"
name="/run/libvirt/qemu/swtpm/11-instance-000b-swtpm.sock"
pid=1412
** Patch added: "ceph-iscsi-deb.diff"
https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1965280/+attachment/5569987/+files/ceph-iscsi-deb.diff
--
Verification on impish failed due to
https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1965280
--
https://bugs.launchpad.net/bugs/1883112
Title:
rbd-target-api crashes with python TypeError
Public bug reported:
The rbd-target-api fails to start on Ubuntu Impish (21.10) and later.
This appears to be caused by a werkzeug package revision check in rbd-
target-api. The check is used to decide whether to add an
OpenSSL.SSL.Context or an ssl.SSLContext. The code comment suggests that
ssl.SSLContext
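A hedged sketch of that kind of gate; the cutoff version and certificate
paths below are assumptions for illustration, not the values rbd-target-api
actually uses:

import ssl

import werkzeug
from pkg_resources import parse_version

if parse_version(werkzeug.__version__) >= parse_version("0.15"):
    # Newer werkzeug: hand Flask a stdlib ssl.SSLContext.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain("/etc/ceph/iscsi-gateway.crt",
                            "/etc/ceph/iscsi-gateway.key")
else:
    # Older werkzeug: fall back to a pyOpenSSL context.
    from OpenSSL import SSL
    context = SSL.Context(SSL.TLSv1_2_METHOD)
    context.use_certificate_file("/etc/ceph/iscsi-gateway.crt")
    context.use_privatekey_file("/etc/ceph/iscsi-gateway.key")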
Tested successfully on focal with 3.4-0ubuntu2.1
Tested with the ceph-iscsi charm's functional tests, which were
previously failing.
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.4 LTS
Release:        20.04
Codename: focal
$ apt-cache policy c
** Patch added: "gw-deb.diff"
https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1883112/+attachment/5569162/+files/gw-deb.diff
--
https://bugs.launchpad.net/bugs/1883112
Title:
rbd-target-api crashes with python TypeError
Thank you for the update, Robie. I proposed the debdiff based on the fix
that had landed upstream because I (wrongly) thought that was what the
SRU policy required. I think it makes more sense to go for the minimal
fix you suggest.
--
** Description changed:
+ [Impact]
+
+ * rbd-target-api service fails to start if there is a blocklist
+ entry for the unit, making the service unavailable.
+
+ * When the rbd-target-api service starts it checks if any of the
+ ip addresses on the machine it is running on are listed as
+ blocklisted.
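A rough sketch of the startup check being described, assuming the
blocklist is read via 'ceph osd blocklist ls'; the parsing and interface
walk are simplified:

import subprocess

import netifaces

def local_addresses():
    addrs = set()
    for iface in netifaces.interfaces():
        for entry in netifaces.ifaddresses(iface).get(netifaces.AF_INET, []):
            addrs.add(entry["addr"])
    return addrs

def blocklisted_addresses():
    out = subprocess.check_output(
        ["ceph", "osd", "blocklist", "ls"], text=True)
    # Entries look like "addr:port/nonce ..."; keep only the address.
    return {line.split(":", 1)[0] for line in out.splitlines() if line}

if local_addresses() & blocklisted_addresses():
    raise SystemExit("a local address is blocklisted; rbd-target-api "
                     "would refuse to start")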
** Patch added: "deb.diff"
https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1883112/+attachment/5562748/+files/deb.diff
--
https://bugs.launchpad.net/bugs/1883112
Title:
rbd-target-api crashes with python TypeError
** Changed in: ceph-iscsi (Ubuntu)
Status: New => Confirmed
--
https://bugs.launchpad.net/bugs/1883112
Title:
rbd-target-api crashes with python TypeError
s/The issue appears when using the mysql to/The issue appears when using
the mysql shell to/
--
https://bugs.launchpad.net/bugs/1954306
Title:
Action `remove-instance` works but appears t
I don't think this is a charm bug. The issue appears when using the
mysql to remove a node from the cluster. From what I can see you cannot
persist group_replication_force_members and it is correctly unset. So
the error being reported seems wrong.
https://pastebin.ubuntu.com/p/sx6ZB3rs6r/
root@juju-1
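For context, a hedged sketch of the removal flow in the MySQL Shell
Python mode (mysqlsh --py); the cluster name and address are made up:

# Inside mysqlsh --py, where `dba` and `session` are provided by the shell.
cluster = dba.get_cluster("jujuCluster")
cluster.remove_instance("clusteradmin@10.5.0.12:3306", force=True)

# group_replication_force_members cannot be persisted, so finding it
# unset afterwards is expected rather than an error:
row = session.run_sql(
    "SELECT @@global.group_replication_force_members").fetch_one()
print(row[0])  # expected: ''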
** Also affects: mysql-8.0 (Ubuntu)
Importance: Undecided
Status: New
** Changed in: charm-mysql-innodb-cluster
Status: New => Invalid
--
Perhaps I'm missing something but this does not seem to be a bug in the
rabbitmq-server charm. It may be easier to observe there but the root
cause is elsewhere.
** Changed in: charm-rabbitmq-server
Status: New => Invalid
--
Tested successfully on focal victoria using 1:11.0.0-0ubuntu1~cloud1. I
created an encrypted volume and attached it to a VM.
cinder type-create LUKS
cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512
--control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
c
Tested successfully on focal wallaby using 2:12.0.0-0ubuntu2~cloud0. I
created an encrypted volume and attached it to a VM.
cinder type-create LUKS
cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512
--control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
c
Tested successfully on hirsute using 2:12.0.0-0ubuntu2. I created an
encrypted volume and attached it to a VM.
cinder type-create LUKS
cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512
--control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
cinder create
** Changed in: charm-layer-ovn
Status: New => Confirmed
** Changed in: charm-layer-ovn
Importance: Undecided => High
** Changed in: charm-layer-ovn
Assignee: (unassigned) => Liam Young (gnuoy)
--
** Changed in: charm-neutron-gateway
Assignee: (unassigned) => Liam Young (gnuoy)
** Changed in: charm-neutron-gateway
Importance: Undecided => High
--
** Changed in: charm-neutron-gateway
Status: Invalid => Confirmed
** Changed in: neutron (Ubuntu)
Status: Confirmed => Invalid
--
https://bugs.launchpad.net/bugs/1944424
A patch was introduced [0] "..which sets the backup gateway
device link down by default. When the VRRP sets the master state in
one host, the L3 agent state change procedure will
do link up action for the gateway device.".
This change causes an issue when using keepalived 2.X (focal+) which
is fix
** Also affects: neutron (Ubuntu)
Importance: Undecided
Status: New
** Changed in: neutron (Ubuntu)
Status: New => Invalid
--
https://bugs.launchpad.net/bugs/1943863
I have tested the rocky scenario that was failing for me. Trilio on
Train + OpenStack on Rocky. The Trilio functional test to snapshot a
server failed without the fix and passed once python3-oslo.messaging
8.1.0-0ubuntu1~cloud2.2 was installed and services restarted
** Tags removed: verification-rocky-needed
Public bug reported:
It seems that updating the role attribute of a connection has no effect
on existing connections. For example, when investigating another bug I
needed to disable rbac, but to get that to take effect I needed to
either restart the southbound listener or the ovn-controller.
fwiw t
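A hedged sketch of that workaround; the connection target and the
service unit name are assumptions (they vary by deployment and
packaging):

import subprocess

# Clear the RBAC role on the southbound connection (illustrative
# target; match it to your deployment)...
subprocess.run(["ovn-sbctl", "set-connection", "role=", "pssl:6642"],
               check=True)
# ...then restart ovn-controller on each hypervisor so the change is
# actually picked up by existing connections.
subprocess.run(["systemctl", "restart", "ovn-controller"], check=True)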
Public bug reported:
When using OpenStack Ussuri with OVN 20.03 and adding a floating IP
address to a port, the ovn-controller on the hypervisor repeatedly
reports:
2021-03-02T10:33:35.517Z|35359|ovsdb_idl|WARN|transaction error:
{"details":"RBAC rules for client
\"juju-eab186-zaza-d26c8c079cc7-
I have tested the package in victoria proposed (0.3.0-0ubuntu2) and it
passed. I verified it by deploying the octavia charm and running its
focal victoria functional tests which create an ovn loadbalancer and
check it is functional.
The log of the test run is here:
https://openstack-ci-
reports.u
I have tested the package in groovy proposed (0.3.0-0ubuntu2) and it
passed. I verified it by deploying the octavia charm and running its
groovy victoria functional tests which create an ovn loadbalancer and
check it is functional.
The log of the test run is here:
https://openstack-ci-
reports.ubu
https://code.launchpad.net/~gnuoy/ubuntu/+source/ovn-octavia-
provider/+git/ovn-octavia-provider/+merge/397023
--
https://bugs.launchpad.net/bugs/1896603
Title:
ovn-octavia-provider: Cannot create listener d
** Description changed:
- Kuryr-Kubernetes tests running with ovn-octavia-provider started to fail
- with "Provider 'ovn' does not support a requested option: OVN provider
- does not support allowed_cidrs option" showing up in the o-api logs.
+ [Impact]
- We've tracked that to check [1] getting
** Also affects: ovn-octavia-provider (Ubuntu)
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1896603
Title:
ovn-octavia-provider: Cannot create listener d
I have tested focal and groovy and it is only happening on groovy. I have
not tried Hirsute.
--
https://bugs.launchpad.net/bugs/1904199
Title:
[groovy-victoria] "gwcli /iscsi-targets/ create
I don't think this is a charm issue. It looks like an incompatibility
between ceph-iscsi and python3-werkzeug in groovy.
# /usr/bin/rbd-target-api
* Serving Flask app "rbd-target-api" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production
Yep, that's the traceback I'm seeing.
Charm shows:
2020-06-10 12:45:57 ERROR juju-log amqp:40: Hook error:
Traceback (most recent call last):
File
"/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms/reactive/__init__.py",
line 74, in main
bus.dispatch(restricted=r
It seems sqlalchemy-utils may have been removed recently in error
https://git.launchpad.net/ubuntu/+source/masakari/tree/debian/changelog?id=4d933765965f3d02cd68c696cc69cf53b7c6390d#n3
--
Public bug reported:
Package seems to be missing a dependency on sqlalchemy-utils *1. The
issue shows itself when running masakari-manage with the new 'taskflow'
section enabled *2
*1
https://opendev.org/openstack/masakari/src/branch/stable/ussuri/requirements.txt#L29
*2 https://review.opendev.o
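A minimal way to see the missing dependency without running
masakari-manage itself; the import name is the one sqlalchemy-utils
installs:

try:
    import sqlalchemy_utils  # noqa: F401  (what the taskflow code pulls in)
except ImportError as err:
    print("masakari-manage would fail the same way:", err)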
Having looked into it further, it seems to be the name of the node that
has changed.
juju deploy cs:bionic/ubuntu bionic-ubuntu
juju deploy cs:focal/ubuntu focal-ubuntu
juju run --unit bionic-ubuntu/0 "sudo apt install --yes crmsh pacemaker"
juju run --unit focal-ubuntu/0 "sudo apt install --yes c
Public bug reported:
Testing of masakari on focal with the zaza tests failed because the test
checks that all pacemaker nodes are online. This check failed due to the
appearance of a new node called 'node1' which was marked as offline. I
don't know where that node came from or what it is supposed to represent
bu
The source option was not set properly for the ceph application, leading
to the python rbd lib being way ahead of the ceph cluster.
** Changed in: charm-glance
Assignee: Liam Young (gnuoy) => (unassigned)
** Changed in: charm-glance
Status: New => Invalid
** Changed in:
** Also affects: glance (Ubuntu)
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1873741
Title:
Using ceph as a backing store fails on ussuri
** Summary changed:
- rbd pool name is hardcoded
+ Checks fail when creating an iscsi target
--
https://bugs.launchpad.net/bugs/1864838
Title:
skipchecks=true is needed when deployed on
Public bug reported:
See https://docs.ceph.com/docs/master/rbd/iscsi-target-cli/ and the
line:
"If not using RHEL/CentOS or using an upstream or ceph-iscsi-test
kernel, the skipchecks=true argument must be used. This will avoid the
Red Hat kernel and rpm checks:"
** Affects: ceph-iscsi (Ubuntu)
Public bug reported:
ceilometer-collector fails to stop if it cannot connect to message
broker.
To reproduce (assuming amqp is running on localhost):
1) Comment out the 'oslo_messaging_rabbit' section from
/etc/ceilometer/ceilometer.conf. This will trigger ceilometer-collector to look
locally f
Sahid pointed out that swift-init will traverse a search path and
start a daemon for every config file it finds, so no change to the init
script is needed. Initial tests suggest this completely covers my use
case. I will continue testing and report back. I will mark the bug as
invalid for the mo
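A small sketch of the behaviour Sahid described, with one daemon spawned
per config file found; paths are illustrative:

import glob
import subprocess

# swift-init style: 1.conf (local) and 2.conf (replication) under the
# account-server directory each get their own process.
for conf in sorted(glob.glob("/etc/swift/account-server/*.conf")):
    subprocess.Popen(["swift-account-server", conf])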
Hi Sahid,
In our deployment for swift global replication we have two account services.
One for local and one for replication:
# cat /etc/swift/account-server/1.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6002
workers = 1
[pipeline:main]
pipeline = recon account-server
[filter:recon]
use = egg
Hi Cory, the init script update is to support swift global replication.
The upstream code and the proposed changes to the charm support the
feature in mitaka, so ideally the support would go right back to trusty-
mitaka.
--
** Description changed:
- On swift proxy servers there are three groups of services: account,
+ On swift storage servers there are three groups of services: account,
container and object.
Each of these groups is comprised of a number of services, for instance:
server, auditor, replicator
Public bug reported:
On swift proxy servers there are three groups of services: account,
container and object.
Each of these groups is comprised of a number of services, for instance:
server, auditor, replicator etc
Each service has its own init script but all the services in a group are
configu
I can confirm that the disco proposed repository fixes this issue.
I have run the OpenStack team's mojo spec for disco stein, which fails
due to this bug. I then reran the test with the charms configured to
install from the disco proposed repository; the bug was fixed and the
tests passed.
Log f
Hi Christian,
Thanks for your comments. I'm sure you spotted it but just to make it
clear, the issue occurs with bonded and unbonded dpdk interfaces. I've emailed
upstream here *1.
Thanks
Liam
*1 https://mail.openvswitch.org/pipermail/ovs-discuss/2019-July/048997.html
--
** Changed in: dpdk (Ubuntu)
Status: Invalid => New
--
https://bugs.launchpad.net/bugs/1833713
Title:
Metadata is broken with dpdk bonding, jumbo frames and metadata from
qdhcp
Ubuntu: eoan
DPDK pkg: 18.11.1-3
OVS DPDK pkg: 2.11.0-0ubuntu2
Kernel: 5.0.0-20-generic
If a server has an ovs bridge with a dpdk device for external
network access and a network namespace attached then sending data out of
the namespace fails if jumbo frames are enabled.
Setup:
root@node-licetu
Ubuntu: eoan
DPDK pkg: 18.11.1-3
OVS DPDK pkg: 2.11.0-0ubuntu2
Kernel: 5.0.0-20-generic
If two servers each have an ovs bridge with a dpdk device for external
network access and a network namespace attached then communication
between taps in the namespaces fails if jumbo frames are enabled. If on
At some point when I was attempting to simplify the test case I
dropped setting the mtu on the dpdk devices via ovs, so the above test
is invalid. I've marked the bug against dpdk as invalid while I redo
the tests.
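For the record, the step that was dropped: on a DPDK port the MTU has to
be requested through OVS rather than "ip link". A hedged one-liner, with
an illustrative interface name:

import subprocess

subprocess.run(
    ["ovs-vsctl", "set", "Interface", "dpdk0", "mtu_request=9000"],
    check=True)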
** Changed in: dpdk (Ubuntu)
Status: New => Invalid
--
Given the above I am going to mark this as affecting the dpdk package
rather than the charm.
** Also affects: dpdk (Ubuntu)
Importance: Undecided
Status: New
--
I think this is a packaging bug
** Also affects: designate (Ubuntu)
Importance: Undecided
Status: New
** Changed in: charm-designate
Status: Triaged => Invalid
** Changed in: charm-designate
Assignee: Liam Young (gnuoy) => (unassigned)
--
stack-dashboard (3:15.0.0-0ubuntu1~cloud0) ...", thanks.
** Changed in: charm-openstack-dashboard
Assignee: Liam Young (gnuoy) => (unassigned)
** Changed in: charm-openstack-dashboard
Status: New => Incomplete
--
** Changed in: charm-openstack-dashboard
Assignee: (unassigned) => Liam Young (gnuoy)
--
https://bugs.launchpad.net/bugs/1832075
Title:
[19.04][Queens -> Rocky] python3-pymy
The package from rocky-proposed worked for me. Version info below:
python3-glance-store:
Installed: 0.26.1-0ubuntu2.1~cloud0
Candidate: 0.26.1-0ubuntu2.1~cloud0
Version table:
*** 0.26.1-0ubuntu2.1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu
bionic-proposed/rocky
The cosmic package worked for me too. Version info below:
python3-glance-store:
Installed: 0.26.1-0ubuntu2.1
Candidate: 0.26.1-0ubuntu2.1
Version table:
*** 0.26.1-0ubuntu2.1 500
500 http://archive.ubuntu.com/ubuntu cosmic-proposed/universe amd64
Packages
100 /var/lib/dpkg/s
The disco package worked for me too. Version info below:
# apt-cache policy python3-glance-store
python3-glance-store:
Installed: 0.28.0-0ubuntu1.1
Candidate: 0.28.0-0ubuntu1.1
Version table:
*** 0.28.0-0ubuntu1.1 500
500 http://archive.ubuntu.com/ubuntu disco-proposed/main amd64 Pac
Looks good to me. Tested 0.28.0-0ubuntu1.1~cloud0 from
cloud-archive:stein-proposed.
$ openstack image create --public --file
/home/ubuntu/images/bionic-server-cloudimg-amd64.img bionic-test
500 Internal Server Error: The server has ei
It does not appear to have been fixed upstream yet as this patch is
still in place at master:
https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/swift/store.py#L1635
--
** Description changed:
[Impact]
If we upload a large image (larger than 1G), the glance_store will hit a
Unicode error. To fix this a patch has been merged in upstream master and
backported to stable rocky.
[Test Case]
+ Deploy glance related to swift-proxy using the object-store relat
Hi koalinux, can you please provide the requested logs or remove the
field-critical tag?
** Changed in: cloud-archive
Status: New => Incomplete
** Changed in: ceph (Ubuntu)
Status: New => Incomplete
** Changed in: libvirt (Ubuntu)
Status: New => Incomplete
--
** Description changed:
Description:-
So while testing python3 with Fedora in [1], Found an issue while
running nova-api behind wsgi. It fails with below Traceback:-
2018-12-18 07:41:55.364 26870 INFO nova.api.openstack.requestlog
[req-e1af4808-ecd8-47c7-9568-a5dd9691c2c9 - - - - -]
I don't think this is related to the charm, it looks like a bug in
upstream nova.
** Also affects: nova (Ubuntu)
Importance: Undecided
Status: New
** No longer affects: nova (Ubuntu)
** Also affects: nova
Importance: Undecided
Status: New
--
** Changed in: charm-aodh
Status: New => Invalid
** Changed in: oslo.i18n
Status: New => Invalid
--
https://bugs.launchpad.net/bugs/1799406
Title:
[SRU] Alarms fail on Rock
I have successfully run the mojo spec which was failing
(specs/full_stack/next_openstack_upgrade/queens). This boots an instance
on rocky which indirectly queries glance:
https://pastebin.canonical.com/p/7sVjF6QSNm/
** Tags removed: verification-rocky-needed
** Tags added: verification-rocky-done
** Description changed:
Hi,
When running unit tests under Python 3.7 when building the Rocky Debian
package in Sid, I get a never ending recursion. Please see the Debian
bug report:
https://bugs.debian.org/911947
Basically, it's this:
| File "/build/1st/glance-17.0.0/gl
Just to be clear, when I say I'm hitting it I mean I'm hitting it on a
deployed system, not just in unit tests.
--
https://bugs.launchpad.net/bugs/1800601
Title:
[SRU] Infinite recursion
Marking charm bug as invalid in light of the packaging fix.
** Changed in: charm-openstack-dashboard
Status: In Progress => Invalid
--
https://bugs.launchpad.net/bugs/1754508
** Changed in: ntp (Ubuntu)
Assignee: Liam Young (gnuoy) => (unassigned)
--
https://bugs.launchpad.net/bugs/1414925
Title:
ntpd seems to add offsets instead of subtracting them
I'm going to mark this as invalid against nova-compute as nova-compute
does not have a relation with percona anymore (Icehouse+, I believe).
** Changed in: charm-nova-compute
Status: Triaged => Invalid
--
** Tags added: kernel-bug-exists-upstream
** Changed in: linux (Ubuntu)
Status: Incomplete => Confirmed
--
https://bugs.launchpad.net/bugs/1720378
Title:
Two processes can bind to the same port
I've retested with linux-
image-4.4.9-040409-generic_4.4.9-040409.201605041832_amd64.deb but the
issue seems to persist.
--
https://bugs.launchpad.net/bugs/1720378
Title:
Two processes can bind to the same port
Thanks jsalisbury. I'll try on another kernel now.
The steps to reproduce on xenial:
sudo su -
apt install --yes apache2 haproxy
echo "
listen test
bind *:8776
bind :::8776
" > /etc/haproxy/haproxy.cfg
echo "
Listen 8776
DocumentRoot /var/www/html
" > /etc/apache2/sites-enabl
Thanks for the suggestions. I will try with an upstream kernel and also
add steps for reproducing
--
https://bugs.launchpad.net/bugs/1720378
Title:
Two processes can bind to the same port
Public bug reported:
On both xenial and zesty apache and haproxy seem to be able to bind to
the same port:
# netstat -peanut | grep 8776
tcp    0    0 0.0.0.0:8776    0.0.0.0:*    LISTEN    0    76856    26190/haproxy
tcp6   0    0 :::8776
Ok, so this has been broken in the charm for a while. The package-
shipped vhost should be disabled by the charm but, due to a bug, that
is not happening.
However, xenial and zesty both seem to allow apache to start when it
has a conflicting port with haproxy. If haproxy is running and bound to
8776 o
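One mechanism that produces exactly this netstat picture, as a runnable
sketch: a v4 wildcard bind and a v6 wildcard bind with IPV6_V6ONLY set
can coexist on the same port (whether this is what apache and haproxy
end up doing here is an assumption):

import socket

v4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
v4.bind(("0.0.0.0", 8776))
v4.listen()

v6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
v6.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
v6.bind(("::", 8776))
v6.listen()
print("both listeners bound to port 8776")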
** Changed in: charm-cinder
Assignee: (unassigned) => Liam Young (gnuoy)
--
https://bugs.launchpad.net/bugs/1720215
Title:
[artful] apachectl: Address already in use: AH00
** Also affects: murano (Ubuntu)
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1598208
Title:
murano uses deprecated psutil.NUM_CPUS
I think this is fixed in yakkety
--
https://bugs.launchpad.net/bugs/1637138
Title:
The trove-guest agent service does not start on xenial
Public bug reported:
When starting the trove guest agent on xenial it fails with:
2016-10-27 09:31:38.674 1366 CRITICAL root [-] NameError: global name '_LE' is
not defined
2016-10-27 09:31:38.674 1366 ERROR root Traceback (most recent call last):
2016-10-27 09:31:38.674 1366 ERROR root File "
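A generic reproduction of this class of failure (not trove's actual
module): the oslo.i18n marker _LE is referenced without being imported,
so the first error path hit dies with the NameError above:

import logging

LOG = logging.getLogger(__name__)

def handle():
    # from trove.common.i18n import _LE   # <- the missing import
    LOG.error(_LE("guest agent failed"))  # NameError: '_LE' is not defined

handle()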
Public bug reported:
After the package is installed, some of the files that support the
initialisation of the database seem to be missing, as does the policy.json.
The files that are missing:
/usr/lib/python2.7/dist-packages/mistral/actions/openstack/mapping.json
/usr/lib/python2.7/dist-packages/mis
** Changed in: ceph-osd (Juju Charms Collection)
Milestone: 16.07 => 16.10
** Changed in: ceph (Juju Charms Collection)
Milestone: 16.07 => 16.10
--
** Changed in: keystone (Juju Charms Collection)
Status: Fix Committed => Fix Released
--
https://bugs.launchpad.net/bugs/1578351
Title:
mitaka ksclient fails to connect to v6 keys
** Changed in: neutron-openvswitch (Juju Charms Collection)
Status: Fix Committed => Fix Released
--
https://bugs.launchpad.net/bugs/1546565
Title:
Ownership/Permissions of vhost_u
** No longer affects: hacluster (Juju Charms Collection)
--
https://bugs.launchpad.net/bugs/1488453
Title:
Package postinst always fail on first install when using systemd
** Changed in: cinder (Juju Charms Collection)
Status: In Progress => Fix Committed
** Changed in: neutron-gateway (Ubuntu)
Status: In Progress => Fix Committed
** Changed in: rabbitmq-server (Juju Charms Collection)
Status: In Progress => Fix Committed
--
Yes, removing '-d' fixed it, thank you
** Changed in: libvirt (Ubuntu)
Status: New => Invalid
--
https://bugs.launchpad.net/bugs/1567272
Title:
systemd claims libvirt-bin is dead
** Description changed:
- On Xenial systemd is not reporting the state of libvirtd properly or
+ On Xenial, systemd is not reporting the state of libvirtd properly or
shutting down on request.
# pgrep libvirtd
# systemctl start libvirt-bin.service
# systemctl status libvirt-bin.service
Public bug reported:
On Xenial, systemd is not reporting the state of libvirtd properly or
shutting down on request.
# pgrep libvirtd
# systemctl start libvirt-bin.service
# systemctl status libvirt-bin.service
● libvirt-bin.service - Virtualization daemon
Loaded: loaded (/lib/systemd/system/l
** Changed in: neutron-gateway (Ubuntu)
Status: New => In Progress
** Changed in: neutron-gateway (Ubuntu)
Assignee: (unassigned) => Liam Young (gnuoy)
--
Status: New
** Changed in: rabbitmq-server (Juju Charms Collection)
Status: New => In Progress
** Changed in: rabbitmq-server (Juju Charms Collection)
Assignee: (unassigned) => Liam Young (gnuoy)
** Changed in: rabbitmq-server (Juju Charms Collection)
Importance
Charms Collection)
Importance: Undecided => Medium
** Changed in: cinder (Juju Charms Collection)
Assignee: (unassigned) => Liam Young (gnuoy)
** Changed in: cinder (Juju Charms Collection)
Milestone: None => 16.04
** Also affects: ceph-radosgw (Juju Charms Collection)
Import