I have tested the package in groovy proposed (0.3.0-0ubuntu2) and it
passed. I verified it by deploying the octavia charm and running its
groovy victoria functional tests which create an ovn loadbalancer and
check it is functional.
The log of the test run is here:
https://openstack-ci-reports.ubu
I have tested the package in victoria proposed (0.3.0-0ubuntu2) and it
passed. I verified it by deploying the octavia charm and running its
focal victoria functional tests which create an ovn loadbalancer and
check it is functional.
The log of the test run is here:
https://openstack-ci-reports.u
Public bug reported:
When using Openstack Ussuri with OVN 20.03 and adding a floating IP
address to a port, the ovn-controller on the hypervisor repeatedly
reports:
2021-03-02T10:33:35.517Z|35359|ovsdb_idl|WARN|transaction error:
{"details":"RBAC rules for client
\"juju-eab186-zaza-d26c8c079cc7-
Public bug reported:
It seems that updating the role attribute of a connection has no effect
on existing connections. For example, when investigating another bug I
needed to disable RBAC, but to get that to take effect I had to either
restart the southbound listener or the ovn-controller.
fwiw t
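(A sketch of the workaround, assuming the upstream ovn-sbctl syntax and an
Ubuntu-style service name:)
# replace the southbound connection entry without an RBAC role; existing
# sessions keep the old role until the listener is restarted:
ovn-sbctl set-connection ptcp:6642
systemctl restart ovn-ovsdb-server-sb.service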
I have successfully run the mojo spec which was failing
(specs/full_stack/next_openstack_upgrade/queens). This boots an instance
on rocky which indirectly queries glance:
https://pastebin.canonical.com/p/7sVjF6QSNm/
** Tags removed: verification-rocky-needed
** Tags added: verification-rocky-done
** Changed in: charm-aodh
Status: New => Invalid
** Changed in: oslo.i18n
Status: New => Invalid
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1799406
Title:
[SRU] Alarms fail on Rock
Just to be clear, when I say I'm hitting it I mean I'm hitting it on a
deployed system, not just in unit tests.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1800601
Title:
[SRU] Infinite recursion
** Description changed:
Hi,
When running unit tests under Python 3.7 while building the Rocky Debian
package in Sid, I get a never-ending recursion. Please see the Debian
bug report:
https://bugs.debian.org/911947
Basically, it's this:
| File "/build/1st/glance-17.0.0/gl
Public bug reported:
See https://docs.ceph.com/docs/master/rbd/iscsi-target-cli/ and the
line:
"If not using RHEL/CentOS or using an upstream or ceph-iscsi-test
kernel, the skipchecks=true argument must be used. This will avoid the
Red Hat kernel and rpm checks:"
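(For reference, the form the linked docs give, run from the gwcli prompt;
the IQN here is illustrative:)
/iscsi-targets> create iqn.2003-01.com.ubuntu.iscsi-gw:test-target skipchecks=true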
** Affects: ceph-iscsi (Ubuntu)
** Summary changed:
- rbd pool name is hardcoded
+ Checks fail when creating an iscsi target
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1864838
Title:
skipchecks=true is needed when deployed on
Public bug reported:
ceilometer-collector fails to stop if it cannot connect to the message
broker.
To reproduce (assuming amqp is running on localhost):
1) Comment out the 'oslo_messaging_rabbit' section from
/etc/ceilometer/ceilometer.conf. This will trigger ceilometer-collector to look
locally f
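(A minimal sketch of that reproduction; the service name assumes the
sysvinit/upstart era this bug dates from:)
# comment out the whole [oslo_messaging_rabbit] section in
# /etc/ceilometer/ceilometer.conf, then:
service ceilometer-collector restart
service ceilometer-collector stop   # never completes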
** Description changed:
[Impact]
If we upload a large image (larger than 1G), the glance_store will hit a
Unicode error. To fix this a patch has been merged in upstream master and
backported to stable rocky.
[Test Case]
+ Deploy glance related to swift-proxy using the object-store relat
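(A sketch of the kind of upload used for the test case; file name and size
are illustrative, anything over 1G exercises the affected path:)
dd if=/dev/urandom of=big.img bs=1M count=2048   # create a 2G test image
openstack image create --file big.img big-test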
It does not appear to have been fixed upstream yet as this patch is
still in place at master:
https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/swift/store.py#L1635
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubun
Looks good to me. Tested 0.28.0-0ubuntu1.1~cloud0 from cloud-archive:stein-proposed
$ openstack image create --public --file
/home/ubuntu/images/bionic-server-cloudimg-amd64.img bionic-test
500 Internal Server Error: The server has ei
The disco package worked for me too. Version info below:
# apt-cache policy python3-glance-store
python3-glance-store:
Installed: 0.28.0-0ubuntu1.1
Candidate: 0.28.0-0ubuntu1.1
Version table:
*** 0.28.0-0ubuntu1.1 500
500 http://archive.ubuntu.com/ubuntu disco-proposed/main amd64 Pac
The cosmic package worked for me too. Version info below:
python3-glance-store:
Installed: 0.26.1-0ubuntu2.1
Candidate: 0.26.1-0ubuntu2.1
Version table:
*** 0.26.1-0ubuntu2.1 500
500 http://archive.ubuntu.com/ubuntu cosmic-proposed/universe amd64
Packages
100 /var/lib/dpkg/s
The package from rocky-proposed worked for me. Version info below:
python3-glance-store:
Installed: 0.26.1-0ubuntu2.1~cloud0
Candidate: 0.26.1-0ubuntu2.1~cloud0
Version table:
*** 0.26.1-0ubuntu2.1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu
bionic-proposed/rocky
Public bug reported:
On swift proxy servers there are three groups of services: account,
container and object.
Each of these groups is comprised of a number of services, for instance:
server, auditor, replicator etc
Each service has its own init script but all the services in a group are
configu
** Description changed:
- On swift proxy servers there are three groups of services: account,
+ On swift storage servers there are three groups of services: account,
container and object.
Each of these groups is comprised of a number of services, for instance:
server, auditor, replicator
I can confirm that the disco proposed repository fixes this issue.
I have run the openstack team's mojo spec for disco stein, which fails
due to this bug. I then reran the test with the charms configured to
install from the disco proposed repository; the bug was fixed and the
tests passed.
Log f
I have tested the rocky scenario that was failing for me. Trilio on
Train + OpenStack on Rocky. The Trilio functional test to snapshot a
server failed without the fix and passed once python3-oslo.messaging
8.1.0-0ubuntu1~cloud2.2 was installed and services restarted
** Tags removed: verification-r
I don't think this is a charm issue. It looks like an incompatibility
between ceph-iscsi and python3-werkzeug in groovy.
# /usr/bin/rbd-target-api
* Serving Flask app "rbd-target-api" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production
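(A quick way to check the werkzeug side of this; package and module names
as shipped in Ubuntu:)
dpkg -l python3-werkzeug
python3 -c 'import werkzeug; print(werkzeug.__version__)'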
I have tested focal and groovy and it is only happening on groovy. I have
not tried Hirsute.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1904199
Title:
[groovy-victoria] "gwcli /iscsi-targets/ create
** Also affects: ovn-octavia-provider (Ubuntu)
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1896603
Title:
ovn-octavia-provider: Cannot create listener d
** Description changed:
- Kuryr-Kubernetes tests running with ovn-octavia-provider started to fail
- with "Provider 'ovn' does not support a requested option: OVN provider
- does not support allowed_cidrs option" showing up in the o-api logs.
+ [Impact]
- We've tracked that to check [1] getting
https://code.launchpad.net/~gnuoy/ubuntu/+source/ovn-octavia-provider/+git/ovn-octavia-provider/+merge/397023
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1896603
Title:
ovn-octavia-provider: Cann
Public bug reported:
Package seems to be missing a dependency on sqlalchemy-utils *1. The
issue shows itself when running masakari-manage with the new 'taskflow'
section enabled *2
*1
https://opendev.org/openstack/masakari/src/branch/stable/ussuri/requirements.txt#L29
*2 https://review.opendev.o
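(A quick check that shows the missing dependency, assuming the import is
what masakari-manage trips over:)
python3 -c 'import sqlalchemy_utils' || echo 'sqlalchemy-utils not installed'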
It seems sqlalchemy-utils may have been removed recently in error
https://git.launchpad.net/ubuntu/+source/masakari/tree/debian/changelog?id=4d933765965f3d02cd68c696cc69cf53b7c6390d#n3
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
ht
Yep, that's the traceback I'm seeing.
Charm shows:
2020-06-10 12:45:57 ERROR juju-log amqp:40: Hook error:
Traceback (most recent call last):
File
"/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms/reactive/__init__.py",
line 74, in main
bus.dispatch(restricted=r
** Also affects: glance (Ubuntu)
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1873741
Title:
Using ceph as a backing store fails on ussuri
To manage not
The source option was not set properly for the ceph application, leading
to the python rbd lib being way ahead of the ceph cluster.
** Changed in: charm-glance
Assignee: Liam Young (gnuoy) => (unassigned)
** Changed in: charm-glance
Status: New => Invalid
** Changed in:
Public bug reported:
Zaza testing of masakari on focal failed because the test checks
that all pacemaker nodes are online. This check failed due to the
appearance of a new node called 'node1' which was marked as offline. I
don't know where that node came from or what it is supposed to represent
bu
Having looked into it further, it seems to be the name of the node that
has changed.
juju deploy cs:bionic/ubuntu bionic-ubuntu
juju deploy cs:focal/ubuntu focal-ubuntu
juju run --unit bionic-ubuntu/0 "sudo apt install --yes crmsh pacemaker"
juju run --unit focal-ubuntu/0 "sudo apt install --yes c
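(To compare the node names each series registers, something like the
following; crmsh is installed above, though the output format may differ
between releases:)
juju run --unit bionic-ubuntu/0 "sudo crm status"
juju run --unit focal-ubuntu/0 "sudo crm status"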
I don't think this is related to the charm, it looks like a bug in
upstream nova.
** Also affects: nova (Ubuntu)
Importance: Undecided
Status: New
** No longer affects: nova (Ubuntu)
** Also affects: nova
Importance: Undecided
Status: New
--
You received this bug notificati
** Description changed:
Description:
While testing python3 with Fedora in [1], I found an issue while
running nova-api behind wsgi. It fails with the traceback below:
2018-12-18 07:41:55.364 26870 INFO nova.api.openstack.requestlog
[req-e1af4808-ecd8-47c7-9568-a5dd9691c2c9 - - - - -]
Hi koalinux, please can you provide the requested logs or remove the
field-critical tag?
** Changed in: cloud-archive
Status: New => Incomplete
** Changed in: ceph (Ubuntu)
Status: New => Incomplete
** Changed in: libvirt (Ubuntu)
Status: New => Incomplete
--
You r
I think this is a packaging bug.
** Also affects: designate (Ubuntu)
Importance: Undecided
Status: New
** Changed in: charm-designate
Status: Triaged => Invalid
** Changed in: charm-designate
Assignee: Liam Young (gnuoy) => (unassigned)
--
You received th
Given the above, I am going to mark this as affecting the dpdk package
rather than the charm.
** Also affects: dpdk (Ubuntu)
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.laun
At some point when I was attempting to simplify the test case I
dropped setting the mtu on the dpdk devices via ovs, so the above test is
invalid. I've marked the bug against dpdk as invalid while I redo the
tests.
** Changed in: dpdk (Ubuntu)
Status: New => Invalid
--
You received this b
Ubuntu: eoan
DPDK pkg: 18.11.1-3
OVS DPDK pkg: 2.11.0-0ubuntu2
Kernel: 5.0.0-20-generic
If two servers each have an ovs bridge with a dpdk device for external
network access and a network namespace attached then communication
between taps in the namespaces fails if jumbo frames are enabled. If on
Ubuntu: eoan
DPDK pkg: 18.11.1-3
OVS DPDK pkg: 2.11.0-0ubuntu2
Kernel: 5.0.0-20-generic
If a server has an ovs bridge with a dpdk device for external
network access and a network namespace attached then sending data out of
the namespace fails if jumbo frames are enabled.
Setup:
root@node-licetu
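(For context, setting the mtu on a dpdk device via ovs, as referenced in
the comment above, looks roughly like this; port and namespace names are
illustrative:)
# enable jumbo frames on the dpdk port and the tap inside the namespace:
ovs-vsctl set Interface dpdk0 mtu_request=9000
ip netns exec ns1 ip link set dev tap1 mtu 9000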
** Changed in: dpdk (Ubuntu)
Status: Invalid => New
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1833713
Title:
Metadata is broken with dpdk bonding, jumbo frames and metadata from
qdhcp
Hi Christian,
Thanks for your comments. I'm sure you spotted it but just to make it
clear, the issue occurs with bonded and unbonded dpdk interfaces. I've emailed
upstream here *1.
Thanks
Liam
*1 https://mail.openvswitch.org/pipermail/ovs-discuss/2019-July/048997.html
--
You received thi
** Changed in: charm-openstack-dashboard
Assignee: (unassigned) => Liam Young (gnuoy)
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1832075
Title:
[19.04][Queens -> Rocky] python3-pymy
stack-dashboard (3:15.0.0-0ubuntu1~cloud0) ...", thanks.
** Changed in: charm-openstack-dashboard
Assignee: Liam Young (gnuoy) => (unassigned)
** Changed in: charm-openstack-dashboard
Status: New => Incomplete
--
You received this bug notification because you are a membe
Marking the charm bug as invalid in light of the packaging fix.
** Changed in: charm-openstack-dashboard
Status: In Progress => Invalid
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1754508
Title:
** Changed in: charm-cinder
Assignee: (unassigned) => Liam Young (gnuoy)
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1720215
Title:
[artful] apachectl: Address already in use: AH00
Public bug reported:
On both xenial and zesty apache and haproxy seem to be able to bind to
the same port:
# netstat -peanut | grep 8776
tcp    0   0 0.0.0.0:8776   0.0.0.0:*   LISTEN   0   76856   26190/haproxy
tcp6   0   0 :::8776
Ok, so this has been broken in the charm for a while. The package-shipped
vhost should be disabled by the charm but, due to a bug, that is not
happening.
However xenial and zesty both seem to allow apache to start when it has
a conflicting port with haproxy. If haproxy is running and bound to 8776
o
Thanks for the suggestions. I will try with an upstream kernel and also
add steps for reproducing
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1720378
Title:
Two processes can bind to the same port
Thanks jsalisbury. I'll try on another kernel now.
The steps to reproduce on xenial:
sudo su -
apt install --yes apache2 haproxy
echo "
listen test
bind *:8776
bind :::8776
" > /etc/haproxy/haproxy.cfg
echo "
Listen 8776
DocumentRoot /var/www/html
" > /etc/apache2/sites-enabl
I've retested with linux-image-4.4.9-040409-generic_4.4.9-040409.201605041832_amd64.deb
but the issue seems to persist.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1720378
Title:
Two processes ca
kernel-bug-exists-upstream
** Changed in: linux (Ubuntu)
Status: Incomplete => Confirmed
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1720378
Title:
Two processes can bind to the same port
I'm going to mark this as invalid against nova-compute as nova-compute
does not have a relation with percona anymore (Icehouse+ I believe).
** Changed in: charm-nova-compute
Status: Triaged => Invalid
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is
Patching /etc/apparmor.d/abstractions/libvirt-qemu with
=== modified file 'libvirt-qemu'
--- libvirt-qemu	2014-05-23 14:09:17 +
+++ libvirt-qemu	2014-05-23 14:10:27 +
@@ -25,6 +25,7 @@
/dev/kvm rw,
/dev/ptmx rw,
/dev/kqemu rw,
+ /dev/vhost-net rw,
@{PROC}/*/stat
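(After patching the abstraction, the profiles that include it need
re-parsing before new guests pick the change up; on releases of that era,
assuming the stock init script handles it:)
service apparmor reload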
** Changed in: libvirt (Ubuntu)
Assignee: (unassigned) => Liam Young (gnuoy)
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1322568
Title:
nova interface-attach fails
To manage notificati
** Also affects: murano (Ubuntu)
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1598208
Title:
murano uses deprecated psutil.NUM_CPUS
To manage notificati
** Changed in: ntp (Ubuntu)
Assignee: Liam Young (gnuoy) => (unassigned)
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1414925
Title:
ntpd seems to add offsets instead of subtracting them
** Changed in: hacluster (Juju Charms Collection)
Importance: Undecided => Medium
** Changed in: hacluster (Juju Charms Collection)
Assignee: (unassigned) => Liam Young (gnuoy)
** Changed in: openstack-dashboard (Juju Charms Collection)
Importance: Undecided => Medium
** Changed in:
** Changed in: neutron-gateway (Ubuntu)
Status: New => In Progress
** Changed in: neutron-gateway (Ubuntu)
Assignee: (unassigned) => Liam Young (gnuoy)
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
Public bug reported:
On Xenial, systemd is not reporting the state of libvirtd properly or
shutting down on request.
# pgrep libvirtd
# systemctl start libvirt-bin.service
# systemctl status libvirt-bin.service
● libvirt-bin.service - Virtualization daemon
Loaded: loaded (/lib/systemd/system/l
** Description changed:
- On Xenial systemd is not reporting the state of libvirtd properly or
+ On Xenial, systemd is not reporting the state of libvirtd properly or
shutting down on request.
# pgrep libvirtd
# systemctl start libvirt-bin.service
# systemctl status libvirt-bin.service
Yes, removing '-d' fixed it, thank you
** Changed in: libvirt (Ubuntu)
Status: New => Invalid
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1567272
Title:
systemd claims libvirt-bin is dead
** Changed in: keystone (Juju Charms Collection)
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1578351
Title:
mitaka ksclient fails to connect to v6 keys
** Changed in: neutron-openvswitch (Juju Charms Collection)
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1546565
Title:
Ownership/Permissions of vhost_u
** Changed in: ceph-osd (Juju Charms Collection)
Milestone: 16.07 => 16.10
** Changed in: ceph (Juju Charms Collection)
Milestone: 16.07 => 16.10
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/
Public bug reported:
After the package is installed, some of the files that support the
initialisation of the database seem to be missing, as does the policy.json.
The files that are missing:
/usr/lib/python2.7/dist-packages/mistral/actions/openstack/mapping.json
/usr/lib/python2.7/dist-packages/mis
Public bug reported:
When starting the trove guest agent on xenial it fails with:
2016-10-27 09:31:38.674 1366 CRITICAL root [-] NameError: global name '_LE' is
not defined
2016-10-27 09:31:38.674 1366 ERROR root Traceback (most recent call last):
2016-10-27 09:31:38.674 1366 ERROR root File "
I think this is fixed in yakkety
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1637138
Title:
The trove-guest agent service does not start on xenial
To manage notifications about this bug go to:
ht
Charms Collection)
Importance: Undecided => Medium
** Changed in: cinder (Juju Charms Collection)
Assignee: (unassigned) => Liam Young (gnuoy)
** Changed in: cinder (Juju Charms Collection)
Milestone: None => 16.04
** Also affects: ceph-radosgw (Juju Charms Collection)
Import
d
Status: New
** Changed in: rabbitmq-server (Juju Charms Collection)
Status: New => In Progress
** Changed in: rabbitmq-server (Juju Charms Collection)
Assignee: (unassigned) => Liam Young (gnuoy)
** Changed in: rabbitmq-server (Juju Charms Collection)
Importance
** No longer affects: hacluster (Juju Charms Collection)
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1488453
Title:
Package postinst always fail on first install when using systemd
To manage noti
** Changed in: cinder (Juju Charms Collection)
Status: In Progress => Fix Committed
** Changed in: neutron-gateway (Ubuntu)
Status: In Progress => Fix Committed
** Changed in: rabbitmq-server (Juju Charms Collection)
Status: In Progress => Fix Committed
--
You received this
** Also affects: python-oslo.messaging (Ubuntu)
Importance: Undecided
Status: New
** Changed in: oslo.messaging
Status: Confirmed => Invalid
** Changed in: nova
Status: New => Invalid
** Changed in: python-oslo.messaging (Ubuntu)
Status: New => Confirmed
--
You r
This is not a charm bug. It looks like an upstart script issue:
# service radosgw status
/usr/bin/radosgw is not running.
# service radosgw start
Starting client.radosgw.gateway...
/usr/bin/radosgw is running.
# service radosgw status
/usr/bin/radosgw is running.
# service radosgw restart
Starting
** Summary changed:
- ceph-radosgw died during deployment
+ ceph-radosgw restart fails
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1477225
Title:
ceph-radosgw restart fails
To manage notificatio
** Package changed: ceph-radosgw (Ubuntu) => ceph (Ubuntu)
** Changed in: ceph (Ubuntu)
Status: New => Confirmed
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1477225
Title:
ceph-radosgw res
** Description changed:
+ [Impact]
+
+ On 14.04 the restart target of the sysvinit script brings the service down
+ but almost always fails to bring the service back up again.
+
+ The proposed fix updates /etc/init.d/radosgw so that the stop target
+ waits for up to 30 seconds for the servic
** Description changed:
[Impact]
On 14.04 the restart target of the sysvinit script brings the service down
- but almost always fails to bring the service back up again.
+ but sometimes fails to bring the service back up again. There is a race
between stop and start and in the failure case
** Description changed:
[Impact]
On 14.04 the restart target of the sysvinit script brings the service down
but sometimes fails to bring the service back up again. There is a race
between stop and start and in the failure case the attempt to bring the service
up runs before the service
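(A sketch of the stop-then-wait behaviour described, not the actual patch;
the 30 second figure comes from the description above:)
service radosgw stop
# wait up to 30 seconds for the daemon to actually exit before starting:
for i in $(seq 1 30); do
    pgrep -f /usr/bin/radosgw > /dev/null || break
    sleep 1
done
service radosgw start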
Public bug reported:
I have a client setup with OS_CACERT set. All endpoints registered in
keystone are https. I can query neutron, glance, cinder and keystone but
the second and subsequent nova image-list always fails. I can 'fix' it
by restarting nova-api-os-compute and one image-list will wor
I do not see this behaviour on Icehouse, Kilo or Liberty
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1508428
Title:
nova image-list failing with SSL enabled on Juno
To manage notifications about
This is affecting precise/icehouse deployments as they have the old
version of python-amqp.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1472712
Title:
[SRU] Using SSL with rabbitmq prevents communi
** Description changed:
+ Upstream Bug: http://tracker.ceph.com/issues/11140
+
[Impact]
On 14.04 the restart target of the sysvinit script brings the service down
but sometimes fails to bring the service back up again. There is a race
between stop and start and in the failure case the a
** Branch unlinked: lp:~gnuoy/ubuntu/trusty/ceph/1477225
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1477225
Title:
ceph-radosgw restart fails
To manage notifications about this bug go to:
https:
** Branch linked: lp:~gnuoy/ubuntu/trusty/ceph/1477225
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1477225
Title:
ceph-radosgw restart fails
To manage notifications about this bug go to:
https://
The fix went into 2015.1.0, and 2015.1.1 is now in the cloud archive.
** Changed in: nova (Ubuntu)
Status: New => Fix Released
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1327218
Title:
Vol
This was fixed in 1.13.1~rc1-0ubuntu1. See changelog:
[ James Page ]
...
* d/container-server.conf: Add missing container-sync section
(LP: #1290813).
** Changed in: swift (Ubuntu)
Status: Triaged => Fix Released
--
You received this bug notification because you are a member of
I have been unable to reproduce this; that may be because the issue has
been fixed in 2015.1.1. Please see if the latest keystone package
resolves this issue for you.
** Changed in: keystone (Ubuntu)
Status: New => Incomplete
--
You received this bug notification because you are a member
ter (Juju Charms Collection)
Assignee: (unassigned) => Liam Young (gnuoy)
** Summary changed:
- pxc server 5.6 on Vivid and Wily does not create /var/lib/mysql
+ pxc cluster charm on Vivid and Wily point to old mysql datadir /var/lib/mysql
--
You received this bug notification becau
On a system that uses the charm to install percona-cluster on vivid, the
config is pointing at the wrong data dir.
charm install on vivid:
/usr/sbin/mysqld --print-defaults | grep -Eoh 'datadir.*' | awk '{print $1}'
datadir=/var/lib/mysql
pkg install without charm on vivid:
/usr/sbin/mysqld --pr
Public bug reported:
Upgrading openstack-dashboard fails.
2015-10-16 10:34:53 INFO config-changed The following packages have unmet
dependencies:
2015-10-16 10:34:53 INFO config-changed openstack-dashboard-ubuntu-theme :
Depends: openstack-dashboard (= 1:2015.1.1-0ubuntu1~cloud2) but
2:8.0.0
Public bug reported:
There appears to be a bug in dbapps-lib when configuring mysql on a
remote machine. The offending line is:
echo $sql | mysql ${dbc_dbserver:+-h $dbc_dbserver}
${dbc_dbserver:--h localhost} ${dbc_dbport:+--port $dbc_dbport} -u
$dbc_dbuser -p$dbc_dbpass $dbc_dbname
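(Both parameter expansions fire when dbc_dbserver is set, so the bare
server name ends up as a stray extra argument; a minimal demonstration,
with an illustrative hostname:)
dbc_dbserver=db1.example.com
echo ${dbc_dbserver:+-h $dbc_dbserver} ${dbc_dbserver:--h localhost}
# prints: -h db1.example.com db1.example.com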
Public bug reported:
If barbican is configured to use a mysql database then the barbican-api
server fails to start with:
2015-12-16 08:07:07.273 20728 CRITICAL barbican [-] BarbicanException: Error
configuring registry database with supplied sql_connection. Got error: No
module named pymysql
20
Public bug reported:
The upstart script in 1:1.0.0-0ubuntu1~cloud0 for barbican-api fails to
start the service.
To reproduce:
# service barbican-api stop
stop: Unknown instance:
# > /var/log/upstart/barbican-api.log
# service barbican-api start
barbican-api start/running, process 27261
# ps
Public bug reported:
The upstart script in 1:1.0.0-0ubuntu1~cloud0 for barbican-api fails to
start the service.
To reproduce:
# /etc/init.d/barbican-api start
# ps -ef | grep uwsgi
root 31
Public bug reported:
Package version 1:1.0.0-0ubuntu1~cloud0
To reproduce:
Start barbican-api manually due to Bug #1526648
# start-stop-daemon --start --chdir /var/lib/barbican --chuid
barbican:barbican --make-pidfile --pidfile /var/run/barbican/barbican-
api.pid --exec /usr/bin/uwsgi -- --mas
Public bug reported:
Package version 1:1.0.0-0ubuntu1~cloud0
Attempting to store a secret results in:
{address space usage: 184709120 bytes/176MB} {rss usage: 60383232 bytes/57MB}
[pid: 13191|app: 0|req: 2/2] 10.5.17.29 () {26 vars in 308 bytes} [Wed Dec 16
08:02:14 2015] GET / => generated 34
*** This bug is a duplicate of bug 230168 ***
https://bugs.launchpad.net/bugs/230168
** Branch unlinked: lp:~gnuoy/charms/trusty/nova-cloud-
controller/stable-230495
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.lau
Hi Thomas,
I've tried to recreate this bug by having ntp correct a clock that is fast
and also one that is slow and ntp seems to be working fine for me. I'll attach
the output of the commands I tested with. Could you include the exact steps
you went through, please?
Thanks
Liam
** Attac