I am also seeing this problem in 24.10. `indicator-applet` was
installed. I added `indicator-application` -- no joy. This is a clean
install of Ubuntu Unity 24.04, upgraded.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.
Bug breaks Panwriter on 24.04 and 24.04.1.
Panwriter is _only_ distributed as an AppImage, so there's no alternative
format.
Running the AppImage with `--no-sandbox` does not help.
What got it working for me is this:
`sudo sysctl -w kernel.apparmor_restrict_unprivileged_userns=0`
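The `sysctl -w` workaround above only lasts until reboot. Assuming the standard sysctl.d drop-in mechanism, the same setting can be persisted in a file (the filename below is arbitrary, chosen for illustration):

```
# /etc/sysctl.d/60-apparmor-userns.conf  (hypothetical filename)
# Re-applies the workaround at every boot; `sudo sysctl --system` loads it immediately.
kernel.apparmor_restrict_unprivileged_userns=0
```

Note that disabling this restriction relaxes an AppArmor hardening measure, so this is a workaround rather than a fix.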
Attached :)
** Attachment added: "snap-debug-info.log"
https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/2075580/+attachment/5805295/+files/snap-debug-info.log
Hi Alex, I've opened 2076575 and ran `apport-collect` on a machine that
is having the same issues as the original reporter - I've marked my bug
as a duplicate of this one.
*** This bug is a duplicate of bug 2075580 ***
https://bugs.launchpad.net/bugs/2075580
** Attachment added: "ProcCpuinfoMinimal.txt"
https://bugs.launchpad.net/bugs/2076575/+attachment/5805090/+files/ProcCpuinfoMinimal.txt
** This bug has been marked a duplicate of bug 2075580
AppArmo
*** This bug is a duplicate of bug 2075580 ***
https://bugs.launchpad.net/bugs/2075580
** Attachment added: "Dependencies.txt"
https://bugs.launchpad.net/bugs/2076575/+attachment/5805089/+files/Dependencies.txt
*** This bug is a duplicate of bug 2075580 ***
https://bugs.launchpad.net/bugs/2075580
Public bug reported:
Aug 04 22:48:01 redacted systemd[1]: Starting Load AppArmor profiles managed
internally by snapd...
Aug 04 22:48:01 redacted snapd-apparmor[490231]: main.go:124: Loading profiles
[/va
> Similar to @lproven, I have over half a dozen thinkpads (T/W520/530,
W540/541), all of which depend on 390, and for the 6.x kernels, nouveau
isn't working.
For my main "production" machine, a Core i7 T420, I have reluctantly
moved off Ubuntu and I am now using MX Linux on that machine instead. M
You have a typo in the description:
> the massive Y2028 time_t transition
I think you mean Y20*3*8.
Not a big deal, but just for clarity you should probably fix that.
Further info may be relevant:
https://www.reddit.com/r/UbuntuUnity/comments/1axuqy5/2310_cant_open_keyboard_settings/
Also affects right shift key, but left still works. I had to use cut and
paste even to log in to register this affects me too.
No Ctrl keys, only 1 shift, can't open keyboard settings.
Goes away on reboot but soon recurs.
+1 makes sense. Thanks for doing this validation @chris.macnaughton
I have the same problem in Windows Subsystem for Linux, Ubuntu 20.04
I have a CIFS share containing 24 DFS folders.
Opening any subfolder in the share causes an instant kernel panic.
I do not have this problem on embedded hardware reading from the same share
running the xilinx 4.6.0 kernel and 16
I've filed https://bugs.launchpad.net/charm-mysql-router/+bug/1973177 to
track this separately
One of the causes of a charm going into a "Failed to connect to MySQL"
state is that a connection to the database failed when the db-router
charm attempted to restart the db-router service. Currently the charm
will only retry the connection in response to one return code from the
mysql. The return
Public bug reported:
[Impact]
* ceph-iscsi on Focal talking to a Pacific or later Ceph cluster
* rbd-target-api service fails to start if there is a blocklist
entry for the unit.
* When the rbd-target-api service starts it checks if any of the
ip addresses on the machine it is running o
*** This bug is a duplicate of bug 1883112 ***
https://bugs.launchpad.net/bugs/1883112
** This bug has been marked a duplicate of bug 1883112
rbd-target-api crashes with python TypeError
Public bug reported:
While testing using openstack, guests failed to launch and these denied
messages were logged:
[ 8307.089627] audit: type=1400 audit(1649684291.592:109):
apparmor="DENIED" operation="mknod" profile="swtpm"
name="/run/libvirt/qemu/swtpm/11-instance-000b-swtpm.sock"
pid=1412
** Patch added: "ceph-iscsi-deb.diff"
https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1965280/+attachment/5569987/+files/ceph-iscsi-deb.diff
Verification on impish failed due to
https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1965280
Public bug reported:
The rbd-target-api fails to start on Ubuntu Impish (21.10) and later.
This appears to be caused by a werkzeug package revision check in
rbd-target-api. The check is used to decide whether to add an
OpenSSL.SSL.Context or a ssl.SSLContext. The code comment suggests that
ssl.SS
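For illustration only (this is a sketch, not the actual rbd-target-api code), a version check of the shape described above parses the installed werkzeug version and routes newer releases to the stdlib `ssl.SSLContext` path; the cutover version used here is assumed:

```python
def choose_ssl_context(werkzeug_version: str) -> str:
    """Sketch of a werkzeug revision check: return which SSL context
    type to construct. The (0, 15) cutover is assumed for illustration,
    not taken from the real package."""
    # Compare only (major, minor), ignoring any patch component.
    parts = tuple(int(p) for p in werkzeug_version.split(".")[:2])
    if parts >= (0, 15):
        return "ssl.SSLContext"       # stdlib context for newer werkzeug
    return "OpenSSL.SSL.Context"      # pyOpenSSL context for older werkzeug

print(choose_ssl_context("2.0.1"))    # modern werkzeug -> stdlib context
```

The failure mode in the bug is exactly this kind of check going stale: once werkzeug moves past the versions the check was written against, the wrong context type gets selected.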
Tested successfully on focal with 3.4-0ubuntu2.1
Tested with ceph-iscsi charms functional tests which were previously
failing.
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.4 LTS
Release:        20.04
Codename: focal
$ apt-cache policy c
** Patch added: "gw-deb.diff"
https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1883112/+attachment/5569162/+files/gw-deb.diff
Thank you for the update Robie. I proposed the deb diff based on the fix
that had landed upstream because I (wrongly) thought that was what the
SRU policy required. I think it makes more sense to go for the minimal
fix you suggest.
** Description changed:
+ [Impact]
+
+ * rbd-target-api service fails to start if there is a blocklist
+entry for the unit making the service unavailable.
+
+ * When the rbd-target-api service starts it checks if any of the
+ip addresses on the machine it is running on are listed as
+
** Patch added: "deb.diff"
https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1883112/+attachment/5562748/+files/deb.diff
** Changed in: ceph-iscsi (Ubuntu)
Status: New => Confirmed
s/The issue appears when using the mysql to/The issue appears when using
the mysql shell to/
I don't think this is a charm bug. The issue appears when using the
mysql shell to remove a node from the cluster. From what I can see you
cannot persist group_replication_force_members, and it is correctly
unset. So the error being reported seems wrong
https://pastebin.ubuntu.com/p/sx6ZB3rs6r/
root@juju-1
** Also affects: mysql-8.0 (Ubuntu)
Importance: Undecided
Status: New
** Changed in: charm-mysql-innodb-cluster
Status: New => Invalid
Perhaps I'm missing something but this does not seem to be a bug in the
rabbitmq-server charm. It may be easier to observe there but the root
cause is elsewhere.
** Changed in: charm-rabbitmq-server
Status: New => Invalid
Tested successfully on focal victoria using 1:11.0.0-0ubuntu1~cloud1. I
created an encrypted volume and attached it to a VM.
cinder type-create LUKS
cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512
--control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
c
Tested successfully on focal wallaby using 2:12.0.0-0ubuntu2~cloud0. I
created an encrypted volume and attached it to a VM.
cinder type-create LUKS
cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512
--control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
c
Tested successfully on hirsute using 2:12.0.0-0ubuntu2. I created an
encrypted volume and attached it to a VM.
cinder type-create LUKS
cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512
--control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
cinder create
Just to add some info on the guest agent here:
* the guest agent does not set up the primary interface
* there should be no race between the guest agent and cloud-init for the primary interface
* the guest agent does not start any dhclient process for the primary interface, and should not care if any dhclient pro
** Changed in: charm-layer-ovn
Status: New => Confirmed
** Changed in: charm-layer-ovn
Importance: Undecided => High
** Changed in: charm-layer-ovn
Assignee: (unassigned) => Liam Young (gnuoy)
** Changed in: charm-neutron-gateway
Assignee: (unassigned) => Liam Young (gnuoy)
** Changed in: charm-neutron-gateway
Importance: Undecided => High
** Changed in: charm-neutron-gateway
Status: Invalid => Confirmed
** Changed in: neutron (Ubuntu)
Status: Confirmed => Invalid
A patch was introduced [0] "..which sets the backup gateway
device link down by default. When the VRRP sets the master state in
one host, the L3 agent state change procedure will
do link up action for the gateway device.".
This change causes an issue when using keepalived 2.X (focal+) which
is fix
** Also affects: neutron (Ubuntu)
Importance: Undecided
Status: New
** Changed in: neutron (Ubuntu)
Status: New => Invalid
I had the same issue with 20.04 on a Thinkpad X220.
I managed to resolve it by installing the HWE kernel, adding a dedicated
swap partition on another drive, purging ZRAM, and rebuilding my
`initrd`.
@jeremie2
Ah, fair enough. Mostly I use Ventoy these days, and once the USB key is
formatted with Ventoy, you just copy .ISO files onto it and they
automagically appear in the Ventoy boot menu. So no need for Balena
Etcher etc. any more. Ventoy itself is bootable on BIOS and UEFI PCs and
on Intel
In reply to @jeremie2 in comment #24:
I don't think this is a general description of the problem, because for
me, my USB boot keys don't have separate EFI boot partitions.
*** This bug is a duplicate of bug 1893964 ***
https://bugs.launchpad.net/bugs/1893964
** This bug has been marked a duplicate of bug 1893964
Installation of Ubuntu Groovy with manual partitioning without an EFI System
Partition fails on 'grub-install /dev/sda' even on non-UEFI systems
I have tested the rocky scenario that was failing for me. Trilio on
Train + OpenStack on Rocky. The Trilio functional test to snapshot a
server failed without the fix and passed once python3-oslo.messaging
8.1.0-0ubuntu1~cloud2.2 was installed and services restarted
** Tags removed: verification-r
Public bug reported:
It seems that updating the role attribute of a connection has no effect
on existing connections. For example, when investigating another bug I
needed to disable RBAC, but to get that to take effect I needed to either
restart the southbound listener or the ovn-controller.
fwiw t
Public bug reported:
When using Openstack Ussuri with OVN 20.03 and adding a floating IP
address to a port the ovn-controller on the hypervisor repeatedly
reports:
2021-03-02T10:33:35.517Z|35359|ovsdb_idl|WARN|transaction error:
{"details":"RBAC rules for client
\"juju-eab186-zaza-d26c8c079cc7-
I have tested the package in victoria proposed (0.3.0-0ubuntu2) and it
passed. I verified it by deploying the octavia charm and running its
focal victoria functional tests which create an ovn loadbalancer and
check it is functional.
The log of the test run is here:
https://openstack-ci-
reports.u
I have tested the package in groovy proposed (0.3.0-0ubuntu2) and it
passed. I verified it by deploying the octavia charm and running its
groovy victoria functional tests which create an ovn loadbalancer and
check it is functional.
The log of the test run is here:
https://openstack-ci-
reports.ubu
Confirmed and reproduced in Xubuntu 20.10 as well. This issue is _not_
confined to Ubuntu Unity and is also present in an official remix.
Steps taken to try to resolve it:
* updated system BIOS (machine is a Lenovo Thinkpad W500; was on 3.18, now on
3.23, latest) -> no change
* tried 2 different
Public bug reported:
Even on BIOS systems with no UEFI
ProblemType: Bug
DistroRelease: Ubuntu 20.10
Package: ubiquity 20.10.13
ProcVersionSignature: Ubuntu 5.8.0-25.26-generic 5.8.14
Uname: Linux 5.8.0-25-generic x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
ApportVersion: 2.
https://code.launchpad.net/~gnuoy/ubuntu/+source/ovn-octavia-provider/+git/ovn-octavia-provider/+merge/397023
** Description changed:
- Kuryr-Kubernetes tests running with ovn-octavia-provider started to fail
- with "Provider 'ovn' does not support a requested option: OVN provider
- does not support allowed_cidrs option" showing up in the o-api logs.
+ [Impact]
- We've tracked that to check [1] getting
** Also affects: ovn-octavia-provider (Ubuntu)
Importance: Undecided
Status: New
I have tested focal and groovy and it is only happening on groovy. I
have not tried Hirsute.
I don't think this is a charm issue. It looks like an incompatibility
between ceph-iscsi and python3-werkzeug in groovy.
# /usr/bin/rbd-target-api
* Serving Flask app "rbd-target-api" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production
Public bug reported:
Crashed during install.
ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: ubiquity 20.04.15.2
ProcVersionSignature: Ubuntu 5.4.0-42.46-generic 5.4.44
Uname: Linux 5.4.0-42-generic x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
ApportVersion: 2.20.11-0u
I've never heard of the 'empty python3-google-compute-engine
transitional package'; for upstream packaging, we use "Conflicts:
python3-google-compute-engine" and this will cause the top level package
(called google-compute-engine upstream, I think called
gce-compute-image-packages in Ubuntu) to be
Please also apply this change to the google-guest-agent package
** Also affects: google-guest-agent (Ubuntu)
Importance: Undecided
Status: New
*** This bug is a duplicate of bug 1900897 ***
https://bugs.launchpad.net/bugs/1900897
** Also affects: google-guest-agent (Ubuntu)
Importance: Undecided
Status: New
Public bug reported:
Upstream's build parameters:
override_dh_auto_build:
dh_auto_build -O--buildsystem=golang -- -ldflags="-s -w -X
main.version=$(VERSION)-$(RELEASE)" -mod=readonly
- Strip the binary
- Set main.version
** Affects: google-osconfig-agent (Ubuntu)
Importance: Unde
It's a complicated situation, but I'll try to highlight some of the
reasons.
First, there is the complexity of existing files. We will only copy the
file if no file already exists, because it may exist from the previous
Python guest, which automatically generated this file. There are also the
.temp
Systemd provides that functionality itself, internally. We don't want to
use UCF or mark this as a config file. We want to copy the file once on
installation iff it doesn't exist. It is otherwise an 'example' file.
The way that this file is managed has changed as part of this
replacement, and many customers have automatic updates enabled. We chose
not to mark this file as a config file, as we don't want that dialog to
appear. We only ever copy the file into place if it doesn't already
exist, and after that, i
I have looked at this package on a testing image in GCE. The instance
configs file has been shipped differently in this package vs ours - here
you are shipping it as /etc/defaults/instance_configs.cfg, we ship to
/usr/share/google-guest-agent/instance_configs.cfg
There are two problems with this c
Yep, that's the traceback I'm seeing.
Charm shows:
2020-06-10 12:45:57 ERROR juju-log amqp:40: Hook error:
Traceback (most recent call last):
File
"/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms/reactive/__init__.py",
line 74, in main
bus.dispatch(restricted=r
It seems sqlalchemy-utils may have been removed recently in error
https://git.launchpad.net/ubuntu/+source/masakari/tree/debian/changelog?id=4d933765965f3d02cd68c696cc69cf53b7c6390d#n3
Public bug reported:
Package seems to be missing a dependency on sqlalchemy-utils *1. The
issue shows itself when running masakari-manage with the new 'taskflow'
section enabled *2
*1
https://opendev.org/openstack/masakari/src/branch/stable/ussuri/requirements.txt#L29
*2 https://review.opendev.o
Public bug reported:
Opening a bug for this since all other bugs that reported this have been
closed.
On an X11 session, a dead secondary mouse is displayed when the scaling
for a user session has been set to 125% (fractional scaling).
Presumably, the dead cursor is a left-over from the login scr
Having looked into it further, it seems to be the name of the node that
has changed.
juju deploy cs:bionic/ubuntu bionic-ubuntu
juju deploy cs:focal/ubuntu focal-ubuntu
juju run --unit bionic-ubuntu/0 "sudo apt install --yes crmsh pacemaker"
juju run --unit focal-ubuntu/0 "sudo apt install --yes c
Public bug reported:
Testing of masakari on focal with the zaza tests failed because the test
checks that all pacemaker nodes are online. This check failed due to the
appearance of a new node called 'node1' which was marked as offline. I
don't know where that node came from or what it is supposed to represent
bu
The source option was not set properly for the ceph application leading
to the python rbd lib being way ahead of the ceph cluster.
** Changed in: charm-glance
Assignee: Liam Young (gnuoy) => (unassigned)
** Changed in: charm-glance
Status: New => Invalid
** Changed in:
** Also affects: glance (Ubuntu)
Importance: Undecided
Status: New
** Summary changed:
- Handbrake Crash when selecting source after Xubuntu install
+ Handbrake Crash when selecting source after fresh 20.04 install
Program terminated with signal SIGSEGV, Segmentation fault.
I repeated the above with a fresh 20.04 install (GNOME) and got the same
issue.
Public bug reported:
I had an up to date install of Ubuntu 20.04 (as of 1st April), I had
used Handbrake several times successfully.
I then installed Xubuntu core over the top.
Handbrake still opens, but upon selecting the DVD source it crashes
instead of loading/processing.
Description:Ubu
** Summary changed:
- rbd pool name is hardcoded
+ Checks fail when creating an iscsi target
Public bug reported:
See https://docs.ceph.com/docs/master/rbd/iscsi-target-cli/ and the
line:
"If not using RHEL/CentOS or using an upstream or ceph-iscsi-test
kernel, the skipchecks=true argument must be used. This will avoid the
Red Hat kernel and rpm checks:"
** Affects: ceph-iscsi (Ubuntu)
Public bug reported:
ceilometer-collector fails to stop if it cannot connect to message
broker.
To reproduce (assuming amqp is running on localhost):
1) Comment out the 'oslo_messaging_rabbit' section from
/etc/ceilometer/ceilometer.conf. This will trigger ceilometer-collector to look
locally f
Sahid pointed out that swift-init will traverse a search path and
start a daemon for every config file it finds, so no change to the init
script is needed. Initial tests suggest this completely covers my use
case. I will continue testing and report back. I will mark the bug as
invalid for the mo
Hi Sahid,
In our deployment for swift global replication we have two account services.
One for local and one for replication:
# cat /etc/swift/account-server/1.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6002
workers = 1
[pipeline:main]
pipeline = recon account-server
[filter:recon]
use = egg
Hi Cory, the init script update is to support swift global replication.
The upstream code and the proposed changes to the charm support the
feature in mitaka, so ideally the support would go right back to trusty-mitaka.
** Description changed:
- On swift proxy servers there are three groups of services: account,
+ On swift storage servers there are three groups of services: account,
container and object.
Each of these groups is comprised of a number of services, for instance:
server, auditor, replicator
Public bug reported:
On swift proxy servers there are three groups of services: account,
container and object.
Each of these groups is comprised of a number of services, for instance:
server, auditor, replicator etc
Each service has its own init script but all the services in a group are
configu
I can confirm that the disco proposed repository fixes this issue.
I have run the OpenStack team's mojo spec for disco stein, which fails due
to this bug. I then reran the test with the charms configured to install
from the disco proposed repository and the bug was fixed and the tests
passed.
Log f
Hi Christian,
Thanks for your comments. I'm sure you spotted it but just to make it
clear, the issue occurs with bonded and unbonded dpdk interfaces. I've emailed
upstream here *1.
Thanks
Liam
*1 https://mail.openvswitch.org/pipermail/ovs-discuss/2019-July/048997.html
** Changed in: dpdk (Ubuntu)
Status: Invalid => New
Ubuntu: eoan
DPDK pkg: 18.11.1-3
OVS DPDK pkg: 2.11.0-0ubuntu2
Kernel: 5.0.0-20-generic
If a server has an ovs bridge with a dpdk device for external
network access and a network namespace attached then sending data out of
the namespace fails if jumbo frames are enabled.
Setup:
root@node-licetu
Ubuntu: eoan
DPDK pkg: 18.11.1-3
OVS DPDK pkg: 2.11.0-0ubuntu2
Kernel: 5.0.0-20-generic
If two servers each have an ovs bridge with a dpdk device for external
network access and a network namespace attached then communication
between taps in the namespaces fails if jumbo frames are enabled. If on
At some point when I was attempting to simplify the test case I
dropped setting the MTU on the dpdk devices via ovs, so the above test is
invalid. I've marked the bug against dpdk as invalid while I redo the
tests.
** Changed in: dpdk (Ubuntu)
Status: New => Invalid
Given the above I am going to mark this as affecting the dpdk package
rather than the charm.
** Also affects: dpdk (Ubuntu)
Importance: Undecided
Status: New
I think this is a packaging bug
** Also affects: designate (Ubuntu)
Importance: Undecided
Status: New
** Changed in: charm-designate
Status: Triaged => Invalid
** Changed in: charm-designate
Assignee: Liam Young (gnuoy) => (unassigned)
stack-dashboard (3:15.0.0-0ubuntu1~cloud0) ...", thanks.
** Changed in: charm-openstack-dashboard
Assignee: Liam Young (gnuoy) => (unassigned)
** Changed in: charm-openstack-dashboard
Status: New => Incomplete
** Changed in: charm-openstack-dashboard
Assignee: (unassigned) => Liam Young (gnuoy)
The package from rocky-proposed worked for me. Version info below:
python3-glance-store:
Installed: 0.26.1-0ubuntu2.1~cloud0
Candidate: 0.26.1-0ubuntu2.1~cloud0
Version table:
*** 0.26.1-0ubuntu2.1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu
bionic-proposed/rocky
The cosmic package worked for me too. Version info below:
python3-glance-store:
Installed: 0.26.1-0ubuntu2.1
Candidate: 0.26.1-0ubuntu2.1
Version table:
*** 0.26.1-0ubuntu2.1 500
500 http://archive.ubuntu.com/ubuntu cosmic-proposed/universe amd64
Packages
100 /var/lib/dpkg/s
The disco package worked for me too. Version info below:
# apt-cache policy python3-glance-store
python3-glance-store:
Installed: 0.28.0-0ubuntu1.1
Candidate: 0.28.0-0ubuntu1.1
Version table:
*** 0.28.0-0ubuntu1.1 500
500 http://archive.ubuntu.com/ubuntu disco-proposed/main amd64 Pac
Looks good to me. Tested 0.28.0-0ubuntu1.1~cloud0 from
cloud-archive:stein-proposed
$ openstack image create --public --file
/home/ubuntu/images/bionic-server-cloudimg-amd64.img bionic-test
500 Internal Server Error: The server has ei
It does not appear to have been fixed upstream yet as this patch is
still in place at master:
https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/swift/store.py#L1635