charm installation log for yoga.
unit-cinder-0: 04:31:30 INFO unit.cinder/0.juju-log Installing
['apache2', 'cinder-api', 'cinder-common', 'cinder-scheduler',
'cinder-volume', 'gdisk', 'haproxy', 'libapache2-mod-wsgi-py3', 'librbd1',
'lsscsi', 'memcached', 'nfs-common', 'python3-cinder',
'python3
charm build log for yoga
> ___ summary
>
> build: commands succeeded
> congratulations :)
** Attachment added: "charm-build_yoga.log"
https://bugs.launchpad.net/charm-nova-compute/+bug/1939390/+attachment/5813627/+file
charm installation log for zed.
> unit-cinder-0: 02:22:39 INFO unit.cinder/0.juju-log Installing
['apache2', 'cinder-api', 'cinder-common', 'cinder-scheduler',
'cinder-volume', 'gdisk', 'haproxy', 'libapache2-mod-wsgi-py3', 'librbd1',
'lsscsi', 'memcached', 'nfs-common', 'python3-cinder',
'python
charm build log for zed
> ___ summary
>
> build: commands succeeded
> congratulations :)
** Attachment added: "charm-build_zed.log"
https://bugs.launchpad.net/charm-nova-compute/+bug/1939390/+attachment/5813562/+files/
** Changed in: charm-cinder-purestorage
Status: Invalid => Fix Released
** Also affects: charm-cinder
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1939390
Title:
Missing dependency: lsscsi
To manage notifications about this
The test case in the description succeeded for both the GA kernel and
HWE kernel for jammy.
[GA kernel]
ubuntu@rtslib-fb-sru-testing-ga:~$ apt policy python3-rtslib-fb
python3-rtslib-fb:
Installed: 2.1.74-0ubuntu4.1
Candidate: 2.1.74-0ubuntu4.1
Version table:
*** 2.1.74-0ubuntu4.1 500
I personally prefer all Ubuntu-related supported versions to point to:
https://ubuntu.com/about/release-cycle
That would make the refresh process easier, since the product management
team can check and update those in a centralized place.
Maybe it's worth filing an issue against ubuntu.com and/or reach ou
> Do you use wpa_supplicant or iwd on that system?
I'm using wpa_supplicant (the default). And `netplan status` says the
Wi-Fi connection is up, so I'm not sure whether the "No WiFi" line is the
root cause or just a red herring.
● 3: wlp2s0 wifi UP (NetworkManager: NM-94eee488-50b3-42db-8b93-cc8d7dcad210)
MAC
There is no feedback in the UI anywhere. The line was from journalctl
> May 05 21:57:22 t14 geoclue[71430]: Failed to query location: No WiFi
> networks found
and nothing happens.
--
Public bug reported:
I'm aware that the underlying service is going to be retired as covered by:
https://bugs.launchpad.net/ubuntu/+source/gnome-control-center/+bug/2062178
However, the service is still active as of writing but somehow GNOME
desktop env cannot determine the timezone. It's worth n
It's no longer reproducible at least with linux-image-6.8.0-31-generic,
closing.
** Changed in: linux (Ubuntu)
Status: Confirmed => Invalid
--
https://bugs.launchpad.net/bugs/2056387
** Summary changed:
- Fail to suspend/resume for the second time
+ [T14 Gen 3 AMD] Fail to suspend/resume for the second time
--
** Attachment added: "curtin-install-cfg.yaml"
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/2059386/+attachment/5760167/+files/curtin-install-cfg.yaml
--
It's worth noting that those files contain some MAAS token.
** Attachment added: "curtin-install.log"
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/2059386/+attachment/5760166/+files/curtin-install.log
--
Public bug reported:
Installed: 4.5.6-0ubuntu1~22.04.2
When a server is provisioned by MAAS, troubleshooting the installation
process or a configuration issue requires the logs of the curtin
process.
They are usually stored in /root
# ll -h /root/curtin-install*
-r 1 root root 5.4K Mar 28
** Description changed:
- python-rtslib-fb needs to properly handle the new kernel module
- attribute cpus_allowed_list.
+ [ Impact ]
+
+ * getting information about "attached_luns" fails via python3-rtslib-fb
+ when running the HWE kernel on jammy due to the new kernel module
+ attribute cpus_al
Ceph-iSCSI is a bit of a complicated example as a reproducer:
https://docs.ceph.com/en/quincy/rbd/iscsi-overview/
But the simplest reproducer is `targetctl clear` with jammy HWE kernel.
$ sudo targetctl clear
Traceback (most recent call last):
File "/usr/bin/targetctl", line 82, in <module>
main()
File
The workaround is to switch back to the GA kernel (v5.15), but that's far
from ideal for newer generations of servers (less than two years old).
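The failure mode can be sketched in miniature (this is not the actual rtslib-fb code; the attribute names are illustrative): a reader that treats every configfs attribute as known breaks the moment the kernel adds a new one such as cpus_allowed_list, while a tolerant reader skips what it does not recognize.

```python
import os

# Illustrative attribute names -- not the real rtslib-fb whitelist.
KNOWN = {"cmd_time_out", "qfull_time_out"}

def read_attributes(dirpath, strict=False):
    """Read attribute files under a configfs-like directory."""
    attrs = {}
    for name in sorted(os.listdir(dirpath)):
        if name not in KNOWN:
            if strict:
                # A strict reader breaks as soon as the kernel grows a
                # new attribute such as cpus_allowed_list.
                raise KeyError(f"unknown attribute: {name}")
            continue  # tolerant reader: skip attributes it predates
        with open(os.path.join(dirpath, name)) as f:
            attrs[name] = f.read().strip()
    return attrs
```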
--
https://bugs.launchpad.net/bugs/198836
The latest LTS (jammy) is missing this patch, which causes a failure in
LUN operations when the host is running the HWE kernel, v6.5.
python3-rtslib-fb | 2.1.74-0ubuntu4 | jammy | all
python3-rtslib-fb | 2.1.74-0ubuntu5 | mantic | all
python3-rtslib-fb | 2.1.7
It's neither the apt-news nor the esm-cache service that was modified.
It looks like systemd warns about daemon-reload in any case where a
systemd unit file was modified and daemon-reload wasn't called afterwards.
https://bugs.launchpad.net/ubuntu/+source/ubuntu-advantage-tools/+bug/2055239/comme
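A rough sketch of the check behind that warning (the unit name is just an example): any unit whose file changed on disk since the last daemon-reload reports NeedDaemonReload=yes. The query side would be `systemctl show <unit> -p NeedDaemonReload`; parsing it might look like:

```python
# Sketch: parse the output of `systemctl show <unit> -p NeedDaemonReload`,
# which is the condition surfaced as the warning above.
def needs_daemon_reload(show_output):
    """Parse e.g. 'NeedDaemonReload=yes' into a bool."""
    key, _, value = show_output.strip().partition("=")
    if key != "NeedDaemonReload":
        raise ValueError(f"unexpected property: {key}")
    return value == "yes"
```

Running `systemctl daemon-reload` flips the property back to "no", which is why the warning disappears afterwards.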
Random pointers, although I'm not sure those are identical to my issue:
https://www.reddit.com/r/archlinux/comments/199am0a/thinkpad_t14_suspend_broken_in_kernel_670/
https://discussion.fedoraproject.org/t/random-resume-after-suspend-issue-on-thinkpad-t14s-amd-gen3-radeon-680m-ryzen-7/103452
--
Multiple suspends in a row worked without an external monitor connected,
but after connecting it the machine failed to suspend/resume.
** Attachment added: "failed_on_suspend_after_connecting_monitor.log"
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2056387/+attachment/5753543/+files/f
Kernel log when trying suspend/resume twice in a row. The machine froze
while the power LED was still on during the second suspend, and there is
no second "PM: suspend entry (s2idle)" in the kernel log.
** Attachment added: "failed_on_second_suspend.log"
https://bugs.launchpad.net/ubuntu/+sourc
Public bug reported:
I had a similar issue before:
https://bugs.launchpad.net/ubuntu/+source/linux-hwe-5.19/+bug/2007718
However, I haven't seen the issue with later kernels until getting
6.8.0-11.11+1 recently.
* 6.8.0-11 - fails to suspend/resume for the second time although the first
suspend
Hmm, it happened again between those two `apt update` runs. It might be snapd
related.
2024-03-05T10:49:54.513356+09:00 t14 sudo: nobuto : TTY=pts/0 ;
PWD=/home/nobuto ; USER=root ; COMMAND=/usr/bin/apt update
2024-03-05T11:00:47.422897+09:00 t14 sudo: nobuto : TTY=pts/0 ;
PWD=/home/nobuto ; USER
The list of files modified in the last two hours (if I increase the
range to the last 2 days, it lists almost everything).
$ find /etc/systemd /lib/systemd/ -mmin -7200
/etc/systemd/system
/etc/systemd/system/snap-chromium-2768.mount
/etc/systemd/system/snap-hugo-18726.mount
/etc/systemd/system/sn
Just for completeness.
$ sudo apt update
Warning: The unit file, source configuration file or drop-ins of
apt-news.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Warning: The unit file, source configuration file or drop-ins of
esm-cache.service changed on disk. Run 'syst
> @nobotu - was yours really an empty file or did you not copy more than
one?
Are you referring to the `systemctl cat apt-news.service` in the bug
description? If so, my apologies. I pasted only the first line of the
content on purpose, just to confirm the full path of the service. The
file wasn
** Description changed:
I recently started seeing the following warning messages when I run `apt
update`.
$ sudo apt update
Warning: The unit file, source configuration file or drop-ins of
apt-news.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Warning: The u
I tried to minimize the test case, but no luck so far. I will report
back whenever I find something additional.
--
https://bugs.launchpad.net/bugs/2055239
It was puzzling indeed, but now I have a reproduction step.
$ sudo apt update
-> no warning
$ sudo apt upgrade
-> installs something to invoke the rsyslog trigger.
Processing triggers for rsyslog (8.2312.0-3ubuntu3) ...
Warning: The unit file, source configuration file or drop-ins of
rsyslog.
Public bug reported:
I recently started seeing the following warning messages when I run `apt
update`.
$ sudo apt update
Warning: The unit file, source configuration file or drop-ins of
apt-news.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Warning: The unit file, sourc
To accommodate the upstream change, we need to backport it down to
Victoria.
os-brick (master=)$ git branch -r --contains
fc6ca22bdb955137d97cb9bcfc84104426e53842
origin/HEAD -> origin/master
origin/master
origin/stable/victoria
origin/stable/wallaby
origin/stable/xena
origin/stable/yoga
Thank you Stefan for the prompt response. I'm marking this as Invalid
for the time being assuming the value was intended.
** Changed in: linux-kvm (Ubuntu)
Status: New => Invalid
--
Public bug reported:
The -kvm flavor has CONFIG_NR_CPUS=64 although -generic has
CONFIG_NR_CPUS=8192 these days.
This will be a problem especially when launching a VM on top of a
hypervisor with more than 64 CPU threads available. Then the guest can
only use up to 64 vCPUs even when more vCPUs are allo
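A quick way to confirm the ceiling on a given machine is to read CONFIG_NR_CPUS out of the running kernel's config, e.g. /boot/config-$(uname -r). A minimal parsing sketch (paths and values illustrative):

```python
# Sketch: extract CONFIG_NR_CPUS from kernel config text, e.g. the
# contents of /boot/config-$(uname -r), to confirm a flavor's vCPU ceiling.
def nr_cpus(config_text):
    for line in config_text.splitlines():
        if line.startswith("CONFIG_NR_CPUS="):
            return int(line.split("=", 1)[1])
    return None  # option not present in this config
```

Usage would be something like `nr_cpus(open(f"/boot/config-{os.uname().release}").read())`.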
In this specific case (the environment Olivier described), we tested
focal-xena and the issue was NOT reproducible. We've decided to go with
Xena so field-high can be dropped (I'm not able to remove the
subscription by myself here).
Assuming that it might be focal-wallaby specific since we haven't
** Project changed: networking-ovn => ovn (Ubuntu)
--
https://bugs.launchpad.net/bugs/1963698
Title:
ovn-controller on Wallaby creates high CPU usage after moving port
Tested and verified with cloud-archive:ussuri-proposed.
apt-cache policy cinder-common
cinder-common:
Installed: 2:16.4.2-0ubuntu2~cloud0
Candidate: 2:16.4.2-0ubuntu2~cloud0
Version table:
*** 2:16.4.2-0ubuntu2~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu
bionic-
Tested and verified with cloud-archive:victoria-proposed.
apt-cache policy cinder-common
cinder-common:
Installed: 2:17.2.0-0ubuntu1~cloud1
Candidate: 2:17.2.0-0ubuntu1~cloud1
Version table:
*** 2:17.2.0-0ubuntu1~cloud1 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu
focal
Tested and verified with focal-proposed.
apt-cache policy cinder-common
cinder-common:
Installed: 2:16.4.2-0ubuntu2
Candidate: 2:16.4.2-0ubuntu2
Version table:
*** 2:16.4.2-0ubuntu2 500
500 http://archive.ubuntu.com/ubuntu focal-proposed/main amd64 Packages
100 /var/lib/dpkg
There is a separate bug for `lsscsi` since it's pertinent to iSCSI use cases:
https://bugs.launchpad.net/ubuntu/+source/python-os-brick/+bug/1939390
--
Okay, I've added a comment there:
https://bugs.launchpad.net/ubuntu/+source/python-os-brick/+bug/1947063
--
Upstream refreshed the list of dependencies by adding more commands,
etc. The "nvme" command from the nvme-cli package is one of them.
This is a warning in the NVMe-oF code path, but it's emitted regardless
of whether NVMe-oF is used or not.
2022-02-22 11:00:42.531 713772 WARNING os_brick.initiator.connec
> I *think* we also had this problem on systems that had NVMe volumes.
The nvme-cli package is not pulled in, even though it is used by os-
brick:
Did the missing nvme command block any operation? It looks like it's in
a critical path for the NVMe-oF use case, but it generates a warning
instead o
Hi Raghavendra,
First of all, thank you for your effort trying to move things forward.
I'm afraid devstack works in this specific case because devstack pulls
Cinder from the git repository directly instead of using Ubuntu's binary
packages (.deb, basically), if I'm not mistaken. This validation requires
Subscribing ~field-medium
--
Similar to this one: https://bugs.launchpad.net/ubuntu/+source/python-
os-brick/+bug/1947063
--
Adding an Ubuntu packaging task. It seems the lsscsi dependency was added
fairly recently (July 2020), so it looks like it's something the os-brick
binary package should install as a dependency:
https://bugs.launchpad.net/os-brick/+bug/1793259
https://opendev.org/openstack/os-brick/commit/fc6ca22bdb9551
> TL;DR: pipewire-pulse and PulseAudio should *not* be installed at the
same time as they serve the same function and applications don't know
the difference.
This is not the case at least for the default Ubuntu flavor (w/ GNOME).
$ curl -s
https://cdimages.ubuntu.com/daily-live/current/jammy-des
I can confirm that after stopping pipewire temporarily with:
$ systemctl --user stop pipewire.socket pipewire.service
the volume level is properly recovered across plugging a headset in and
out, for example, which is good.
Both pulseaudio and pipewire are installed out of the box and running if
I'm
** Description changed:
[Description]
OpenStack cinder in Focal (OpenStack Ussuri) is lacking iSCSI support for HPE
Primera 4.2 and higher. This is now supported in Cinder and we would like to
enable it in Ubuntu Focal as well as OpenStack Ussuri.
The rationale for this SRU falls under
Let me know what log / log level you want to see to compare. I'm
attaching the machine log of the VM for the time being.
** Attachment added: "machine-0.log"
https://bugs.launchpad.net/juju/+bug/1936842/+attachment/5533786/+files/machine-0.log
--
Hmm, I'm not sure where the difference comes from. With Juju 2.9.16 I
still see mtu=1442 on VM NIC (expected) and mtu=1450 (bigger than
underlying NIC) on fan-252 bridge.
ubuntu@juju-913ba4-k8s-on-openstack-0:~$ brctl show
bridge name bridge id STP enabled interfaces
fan-252
Public bug reported:
At the moment, python3-os-brick pulls in iSCSI dependencies such as
open-iscsi but doesn't pull in FC dependencies such as sysfsutils at all.
os-brick actively uses the "systool" command to detect HBAs and bails if
it's not installed. It would be nice to add the sysfsutils package at least
** Description changed:
- Ubuntu 20.04 LTS
- dpdk 19.11.7-0ubuntu0.20.04.1
+ - Ubuntu 20.04 LTS
+ - dpdk 19.11.7-0ubuntu0.20.04.1
+ (we tested it with 19.11.10~rc1, but the problem persists)
+ - Intel XXV710
+ - Cisco 25G AOC cables
- We are seeing issues with link status of ports as DPDK-bon
** Summary changed:
- i40e: support 25G AOC/ACC cables
+ DPDK ports get disabled after Open vSwitch restart with Intel XXV710(i40e)
and 25G AOC cables
--
A test build for testing:
https://launchpad.net/~nobuto/+archive/ubuntu/dpdk
--
https://bugs.launchpad.net/bugs/1940957
Public bug reported:
Ubuntu 20.04 LTS
dpdk 19.11.7-0ubuntu0.20.04.1
We are seeing issues with the link status of ports as DPDK-bond members:
those links suddenly go away and are marked as down. There are multiple
parameters that could cause this issue, but one of the suggestions we've got from a ser
A deployment method improvement in the field will be tracked as a
private bug LP: #1889498.
--
https://bugs.launchpad.net/bugs/1939898
Closing the MAAS task since MAAS just connects to PostgreSQL over TCP.
One correction to my previous statement:
$ sudo lsof / | grep plpgsql.so
postgres 21822 postgres mem REG 252,1 202824 1295136
/usr/lib/postgresql/12/lib/plpgsql.so
postgres 21948 postgres
The only scenario I can think of is NOT restarting postgres after the
package update. This could happen when the postgres process is managed
outside of init (systemd), e.g. by pacemaker for HA purposes.
$ sudo lsof / | grep plpgsql.so
postgres 21822 postgres DEL REG 252,1 12
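One way to spot processes still mapping now-deleted files (which is what lsof's DEL rows indicate) is to scan /proc/*/maps for "(deleted)" entries. A hedged sketch, Linux-only, with the pattern argument purely illustrative:

```python
import glob

# Sketch: find processes still mapping deleted files (e.g. a library
# replaced by a package update), similar to what `lsof` marks as DEL.
def deleted_mappings(pattern):
    """Return (pid, path) pairs for deleted mappings matching pattern."""
    hits = []
    for maps in glob.glob("/proc/[0-9]*/maps"):
        try:
            with open(maps) as f:
                for line in f:
                    if "(deleted)" in line and pattern in line:
                        pid = maps.split("/")[2]
                        hits.append((pid, line.split()[-2]))
        except OSError:
            pass  # process exited, or permission denied
    return hits
```

For the scenario above, `deleted_mappings("plpgsql.so")` would list postgres processes that still need a restart.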
From a duplicate of this bug, as tldr:
https://bugs.launchpad.net/ubuntu/+source/pulseaudio/+bug/1939933
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=991597
> > > Does that mean that enabling it, would only add some dependencies but
> > > not actually do anything?
> >
> > Yes, a (soft) depen
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=991597
> > Does that mean that enabling it, would only add some dependencies but
> > not actually do anything?
>
> Yes, a (soft) dependency should probably be added against
> gstreamer1.0-plugins-bad, but as I said, the needed version (>= 1.19) i
** Bug watch added: Debian Bug tracker #991597
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=991597
** Also affects: pulseaudio (Debian) via
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=991597
Importance: Unknown
Status: Unknown
--
Public bug reported:
The changelog mentions AptX, but it's not actually enabled in the build
if I'm not mistaken. AptX support seems to require gstreamer in the
build dependencies at least.
[changelog]
pulseaudio (1:15.0+dfsg1-1ubuntu1) impish; urgency=medium
* New upstream version resynchronize
Previously reported as
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1904745
--
https://bugs.launchpad.net/bugs/1904580
Title:
Permissions 0644 for '/var/lib/nova/.ssh/id_rsa' are t
root@casual-condor:/var/lib/nova# ll .ssh/
total 28
drwxr-xr-x  2 nova root 4096 Aug  3 10:43 ./
drwxr-xr-x 10 nova nova 4096 Aug  3 10:25 ../
-rw-r--r--  1 root root 1197 Aug  3 10:54 authorized_keys
-rw-------  1 nova root 1823 Aug  3 10:25 id_rsa
-rw-r--r--  1 nova root  400 Aug  3 10:25 id_rsa.
> Charms were not upgraded while this broke. We simply upgrade the
packages.
If that's the case, the package maintainer scripts might be related? For
example,
$ grep /var/lib/nova /var/lib/dpkg/info/nova-common.postinst
--home /var/lib/nova \
chown -R nova:nova /var/lib/nova/
f
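For completeness, the condition nova complains about (a private key readable by group or other) can be checked in a few lines; this is a generic sketch, not nova's own check:

```python
import os
import stat

# Sketch: flag key files whose mode is broader than owner-only (0600),
# i.e. any group/other permission bits are set.
def too_open(path):
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return bool(mode & 0o077)
```

For the 0644 id_rsa above, `too_open("/var/lib/nova/.ssh/id_rsa")` would return True.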
[focal-victoria]
All of the uploads succeeded, and -proposed shortened the time for the
larger sizes.
$ sudo apt-get install python3-glance-store/focal-proposed
$ sudo systemctl restart glance-api
$ apt-cache policy python3-glance-store
python3-glance-store:
Installed: 2.3.0-0ubuntu1~cloud1
Cand
[bionic-ussuri]
All of the uploads succeeded, and -proposed shortened the time for the
larger sizes.
$ sudo apt-get install python3-glance-store/bionic-proposed
$ sudo systemctl restart glance-api
$ apt-cache policy python3-glance-store
python3-glance-store:
Installed: 2.0.0-0ubuntu2~cloud0
Cand
Just for the record, this is the current status with focal-victoria. No
diff between -updates and -proposed.
$ apt-cache policy python3-glance-store
python3-glance-store:
Installed: 2.3.0-0ubuntu1~cloud0
Candidate: 2.3.0-0ubuntu1~cloud0
Version table:
*** 2.3.0-0ubuntu1~cloud0 500
5
@Corey,
Somehow the binary package for cloud-archive:victoria-proposed has not
been published yet. Can you please double-check the build status of the
package? I just don't know where to look.
cloud1 in the source vs cloud0 in the binary.
$ curl -s
http://ubuntu-cloud.archive.canonical.com/ubuntu/di
[focal-wallaby]
All of the uploads succeeded, and -proposed shortened the time for the
larger sizes.
$ sudo apt-get install python3-glance-store/focal-proposed
$ sudo systemctl restart glance-api
$ apt-cache policy python3-glance-store
python3-glance-store:
Installed: 2.5.0-0ubuntu2~cloud0
Candi
[focal]
All of the uploads succeeded, and -proposed shortened the time for the
larger sizes.
$ sudo apt-get install python3-glance-store/focal-proposed
$ sudo systemctl restart glance-api
$ apt-cache policy python3-glance-store
python3-glance-store:
Installed: 2.0.0-0ubuntu2
Candidate: 2.0.0-0ub
[hirsute]
All of the uploads succeeded, and -proposed shortened the time for the
larger sizes.
$ sudo apt-get install python3-glance-store/hirsute-proposed
$ sudo systemctl restart glance-api
$ apt-cache policy python3-glance-store
python3-glance-store:
Installed: 2.5.0-0ubuntu2
Candidate: 2.5.0
My update in the bug description was somehow rolled back (by me,
according to the record); trying again.
** Description changed:
[Impact]
- [Test Case]
- I have a test Ceph cluster as an object storage with both Swift and S3
protocols enabled for Glance (Ussuri). When I use Swift backend with Glance, an
** Description changed:
[Impact]
-
- Glance with S3 backend cannot accept image uploads in a realistic time
- frame. For example, an 1GB image upload takes ~60 minutes although other
- backends such as swift can complete it with 10 seconds.
-
- [Test Plan]
-
- 1. Deploy a partial OpenStack wi
** Description changed:
[Impact]
- [Test Case]
- I have a test Ceph cluster as an object storage with both Swift and S3
protocols enabled for Glance (Ussuri). When I use Swift backend with Glance, an
image upload completes quickly enough. But with S3 backend Glance, it takes
much more time to
It's likely iputils-arping.
$ apt rdepends arping
arping
Reverse Depends:
Conflicts: iputils-arping
Depends: netconsole
Depends: ifupdown-extra
$ apt rdepends iputils-arping
iputils-arping
Reverse Depends:
Depends: neutron-l3-agent
Recommends: python3-networking-arista
Recommends: neu
On focal, there are two packages offering an arping binary:
[iputils-arping(main)]
$ sudo arping -U -I eth0 -c 1 -w 1.5 10.48.98.1
arping: invalid argument: '1.5'
[arping(universe)]
$ arping -U -I eth0 -c 1 -w 1.5 10.48.98.1
ARPING 10.48.98.1
I don't know which one our charms install.
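One way to find out which implementation a given host ended up with is to ask dpkg which package owns the arping binary on PATH. A sketch (assumes a dpkg-based system and returns None otherwise):

```python
import shutil
import subprocess

# Sketch: identify which package ships the `arping` on PATH
# (iputils-arping in main vs arping in universe on focal).
def arping_package():
    path = shutil.which("arping")
    if path is None:
        return None  # no arping installed
    try:
        out = subprocess.run(["dpkg", "-S", path],
                             capture_output=True, text=True)
    except FileNotFoundError:
        return None  # not a dpkg-based system
    if out.returncode != 0:
        return None  # path not owned by any package
    # dpkg -S prints e.g. "iputils-arping: /usr/sbin/arping"
    return out.stdout.split(":", 1)[0].strip()
```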
--
Subscribing Canonical's ~field-high to initiate the Ubuntu package's SRU
process in a timely manner.
--
https://bugs.launchpad.net/bugs/1934849
Title:
s3 backend takes time exponentially
> I *think* hash calculation and verifier have to be outside of the loop
to avoid the overhead. I will confirm it with a manual testing.
This hypothesis wasn't true, it was really about the chunk size.
--
I *think* the hash calculation and verifier have to be outside of the
loop to avoid the overhead. I will confirm it with manual testing.
for chunk in utils.chunkreadable(image_file, self.WRITE_CHUNKSIZE):
    image_data += chunk
    image_size += len(chunk)
    os_has
Yeah, I put the same config on purpose for both s3 and swift. But
tweaking large_object_size didn't make any difference.
[swift]
large_object_size = 5120
large_object_chunk_size = 200
[s3]
s3_store_large_object_size = 5120
s3_store_large_object_chunk_size = 200
After digging into the actual envi
And by using "4 * units.Mi" it can be reduced to 20s.
--
Okay, as the utils.chunkreadable loop is taking the time, I've tried a
larger WRITE_CHUNKSIZE by hand. It decreases the time to upload a 512MB
image from 14 minutes to 60 seconds.
$ git diff
diff --git a/glance_store/_drivers/s3.py b/glance_store/_drivers/s3.py
index 1c18531..576c573 10
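The quadratic behavior behind those numbers can be sketched independently of glance_store: growing an immutable bytes object with `+=` copies everything read so far on each iteration, so the total cost scales roughly with the square of (image size / chunk size), while a bytearray grows in place. This is a simplified model of the loop, not the driver's actual code:

```python
import io

# Sketch: two ways to accumulate a stream read in fixed-size chunks.
def accumulate_bytes(stream, chunk_size):
    data = b""  # immutable: each += copies everything read so far
    for chunk in iter(lambda: stream.read(chunk_size), b""):
        data += chunk
    return data

def accumulate_bytearray(stream, chunk_size):
    data = bytearray()  # mutable: appends are amortized O(1)
    for chunk in iter(lambda: stream.read(chunk_size), b""):
        data += chunk
    return bytes(data)
```

Either way, a larger chunk (e.g. the 4 MiB "4 * units.Mi" above) also cuts the number of copies by orders of magnitude.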
The code part in question is this for loop:
https://opendev.org/openstack/glance_store/src/branch/stable/ussuri/glance_store/_drivers/s3.py#L638-L644
2021-07-07 11:50:06.735 - def _add_singlepart
2021-07-07 11:50:06.736 - getting into utils.chunkreadable loop
2021-07-07 11:50:06.736 - loop invoked
S3 performance itself is not bad: uploading a 512MB object can complete
within a few seconds. So I suppose the issue is in how the Glance S3
driver uses boto3.
$ time python3 upload.py
real    0m3.644s
user    0m3.124s
sys     0m1.835s
$ cat upload.py
import boto3
s3 = boto3.client(
"s3",
endpo
Debug log of when uploading a 512MB image with S3 backend.
** Attachment added: "glance-api.log"
https://bugs.launchpad.net/ubuntu/+source/python-glance-store/+bug/1934849/+attachment/5509534/+files/glance-api.log
** Also affects: glance-store
Importance: Undecided
Status: New
--
python3-boto3 1.9.253-1
--
Public bug reported:
I have a test Ceph cluster as an object storage with both Swift and S3
protocols enabled for Glance (Ussuri). When I use the Swift backend with
Glance, an image upload completes quickly enough. But with the S3 backend,
Glance takes much more time to upload an image and it seems to
Now the "snapd" snap is seeded into the base image of focal along with
core18 for the "lxd" snap. That actually solves the original issue in a
different way: we no longer have to upload the "snapd" snap using a charm
resource.
Bionic is still affected, but I don't think it's common for new
deployments t
Adding an Ubuntu Ceph packaging task here.
The 30-ceph-osd.conf file is owned by the ceph-osd package as follows.
$ dpkg -S /etc/sysctl.d/30-ceph-osd.conf
ceph-osd: /etc/sysctl.d/30-ceph-osd.conf
However, as far as I can see in 15.2.8-0ubuntu0.20.04.1/focal, there is no
place in /var/lib/dpkg/info/ceph-osd.post
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=983635
> 64065 | gnocchi | Gnocchi - Metric as a Service
** Bug watch added: Debian Bug tracker #983635
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=983635
--
Here is the current maintainer code:
https://git.launchpad.net/ubuntu/+source/openstack-pkg-tools/tree/pkgos_func?h=ubuntu/focal-proposed#n786
and the previous upstream bug in Debian:
https://bugs.debian.org/884178
--
Excuse me for reviving an old bug report, but Gnocchi also requires a
static uid/gid to support the NFS use case.
https://gnocchi.xyz/intro.html
> If you need to scale the number of server with the file driver, you can
> export and share the data via NFS among all Gnocchi processes.
** Also affects:
Initially we thought we were hit by
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1910201
But it looks like some patches are already in focal GA kernel like
https://kernel.ubuntu.com/git/ubuntu/ubuntu-focal.git/commit/?id=d256617be44956fe4f048295a71b31d44d9104d9
--
** Summary changed:
- snap installation with core18 fails at 'Ensure prerequisites for "etcd" are
available' in air-gapped environments as snapd always requires core(16)
+ snap installation with core18 fails at 'Ensure prerequisites for "etcd" are
available' in air-gapped environments as snapd a