I just found out that the limit is not applied after a reboot; it appears
that it only takes effect when the service is (re-)started manually from
the shell.
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to ceph in Ubuntu.
https://bugs.laun
Public bug reported:
This log is from installing the pre-release packages, but I got the same
warnings when installing 10.0.5 earlier:
Preparing to unpack
.../ceph_10.1.0.dfsg-0ubuntu1~ubuntu16.04.1~ppa201603311201_amd64.deb ...
deb-systemd-invoke only supports /usr/sbin/policy-rc.d return code
Public bug reported:
When started via systemd, there is a default limit of 512 tasks, and each
thread appears to count as a separate task:
# systemctl status ceph-osd@2
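A common workaround is a drop-in for the template unit that raises the limit; a minimal sketch, assuming a systemd new enough to know TasksMax (Xenial's v229 is):

```ini
# /etc/systemd/system/ceph-osd@.service.d/tasksmax.conf (hypothetical drop-in)
[Service]
TasksMax=infinity
```

After a `systemctl daemon-reload`, the raised limit should apply to all subsequent starts of the unit, including those during boot.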
From a hardening perspective, it certainly hurts to have unneeded
packages installed, so please re-raise the priority of this.
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to cinder in Ubuntu.
https://bugs.launchpad.net/bugs/1079466
Ti
Public bug reported:
Cinder, Neutron and Nova use rootwrappers that allow selected commands
to be executed with root privileges via sudo. If an administrator chooses
to enable sudo logging for security reasons, this will cause a lot of
files to be created, leading to filled-up file systems pretty f
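If it is sudo's I/O logging that produces the files, one possible mitigation (a sketch; the rootwrap path below is an assumption and differs per project) is to exempt just the rootwrap command from output logging in sudoers:

```
# /etc/sudoers.d/rootwrap-nolog (illustrative file name; path is an assumption)
Defaults!/usr/bin/cinder-rootwrap !log_output
```

This keeps per-command syslog entries intact while suppressing the per-session output logs for the high-frequency rootwrap calls.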
Thanks to some help in #systemd I could find the cause: on the affected
systems libpam-systemd was not installed. So maybe it would make sense
to turn this into a stronger dependency than "recommended", at least in
combination with openssh-server.
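A quick diagnostic for this cause can be sketched as follows (paths assume Debian/Ubuntu, where libpam-systemd adds its line to /etc/pam.d/common-session, which the sshd PAM stack includes):

```shell
# Check whether pam_systemd is wired into the PAM session stack.
has_pam_systemd() {
    # -q: quiet, -s: suppress errors for missing files
    grep -qs 'pam_systemd\.so' "$1"
}

if has_pam_systemd /etc/pam.d/common-session; then
    echo "pam_systemd configured"
else
    echo "pam_systemd missing: install libpam-systemd"
fi
```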
--
Hmm, on a cloud instance this looks different; even when logged in
multiple times, the output only shows the master process:
# systemctl status ssh
● ssh.service - OpenBSD Secure Shell server
Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset:
enabled)
Active: active (ru
Do your sleep processes show up in the output of "systemctl status
ssh.service" in the CGroup section? For me they do (sample with just one
process backgrounded):
# systemctl status ssh.service
● ssh.service - OpenBSD Secure Shell server
Loaded: loaded (/lib/systemd/system/ssh.service; enabled;
How did you install ceph and which version exactly? Running as ceph
should only happen with >= 9.2, which is available on Xenial.
If I install ceph=10.0.3-0ubuntu1 on a new machine, /var/lib/ceph and
/var/run/ceph have ceph:ceph as owner, which looks fine to me. One could
discuss the ownership of
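The check described above amounts to something like this (paths taken from the report; directories that do not exist are simply skipped):

```shell
# Print owner:group for the ceph state directories, if present.
for d in /var/lib/ceph /var/run/ceph; do
    if [ -d "$d" ]; then
        stat -c '%U:%G %n' "$d"   # expect ceph:ceph on >= 9.2 packages
    fi
done
```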
Public bug reported:
This may be useful for an inexperienced user trying to run ceph on a
small setup, but for an automated deployment of a ceph cluster, it is
pretty annoying that there may be daemons trying to create credentials
that will allow access to the whole cluster if only the new machine
IMHO this is a bug for ceph-deploy, which is a separate project from
ceph itself. It should be solved with a new upstream version, probably
1.5.29, see also bug #1550853.
--
Public bug reported:
This only affects the xenial package, in /lib/systemd/system/ceph-
osd@.service there is
ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER}
--id %i --setuser ceph --setgroup ceph
where the options were just copied from the ExecStart line, but this
triggers errors
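The prestart helper does not understand the daemon's --setuser/--setgroup options, so the fix is to drop them from ExecStartPre only. A sketch of how the corrected unit would look (the ExecStart line is shown for contrast and is assumed, not quoted from the package):

```ini
[Service]
ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i
ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph
```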
Public bug reported:
If the config for the dashboard or apache2 in general is not working,
the removal of openstack-dashboard-ubuntu-theme fails because there is
no graceful error handling in the postrm script. E.g. I am getting:
# apt-get -q -y purge openstack-dashboard-ubuntu-theme
STDOUT:
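Graceful handling in the postrm could look roughly like this (a sketch of the pattern, not the actual maintainer script; apache2ctl and invoke-rc.d are the usual Debian tools):

```shell
#!/bin/sh
set -e
# Only reload apache2 if it exists and its configuration parses;
# never let a failed reload abort the package removal.
if command -v apache2ctl >/dev/null 2>&1; then
    if apache2ctl configtest >/dev/null 2>&1; then
        invoke-rc.d apache2 reload || true
    fi
fi
```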
Function ro_partition in /usr/share/os-prober/common.sh makes a
partition read-only. This is called in /usr/lib/os-probes/50mounted-
tests before trying to mount the partition being probed with all
possible fs-types. Obviously this will break all other processes trying
to write to this partition at
Forgot to mention that the Ceph cluster has to be under write load in
order to reproduce, i.e. running something like
rados -p rbd bench 600 write -t 1 --show-time --run-length 60
Running os-prober has no effect if the cluster is idle. With that
information, though, I can also repro
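For context, ro_partition boils down to flipping the kernel's read-only flag on the device, which is why concurrent writers such as an OSD suddenly fail. Roughly (a sketch; needs root and a real block device, do not run against a partition that is in use):

```shell
DEV=${DEV:-/dev/sdXX}            # placeholder device name
if [ -b "$DEV" ]; then
    blockdev --setro "$DEV"      # concurrent writes now fail with EROFS
    # ... 50mounted-tests mounts and probes the partition here ...
    blockdev --setrw "$DEV"      # restore writability afterwards
fi
```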
** Also affects: os-prober (Ubuntu)
Importance: Undecided
Status: New
** Changed in: os-prober (Ubuntu)
Status: New => Confirmed
--
I can reproduce this on Wily with
# apt-cache policy ceph
ceph:
Installed: 0.94.5-0ubuntu0.15.10.1
Candidate: 0.94.5-0ubuntu0.15.10.1
Version table:
*** 0.94.5-0ubuntu0.15.10.1 0
500 http://eu.archive.ubuntu.com/ubuntu/ wily-updates/main amd64
Packages
100 /var/lib/dpkg/sta
You can set "osd pool default pg num" and "osd pool default pgp num" in
your ceph.conf before creating pools if you want higher values than the
default and do not want to specify them on the command line every time.
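For example (values are illustrative only; pg counts must be sized to the number of OSDs in the cluster):

```ini
# ceph.conf
[global]
osd pool default pg num = 128
osd pool default pgp num = 128
```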
For more complex setups however, you want to match the pg
Sorry for the delay, I must admit that I'm not sure about that anymore.
I tried reproducing this issue with a current installation of Wily, but
failed. So I guess we can assume this to be invalid now.
** Changed in: ceph (Ubuntu)
Status: Incomplete => Invalid
--
Public bug reported:
There seems to be some issue with the udev triggers, either they do not
happen properly for partitions or they happen too early in the boot
process.
If I do a "ceph-disk activate /dev/sdXX" after the system has booted,
the OSD is starting just fine.
If I do a "udevadm trigge
Public bug reported:
When upgrading libvirt-bin after adding the stable/liberty cloud-ppa,
I'm getting this error:
ubuntu@jr2:~$ sudo apt-get install libvirt-bin
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatical
I think there is some confusion here. As I understand it, the part that
was fixed in libvirt was the API change that makes it possible to define
a subset of block devices to be copied during migration. To fix the
original issue, another patch in nova will be needed that uses this
extende
Public bug reported:
The systemd service generated from /etc/init.d/apache2 via systemd-sysv-
generator contains the line
RemainAfterExit=yes
causing systemd to ignore crashes of the service. In order to reproduce
this, add a non-existing address to /etc/apache2/ports.conf, which will
cause the
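Until the generator is fixed, a possible local workaround (a sketch; drop-in overrides also apply to generator-produced units) is to override the offending setting:

```ini
# /etc/systemd/system/apache2.service.d/no-remain.conf (hypothetical drop-in)
[Service]
RemainAfterExit=no
```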
Tested
https://launchpad.net/ubuntu/+source/neutron/1:2015.1.1-0ubuntu2/+build/7782259/+files
/neutron-plugin-neutron-agent_2015.1.1-0ubuntu2_all.deb and it solves
the issue for me.
** Tags removed: verification-needed
** Tags added: verification-done
--
** Also affects: python-glanceclient (Ubuntu)
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to python-glanceclient in Ubuntu.
https://bugs.launchpad.net/bugs/1342080
Title:
glance api is tra
I tested http://launchpadlibrarian.net/208123057/rabbitmq-
server_3.5.1-2_all.deb and it works fine for me.
It would be great to see this backported to vivid.
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to rabbitmq-server in Ubuntu.
h
"wont-fixing" the nova side will leave it broken for quite some time
until the backport has made its way into all relevant distro images. I'd
prefer to add your patch into nova code as workaround for older libvirt
versions.
--
Did you define ERL_EPMD_ADDRESS in your rabbitmq-env.conf?
Then you would be hitting
https://bugs.launchpad.net/ubuntu/+source/erlang/+bug/1374109 and you
can find a possible workaround there.
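For reference, the setting in question lives in rabbitmq-env.conf; for example (the address is illustrative):

```
# /etc/rabbitmq/rabbitmq-env.conf
ERL_EPMD_ADDRESS=127.0.0.1
```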
--
*** This bug is a duplicate of bug 1447807 ***
https://bugs.launchpad.net/bugs/1447807
** This bug has been marked a duplicate of bug 1447807
systemctl enable fails to enable a SysV service
--
Public bug reported:
root@node10:~# systemctl enable ntp
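While "systemctl enable" rejects the SysV service, the SysV tooling can be used directly as a stopgap (a sketch; needs root, and "ntp" is taken from the report):

```shell
if command -v update-rc.d >/dev/null 2>&1; then
    update-rc.d ntp defaults || true   # install the rc?.d symlinks
    update-rc.d ntp enable || true     # make sure they point at start
fi
```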