Hi !
I am not 100% sure, but I think --net=host does not propagate /dev/
inside the container.
From the error message:
2019-04-18 07:30:06 /opt/ceph-container/bin/entrypoint.sh: ERROR- The
device pointed by OSD_DEVICE (/dev/vdd) doesn't exist !
I would say you should add something like
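(The original suggestion is cut off here. For context, a minimal sketch of passing the OSD device into the container; the exact flags and the ceph/daemon image are an assumption on my part, not the author's original wording:)

```shell
# Sketch only (assumptions: Docker, the ceph/daemon image).
# --net=host shares the host's network namespace, not /dev;
# block devices must be passed in explicitly, e.g. via a
# privileged bind mount of /dev (or individual --device flags).
docker run -d --net=host \
  --privileged=true \
  -v /dev/:/dev/ \
  -e OSD_DEVICE=/dev/vdd \
  ceph/daemon osd
```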
Hi !
We have now successfully upgraded (from 12.2.10) to 12.2.11.
Seems to be quite stable. (Using RBD, CephFS and RadosGW)
Most of our OSDs are still on Filestore.
Should we set the "pglog_hardlimit" flag (as it cannot be unset once set) ?
What exactly does this limit ?
Are there any risks ?
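For reference, setting the flag is a single monitor command; this is a sketch under the assumption that every OSD in the cluster is already on a release that supports the hard limit (12.2.11+), since the flag is irreversible once set:

```shell
# First check that all OSDs run a version supporting the hard limit.
ceph osd versions

# Then set the flag cluster-wide (cannot be unset afterwards).
ceph osd set pglog_hardlimit
```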
An
Hi !
We are running a Ceph 12.2.7 cluster and use it for RBDs.
We have now a few new servers installed with Ubuntu 18.
The default kernel version is v4.15.0.
When we create a new rbd and map/xfs-format/mount it, everything looks fine.
But if we want to map/mount an rbd that has already data in it
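(The description of the failure is cut off above. For context, a minimal sketch of the create/map/format/mount sequence being described; the pool and image names are placeholders I chose, not from the original report:)

```shell
# Hypothetical names: pool "rbd", image "test-img".
rbd create rbd/test-img --size 10G   # create a 10 GiB image
DEV=$(rbd map rbd/test-img)          # map it; prints e.g. /dev/rbd0
mkfs.xfs "$DEV"                      # format with XFS
mount "$DEV" /mnt                    # mount it
```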
estion ? Or any other way to "clean" this pg ?
We have searched a lot in the mail archives but couldn't find anything
that could help us in that case.
Br,
On 17.05.2018 at 00:12, Gregory Farnum wrote:
On Wed, May 16, 2018 at 6:49 AM Siegfried Höllrigl
<mailto:siegfried.hoellr...@
On 17.05.2018 at 00:12, Gregory Farnum wrote:
I'm a bit confused. Are you saying that
1) the ceph-objectstore-tool you pasted there successfully removed pg
5.9b from osd.130 (as it appears), AND
Yes. The ceph-osd process for osd.130 was not running in that phase.
2) pg 5.9b was active with on
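For context, the offline PG removal discussed above is typically done with ceph-objectstore-tool while the OSD daemon is stopped; a minimal sketch assuming a FileStore OSD at the default data path (the path and unit name are illustrative, not copied from the original thread):

```shell
# Stop the OSD daemon first; the tool needs exclusive access to the store.
systemctl stop ceph-osd@130

# Remove pg 5.9b from osd.130's object store (irreversible;
# newer releases additionally require --force).
ceph-objectstore-tool \
  --data-path /var/lib/ceph/osd/ceph-130 \
  --pgid 5.9b \
  --op remove

systemctl start ceph-osd@130
```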
Hi !
We have upgraded our Ceph cluster (3 Mon servers, 9 OSD servers, 190
OSDs total) from 10.2.10 to Ceph 12.2.4 and then to 12.2.5.
(A mixture of Ubuntu 14 and 16 with the Repos from
https://download.ceph.com/debian-luminous/)
Now we have the problem that one OSD is crashing again and again