Re: [ceph-users] Ceph inside Docker containers inside VirtualBox

2019-04-18 Thread Siegfried Höllrigl
Hi! I am not 100% sure, but I think --net=host does not propagate /dev/ inside the container. From the error message: 2019-04-18 07:30:06  /opt/ceph-container/bin/entrypoint.sh: ERROR- The device pointed by OSD_DEVICE (/dev/vdd) doesn't exist ! I would say you should add something like
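
For reference, a minimal sketch of the kind of docker run invocation usually needed so the OSD container can see the block device; /dev/vdd comes from the quoted error, while the ceph/daemon image name and the mounted paths are assumptions:

    # Sketch: expose the host's devices to the container in addition to --net=host
    # (image name ceph/daemon and the bind-mounted config paths are assumptions)
    docker run -d --net=host --privileged=true \
      -v /dev:/dev \
      -v /etc/ceph:/etc/ceph \
      -v /var/lib/ceph:/var/lib/ceph \
      -e OSD_DEVICE=/dev/vdd \
      ceph/daemon osd

Passing -v /dev:/dev (or at least --device=/dev/vdd) together with --privileged is what lets the entrypoint script find the device; --net=host only shares the network namespace.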

Re: [ceph-users] v12.2.11 Luminous released

2019-02-13 Thread Siegfried Höllrigl
Hi! We have now successfully upgraded (from 12.2.10) to 12.2.11 and it seems to be quite stable (using RBD, CephFS and RadosGW). Most of our OSDs are still on Filestore. Should we set the "pglog_hardlimit" flag (given that it must not be unset anymore)? What exactly does this limit? Are there any risks? An
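
For context, the flag caps the length of the per-PG log and, per the 12.2.11 release notes, it can only be set once every OSD in the cluster runs 12.2.11 (or later); it cannot be unset afterwards. A minimal sketch of the usual two steps:

    # Confirm all OSDs report 12.2.11 before setting the flag
    ceph osd versions
    # Enable the hard limit on PG log length (irreversible)
    ceph osd set pglog_hardlimit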

[ceph-users] Ubuntu18 and RBD Kernel Module

2018-08-28 Thread Siegfried Höllrigl
Hi! We are running a Ceph 12.2.7 cluster and use it for RBDs. We now have a few new servers installed with Ubuntu 18. The default kernel version is v4.15.0. When we create a new RBD and map/xfs-format/mount it, everything looks fine. But if we want to map/mount an RBD that already has data in i
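
One common cause of this symptom is that the existing image carries features the kernel RBD client cannot handle, so the map fails or asks you to disable them. A hedged sketch of checking and clearing those features (pool and image names are placeholders):

    # Inspect the features of the existing image (rbd/oldimage is a placeholder)
    rbd info rbd/oldimage
    # Disable the features krbd around 4.15 typically does not support, then map again
    rbd feature disable rbd/oldimage object-map fast-diff deep-flatten
    rbd map rbd/oldimage

Whether this applies here depends on how the older images were created; comparing "rbd info" output between a freshly created image and one of the problematic ones should show the difference.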

Re: [ceph-users] Ceph Luminous - OSD constantly crashing caused by corrupted placement group

2018-05-23 Thread Siegfried Höllrigl
estion? Or any other way to "clean" this PG? We have searched a lot in the mail archives but couldn't find anything that could help us in this case. Br, On 17.05.2018 at 00:12, Gregory Farnum wrote: On Wed, May 16, 2018 at 6:49 AM Siegfried Höllrigl <mailto:siegfried.hoellr...@

Re: [ceph-users] Ceph Luminous - OSD constantly crashing caused by corrupted placement group

2018-05-16 Thread Siegfried Höllrigl
On 17.05.2018 at 00:12, Gregory Farnum wrote: I'm a bit confused. Are you saying that 1) the ceph-objectstore-tool you pasted there successfully removed pg 5.9b from osd.130 (as it appears), AND Yes. The ceph-osd process for osd.130 was not running during that phase. 2) pg 5.9b was active with on
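
For readers following the thread, removing a PG copy from a stopped OSD with ceph-objectstore-tool generally looks like the sketch below; osd.130 and pg 5.9b come from the thread, while the data path, the backup file location and the exact need for --force (which varies by version) are assumptions:

    # The OSD must be stopped before touching its store (assuming systemd)
    systemctl stop ceph-osd@130
    # Export the PG first as a safety copy (backup path is a placeholder)
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-130 \
        --pgid 5.9b --op export --file /root/pg5.9b.export
    # Remove the corrupted copy, then restart the OSD
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-130 \
        --pgid 5.9b --op remove --force
    systemctl start ceph-osd@130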

[ceph-users] Ceph Luminous - OSD constantly crashing caused by corrupted placement group

2018-05-15 Thread Siegfried Höllrigl
Hi! We have upgraded our Ceph cluster (3 MON servers, 9 OSD servers, 190 OSDs total) from 10.2.10 to Ceph 12.2.4 and then to 12.2.5 (a mixture of Ubuntu 14 and 16 with the repos from https://download.ceph.com/debian-luminous/). Now we have the problem that one OSD is crashing again and aga