Hi guys,
I'm trying to set up a cluster with encryption on the OSD data and journal.
To do that I use ceph-deploy with these two options, --dmcrypt and
--dmcrypt-key-dir, on the /dev/sdc disk.
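For reference, a prepare invocation with those options looks roughly like the
following; the hostname, the key directory and the bare /dev/sdc layout are
assumptions for illustration, not taken from the thread:

ceph-deploy osd prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys \
    ceph-osd-1:/dev/sdc
# ceph-deploy partitions the disk, sets up dm-crypt mappings for the data and
# journal partitions, and stores the keys in the --dmcrypt-key-dir directory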
Disk state before the ceph-deploy prepare command:
root@ceph-osd-1:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
What's going on in the cluster, what kind of action is running?
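To watch what the cluster is doing at a given moment, the standard status and
watch commands can be used, for example:

ceph -s              # one-shot cluster status: health, pg states, recovery io
ceph -w              # follow the cluster log and ongoing events live
ceph health detail   # per-pg detail behind anything shown in HEALTH_WARN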
2015-12-18 14:50 GMT+01:00 Reno Rainz :
> Hi all,
>
> I rebooted all my OSD nodes; afterwards, I got some PGs stuck in the peering state.
>
> root@ceph-osd-3:/var/log/ceph# ceph -s
> cluster 186717a6-bf80-4203-91ed-50d54fe8dec4
>
> approximately 1 OSD worth of pgs stuck (i.e. 264 / 8), and osd.0
> appears in each of the stuck pgs, alongside either osd.2 or osd.3.
>
> I'd start by checking the comms between osd.0 and osds 2 and 3 (including
> the MTU).
>
> Cheers,
>
> Chris
>
>
> On Fri, De
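To check basic connectivity and the path MTU between osd.0's host and the
hosts carrying osd.2 and osd.3, as suggested above, something like this could
be used (the target IP is a placeholder):

ip link show | grep mtu        # compare the interface MTU on each OSD node
# 8972 bytes = a 9000-byte jumbo frame minus 28 bytes of IP/ICMP headers;
# -M do sets "don't fragment", so the ping fails if the path MTU is smaller
ping -M do -s 8972 -c 3 <IP of the node carrying osd.2>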
Hi all,
I rebooted all my OSD nodes; afterwards, I got some PGs stuck in the peering state.
root@ceph-osd-3:/var/log/ceph# ceph -s
cluster 186717a6-bf80-4203-91ed-50d54fe8dec4
health HEALTH_WARN
clock skew detected on mon.ceph-osd-2
33 pgs peering
33 pgs stuck inactive
], p2) acting ([2,6,0], p2)
root@ceph-osd-1:~#
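To narrow down which PGs are stuck and what is blocking them, the usual next
steps are along these lines (the pg ID is a placeholder):

ceph health detail             # lists the stuck pgs and the osds involved
ceph pg dump_stuck inactive    # pgs stuck in a non-active state
ceph pg <pgid> query           # peering state of one pg and which osds it waits on
ntpq -p                        # the clock skew warning on mon.ceph-osd-2 usually points at NTP sync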
Could you explain to me why?
Best,
2015-12-14 21:37 GMT+01:00 Samuel Just :
> You most likely have pool size set to 3, but your crush rule requires
> replicas to be separated across DCs, of which you have only 2.
> -Sam
>
> On Mo
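To confirm that diagnosis, the pool size and the CRUSH rule can be inspected;
the pool name rbd below is only an example:

ceph osd pool get rbd size               # replica count of the pool
ceph osd crush rule dump                 # rules, including the chooseleaf step and failure-domain type
ceph osd getcrushmap -o crushmap.bin     # dump and decompile the full map
crushtool -d crushmap.bin -o crushmap.txt

If the rule does separate replicas at the datacenter level and there are only
two datacenters, either the pool size has to drop to 2 (ceph osd pool set rbd
size 2) or the rule has to be relaxed.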
Thank you for your answer, but I don't really understand what you mean.
I use this map to distribute replicas across 2 different DCs, but I don't
know where the mistake is.
On Dec 14, 2015, 7:56 PM, "Samuel Just" wrote:
> 2 datacenters.
> -Sam
>
> On Mon, Dec
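For context, a CRUSH rule that forces replicas onto separate datacenters
typically looks something like this sketch (decompiled crushtool syntax; the
rule name and numbers are illustrative, not the poster's actual map):

rule replicated_across_dc {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        # one OSD is chosen under each distinct datacenter bucket; with a
        # pool size of 3 and only 2 datacenters, the third replica can
        # never be placed, leaving pgs degraded or remapped
        step chooseleaf firstn 0 type datacenter
        step emit
}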
Hi,
I have a functional and operational Ceph cluster (version 0.94.5),
with 3 nodes (each acting as MON and OSD); everything was fine.
I added a 4th OSD node (same configuration as the other 3) and now the cluster
status is HEALTH_WARN (active+remapped).
cluster e821c68f-995c-41a9-9c46-dbbd0a28
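When adding a node leaves pgs active+remapped like this, the first things worth
checking are where the new host landed in the CRUSH hierarchy and how data is
being rebalanced, for example:

ceph osd tree       # confirm the new host and its osds sit under the intended bucket (datacenter/rack/host)
ceph osd df         # per-osd utilisation while data rebalances
ceph health detail  # which pgs are remapped and which osds they currently map to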