Some context: I have a small cluster running Ubuntu 14.04 and Giant (now Hammer). I ran some updates and everything was fine. After rebooting a node, a drive must have failed, as it no longer shows up.
I use --dmcrypt with ceph-deploy and 5 OSDs per SSD journal. To do this I
created the SSD partitions in advance and pointed ceph-deploy at the
partition for the journal.
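For reference, I pre-create the journal partitions with something like the following (the SSD device name and the 10G size are just examples from my setup):

# carve 5 journal partitions out of the SSD; /dev/sda and +10G are examples
for i in 1 2 3 4 5; do
    sgdisk --new=${i}:0:+10G --change-name=${i}:"ceph journal" /dev/sda
done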
This worked in Giant without issue (I was able to zap the OSD and redeploy
using the same journal partition every time). Now it seems to fail in Hammer,
stating that the partition exists and I'm using --dmcrypt.
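The zap-and-redeploy cycle that worked under Giant was roughly this (device names are examples):

# wipe the failed OSD's data disk, then redeploy against the same journal partition
ceph-deploy disk zap ${host}:/dev/sdc
ceph-deploy osd --dmcrypt create ${host}:/dev/sdc:/dev/sda1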
This raises a few questions.

1.) The Ceph OSD start scripts must have a list of dm-crypt keys and UUIDs somewhere, since the init scripts mount the drives. Is this accessible? Outside of Ceph I've normally used crypttab; how is Ceph doing it?
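For comparison, this is how I'd normally wire it up outside of Ceph, and what I can at least inspect on an OSD node right now (the names and UUID below are placeholders):

# /etc/crypttab entry: <target name> <source device> <key file> <options>
osd-sdc  UUID=placeholder-uuid  /etc/keys/osd-sdc.key  luks

# inspect what is actually mapped on the node
dmsetup ls --target crypt
cryptsetup status /dev/mapper/<mapping-name>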
2.) My ceph-deploy line is:
ceph-deploy osd --dmcrypt create ${host}:/dev/drive:/dev/journal_partition

I see that a variable exists in ceph-disk and is set to false. Is this what I would need to change to get this working again? Or is it set to false for a reason?
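For what it's worth, I was reading the installed script directly; a quick way to locate the dmcrypt-related settings (this is where the Hammer packages put ceph-disk on my node, the path may differ):

grep -n dmcrypt /usr/sbin/ceph-disk | head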
3.) I see multiple references to journal_uuid in Sébastien Han's blog as
well as on the mailing list when replacing a disk. I don't have this file,
and I'm assuming that's due to the --dmcrypt flag. I also see 60
dmcrypt keys in /etc/ceph/dmcrypt-keys but only 30 mapped devices. Are the
journals not using these keys at all?
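For reference, this is roughly how I counted (paths are what I see on my nodes):

ls /etc/ceph/dmcrypt-keys | wc -l    # reports 60 key files
dmsetup ls --target crypt | wc -l    # reports 30 mapped devices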



