I made this mistake originally, too…

It’s not really clear in the documentation, but it turns out that if you just 
initialize your journal drives as GPT but don’t create any partitions, and then 
prepare your OSDs with:

$ ceph-deploy osd prepare node1:sde:sda

(i.e., specify the whole device, not an individual partition)

then ceph-deploy will create a new journal partition (sized according to the 
osd_journal_size setting under [osd] in ceph.conf) and link to it by UUID.
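
For example, the resulting journal symlink should end up pointing at a stable 
by-partuuid path rather than a raw device node; something like this (the OSD 
id and UUID below are made up for illustration):

$ readlink /var/lib/ceph/osd/ceph-0/journal
/dev/disk/by-partuuid/8a4f3b2c-1d5e-4c6f-9a7b-2e8d0c1f3a5b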

Regards,

Thomas.

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Scott 
Laird
Sent: Thursday, 22 May 2014 8:19 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Feature request: stable naming for external journals

I recently created a few OSDs with journals on a partitioned SSD.  Example:

$ ceph-deploy osd prepare v2:sde:sda8 

It worked fine at first, but after rebooting, the new OSD failed to start.  I 
discovered that the journal drive had been renamed from /dev/sda to /dev/sdc, 
so the journal symlink in /var/lib/ceph/osd/ceph-XX no longer pointed to the 
correct block device.
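
For anyone else debugging this: the symlink stores the bare device name, so 
after the rename a quick check shows it still pointing at what is now the 
wrong disk (OSD id illustrative):

$ readlink /var/lib/ceph/osd/ceph-XX/journal
/dev/sda8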

I have a couple requests/suggestions:

1.  Make this clearer in the logs.  I've seen at least a couple cases where a 
simple "Unable to open journal" message would have saved me a bunch of time.

2.  Consider some method of generating more stable journal names under the 
hood.  I'm using /dev/disk/by-id/... under Ubuntu, but that's probably not 
generally portable.  I've been tempted to put a filesystem on my journal 
devices, mount it by UUID, and then symlink to a file on the mounted device.  
It's not as fast, but at least it'd have a stable name.

(This was caused by adding an SSD and then moving / onto it; during the reboots 
needed to migrate /, the drive ordering changed several times. It probably 
wouldn’t have happened if I’d started with hardware bought new and dedicated to 
Ceph.)

Scott
