If, after an OSD fails, you remove it from both the cluster and the CRUSH map, 
the cluster will automatically re-assign that id to the new OSD when you run 
ceph osd create with no arguments. 
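For reference, a minimal removal sequence (assuming the failed OSD was osd.1) 
would be something like: 
ceph osd crush remove osd.1 
ceph auth del osd.1 
ceph osd rm 1 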

Here's my procedure for manually adding OSDs. I wrote this part of the 
documentation for myself, or in the event of a bus error. If anyone wants to 
call shenanigans on my procedure, I'm always open to constructive criticism. 

Create an xfs filesystem on the whole disk. We don't need a partition table 
because we're not creating multiple partitions on the OSDs. 
mkfs.xfs options: -f -i size=2048 -d su=64k,sw=1 (the su/sw values are from my 
RAID0 tests) 
Ex. mkfs.xfs -f -i size=2048 -d su=64k,sw=1 /dev/sdc 

Mount points use the convention /srv/ceph/osd[id], e.g. /srv/ceph/osd1. 
Ex. mkdir -p /srv/ceph/osd1 

Create a disk label so you don't mount the wrong disk, and possibly use it 
with the wrong OSD daemon. 
Ex. xfs_admin -L osd1 /dev/sdb 
This assigns the label osd1 to /dev/sdb. 

Next, add the mount to /etc/fstab: 
LABEL=osd1 /srv/ceph/osd1 xfs inode64,noatime 0 0 
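Then mount it (the fstab entry makes this a plain mount by path): 
Ex. mount /srv/ceph/osd1 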

Create the osd: 
ceph osd create [optional id] 
This returns the next available number if no id is specified; otherwise it 
uses the id you give it. In almost all cases, it's best to let the cluster 
assign the id. 
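Ex. ceph osd create 
This prints the new id (say, 1) to stdout; that number is what appears as 
osd.1 in the rest of this procedure. 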

Add the osd to the /etc/ceph/ceph.conf files. Use the by-label convention to 
avoid mounting the wrong hard drive! 
Ex. 
[osd.1] 
host = ha1 
devs = /dev/disk/by-label/osd1 

Initialize the OSD's directory: 
ceph-osd -i {osd-num} --mkfs --mkkey 
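Make sure the new filesystem is mounted at the OSD's data directory before 
running this, or --mkfs will initialize the empty mountpoint on the root 
filesystem instead. 
Ex. ceph-osd -i 1 --mkfs --mkkey 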

Register the OSD authentication key: 
ceph auth add osd.{osd-num} osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-{osd-num}/keyring 
(--mkkey leaves the keyring in the OSD's data directory, so adjust this path 
if your osd data dir is the /srv/ceph/osd[id] mount point rather than the 
default.) 

Add the OSD to the CRUSH map so it can receive data: 
ceph osd crush set osd.1 1.0 root=default host=ha1 
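The 1.0 is the CRUSH weight. A common convention (not a requirement) is to 
weight each OSD by its capacity in TB, e.g. for a 2 TB drive: 
Ex. ceph osd crush set osd.1 2.0 root=default host=ha1 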

Check to make sure it's added: 
ceph osd tree 

Start up the new osd, and let it sync with the cluster: 
service ceph start osd.1 
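You can watch it join and backfill with: 
Ex. ceph -w 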


Dave Spano 
Optogenics 


----- Original Message -----

From: "Brian Candler" <b.cand...@pobox.com> 
To: "John Nielsen" <li...@jnielsen.net> 
Cc: ceph-users@lists.ceph.com 
Sent: Monday, June 24, 2013 2:04:32 PM 
Subject: Re: [ceph-users] Drive replacement procedure 

On 24/06/2013 18:41, John Nielsen wrote: 
> The official documentation is maybe not 100% idiot-proof, but it is 
> step-by-step: 
> 
> http://ceph.com/docs/master/rados/operations/add-or-rm-osds/ 
> 
> If you lose a disk you want to remove the OSD associated with it. This will 
> trigger a data migration so you are back to full redundancy as soon as it 
> finishes. Whenever you get a replacement disk, you will add an OSD for it 
> (the same as if you were adding an entirely new disk). This will also trigger 
> a data migration so the new disk will be utilized immediately. 
> 
> If you have a spare or replacement disk immediately after a disk goes bad, 
> you could maybe save some data migration by doing the removal and re-adding 
> within a short period of time, but otherwise "drive replacement" looks 
> exactly like retiring an OSD and adding a new one that happens to use the 
> same drive slot. 
That's good, thank you. So I think it's something like this: 

* Remove OSD 
* Unmount filesystem (forcibly if necessary) 
* Replace drive 
* mkfs filesystem 
* mount it on /var/lib/ceph/osd/ceph-{osd-number} 
* Start OSD 

Would you typically reuse the same OSD number? 

One other thing I'm not clear about. At 
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#adding-an-osd-manual 
it says to mkdir the mountpoint, mkfs and mount the filesystem. 

But at 
http://ceph.com/docs/master/start/quick-ceph-deploy/#add-osds-on-standalone-disks 
it says to use "ceph-deploy osd prepare" and "ceph-deploy osd activate", 
or the one-step version 
"ceph-deploy osd create" 

Is ceph-deploy doing the same things? Could I make a shorter 
disk-replacement procedure which uses ceph-deploy? 

Thanks, 

Brian. 

_______________________________________________ 
ceph-users mailing list 
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
