Re: [ceph-users] Drive replacement procedure

2013-06-25 Thread Dave Spano
… Cc: ceph-users@lists.ceph.com Sent: Monday, June 24, 2013 7:41:02 PM Subject: Re: [ceph-users] Drive replacement procedure That's where 'ceph osd set noout' comes in handy. On Jun 24, 2013, at 7:28 PM, Nigel Williams wrote: > On 25/06/2013 5:59 AM, Brian Candler wrote: >

Re: [ceph-users] Drive replacement procedure

2013-06-24 Thread Michael Lowe
That's where 'ceph osd set noout' comes in handy. On Jun 24, 2013, at 7:28 PM, Nigel Williams wrote: > On 25/06/2013 5:59 AM, Brian Candler wrote: >> On 24/06/2013 20:27, Dave Spano wrote: >>> Here's my procedure for manually adding OSDs. > > The other thing I discovered is not to wait between steps; some changes result in a new crushmap, which then triggers replication.
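A minimal sketch of how the noout flag is typically used around a drive swap (the unset step is implied rather than spelled out in the thread):

    # Tell the monitors not to mark any OSD "out", so CRUSH does not
    # start re-replicating data while the drive is being replaced:
    ceph osd set noout

    # ... stop the OSD, swap the drive, bring the OSD back up ...

    # Restore normal behaviour once the OSD has rejoined:
    ceph osd unset noout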

Re: [ceph-users] Drive replacement procedure

2013-06-24 Thread Nigel Williams
On 25/06/2013 5:59 AM, Brian Candler wrote: > On 24/06/2013 20:27, Dave Spano wrote: >> Here's my procedure for manually adding OSDs. The other thing I discovered is not to wait between steps; some changes result in a new crushmap, which then triggers replication. You want to speed through the steps…
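Not from the thread, but a standard way to see the replication Nigel describes is to watch the cluster while you work; both commands below are stock Ceph CLI:

    # One-off summary of cluster health and placement-group state:
    ceph -s

    # Continuous stream of cluster events, useful while a new
    # crushmap is rippling through:
    ceph -w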

Re: [ceph-users] Drive replacement procedure

2013-06-24 Thread Brian Candler
On 24/06/2013 20:27, Dave Spano wrote: > If you remove the failed OSD from the cluster and the crushmap, the cluster will automatically re-assign that number to the new OSD when you run ceph osd create with no arguments. OK - although obviously if you're going to make a disk with a label…
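A sketch of the removal-then-reuse sequence being discussed, with osd.1 standing in for the failed OSD (these are the standard commands from the add-or-rm-osds documentation linked later in the thread):

    # Remove the dead OSD from the crushmap, delete its auth key,
    # and remove it from the cluster:
    ceph osd crush remove osd.1
    ceph auth del osd.1
    ceph osd rm 1

    # With no arguments, this hands back the lowest free OSD number,
    # which is 1 again now that it has been freed:
    ceph osd create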

Re: [ceph-users] Drive replacement procedure

2013-06-24 Thread Dave Spano
…host=ha1

Check to make sure it's added:

    ceph osd tree

Start up the new osd, and let it sync with the cluster:

    service ceph start osd.1

Dave Spano
Optogenics

- Original Message - From: "Brian Candler" To: "John Nielsen" Cc: ceph-users@lists.ceph.com Sent: …
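The preview cuts off the command that ends in host=ha1. A hedged reconstruction of that step, based on the crush-placement syntax Ceph used at the time; the weight and the root bucket name are assumptions, and Dave's actual values are cut off:

    # Place the new OSD in the crushmap under host ha1
    # (weight 1.0 and root=default are assumed, not from the thread):
    ceph osd crush set osd.1 1.0 root=default host=ha1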

Re: [ceph-users] Drive replacement procedure

2013-06-24 Thread Brian Candler
On 24/06/2013 18:41, John Nielsen wrote: > The official documentation is maybe not 100% idiot-proof, but it is step-by-step: http://ceph.com/docs/master/rados/operations/add-or-rm-osds/ > If you lose a disk you want to remove the OSD associated with it. This will trigger a data migration so you…
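For the lost-disk case, the linked documentation has the cluster migrate the data before the OSD is deleted; a minimal sketch of that ordering, again using osd.1 as a stand-in:

    # Mark the OSD out so its placement groups are re-mapped and
    # its data migrates to the surviving OSDs:
    ceph osd out 1

    # Wait for the cluster to report HEALTH_OK before removing
    # the OSD itself:
    ceph health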

Re: [ceph-users] Drive replacement procedure

2013-06-24 Thread John Nielsen
On Jun 24, 2013, at 11:22 AM, Brian Candler wrote: > I'm just finding my way around the Ceph documentation. What I'm hoping to build are servers with 24 data disks and one O/S disk. From what I've read, the recommended configuration is to run 24 separate OSDs (or 23 if I have a separate journal disk/SSD), and not have any sort of in-server RAID.

[ceph-users] Drive replacement procedure

2013-06-24 Thread Brian Candler
I'm just finding my way around the Ceph documentation. What I'm hoping to build are servers with 24 data disks and one O/S disk. From what I've read, the recommended configuration is to run 24 separate OSDs (or 23 if I have a separate journal disk/SSD), and not have any sort of in-server RAID.
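Not shown in the thread, but a minimal ceph.conf sketch of the one-OSD-per-disk layout Brian describes; the hostname and device paths are hypothetical, and the devs option belongs to the mkcephfs-era deployment tooling current in 2013:

    [osd.0]
        host = server1
        devs = /dev/sdb

    [osd.1]
        host = server1
        devs = /dev/sdc

    # ...one [osd.N] stanza per data disk, 24 in total...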