Cc: ceph-users@lists.ceph.com
Sent: Monday, June 24, 2013 7:41:02 PM
Subject: Re: [ceph-users] Drive replacement procedure
That's where 'ceph osd set noout' comes in handy.
On Jun 24, 2013, at 7:28 PM, Nigel Williams wrote:
> On 25/06/2013 5:59 AM, Brian Candler wrote:
>> On 24/06/2013 20:27, Dave Spano wrote:
>>> Here's my procedure for manually adding OSDs.
>
> The other thing I discovered is not to wait between steps; some changes
> result in a new crushmap, which then triggers replication.
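For a planned drive swap, noout is usually set for the duration of the work and cleared afterwards. A rough sketch; osd.1 is only an example id, and the sysvinit-style service command is assumed from the one used later in the thread:

ceph osd set noout       # stop the cluster marking down OSDs out and rebalancing
service ceph stop osd.1  # stop the OSD whose drive is being swapped
# ... replace the drive and rebuild the OSD ...
service ceph start osd.1
ceph osd unset noout     # let normal down -> out handling resume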
On 25/06/2013 5:59 AM, Brian Candler wrote:
> On 24/06/2013 20:27, Dave Spano wrote:
>> Here's my procedure for manually adding OSDs.
The other thing I discovered is not to wait between steps; some changes result in a new
crushmap, which then triggers replication. You want to speed through the steps.
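One way to watch the replication a crushmap change kicks off, using standard status commands:

ceph -s        # one-shot summary, including degraded/recovering object counts
ceph -w        # follow cluster events live while the rebalance runs
ceph osd tree  # confirm the new OSD landed where you expect in the crushmap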
On 24/06/2013 20:27, Dave Spano wrote:
> If you remove the OSD after it fails from the cluster and the
> crushmap, the cluster will automatically re-assign that number to the
> new OSD when you run ceph osd create with no arguments.
OK - although obviously if you're going to make a disk with a label
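Dave's point about the number being re-used is easy to see: once the dead entry is gone from the osd map, 'ceph osd create' with no arguments hands back the lowest free id. Sketch, with 1 as an example id:

ceph osd rm 1      # remove the failed OSD's entry from the osd map
ceph osd create    # prints the freed id again (1 in this example)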
host=ha1
Check to make sure it's added.
ceph osd tree
Start up the new osd, and let it sync with the cluster.
service ceph start osd.1
Dave Spano
Optogenics
- Original Message -
From: "Brian Candler"
To: "John Nielsen"
Cc: ceph-users@lists.ceph.com
Sen
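The 'host=ha1' line and the two commands above look like the tail of the usual manual-add sequence from the Ceph docs. A sketch of the whole thing, with id 1 and host ha1 taken from the fragments and the default data path assumed (exact crush syntax varies a little between releases):

ceph osd create                   # returns the new id, e.g. 1
mkdir /var/lib/ceph/osd/ceph-1    # default data dir; mkfs and mount the new drive here first
ceph-osd -i 1 --mkfs --mkkey      # build the OSD's data store and a fresh key
ceph auth add osd.1 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-1/keyring
ceph osd crush add osd.1 1.0 host=ha1    # weight 1.0 is only an example
ceph osd tree                     # check to make sure it's added
service ceph start osd.1          # start it and let it sync with the cluster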
On 24/06/2013 18:41, John Nielsen wrote:
> The official documentation is maybe not 100% idiot-proof, but it is
> step-by-step:
> http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
> If you lose a disk you want to remove the OSD associated with it. This will
> trigger a data migration so you a
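For reference, the removal sequence that page walks through is roughly the following; osd.1 / id 1 is again just an example:

ceph osd out 1                 # starts the data migration off osd.1
# wait for the cluster to go back to active+clean, then:
service ceph stop osd.1
ceph osd crush remove osd.1    # drop it from the crushmap
ceph auth del osd.1            # delete its key
ceph osd rm 1                  # remove it from the osd map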
On Jun 24, 2013, at 11:22 AM, Brian Candler wrote:
> I'm just finding my way around the Ceph documentation. What I'm hoping to
> build are servers with 24 data disks and one O/S disk. From what I've read,
> the recommended configuration is to run 24 separate OSDs (or 23 if I have a
> separate journal disk/SSD), and not have any sort of in-server RAID.
I'm just finding my way around the Ceph documentation. What I'm hoping
to build are servers with 24 data disks and one O/S disk. From what I've
read, the recommended configuration is to run 24 separate OSDs (or 23 if
I have a separate journal disk/SSD), and not have any sort of in-server
RAID.
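With one OSD per data disk, that layout usually just means one short [osd.N] section per disk in ceph.conf, with each disk mounted at the default data path /var/lib/ceph/osd/ceph-$id. A sketch with a made-up hostname; the journal path is a placeholder for wherever the SSD partition ends up:

[osd.0]
    host = store1
    ; osd journal = /dev/disk/by-partlabel/journal-0   ; placeholder, only if using the shared SSD
[osd.1]
    host = store1
; ... one [osd.N] section per data disk, up to osd.23 ...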