Re: [ceph-users] add multiple OSDs to cluster

2017-03-22 Thread mj
Hi Jonathan, Anthony and Steve, Thanks very much for your valuable advice and suggestions! MJ On 03/21/2017 08:53 PM, Jonathan Proulx wrote: If it took 7hr for one drive you've probably already done this (or defaults are for low impact recovery) but before doing anything you want to be sure you

Re: [ceph-users] add multiple OSDs to cluster

2017-03-21 Thread Anthony D'Atri
Deploying or removing OSDs in parallel can certainly save elapsed time and avoid moving data more than once. There are certain pitfalls, though, and the strategy needs careful planning. - Deploying a new OSD at full weight means a lot of write operations. Running multiple whole-OSD backfills
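
As a point of reference for the write load mentioned above, one common way operators throttle it is to bring a new OSD into CRUSH at (near) zero weight and raise it in steps. This is only an illustrative sketch, not something recommended in this thread; the OSD id and weights are made up:

    # add/keep the new OSD at zero CRUSH weight so no backfill starts yet
    ceph osd crush reweight osd.12 0.0

    # raise the weight in steps, letting backfill settle between them
    ceph osd crush reweight osd.12 0.3
    ceph -s                                # wait for backfill to finish
    ceph osd crush reweight osd.12 0.91    # final weight for a ~1 TB drive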

Re: [ceph-users] add multiple OSDs to cluster

2017-03-21 Thread Jonathan Proulx
If it took 7hr for one drive you've probably already done this (or defaults are for low impact recovery), but before doing anything you want to be sure your OSD settings (max backfills, max recovery active, recovery sleep, perhaps others?) are set such that recovery and backfilling don't overwhelm pr
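
The settings Jonathan names can be inspected and changed at runtime. A minimal sketch follows; the values are deliberately conservative examples, not figures from this thread, and ceph daemon has to be run on the host that carries osd.0:

    # read the current value from one OSD's admin socket
    ceph daemon osd.0 config get osd_max_backfills

    # throttle backfill/recovery on all OSDs without restarting them
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-sleep 0.1'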

Re: [ceph-users] add multiple OSDs to cluster

2017-03-21 Thread Steve Taylor
Generally speaking, you are correct. Adding more OSDs at once is more efficient than adding fewer at a time. That being said, do so carefully. We typically add OSDs to our clusters either 32 or 64 at once, and we have had issues on occasion with bad drives. It's common for us to have a drive or tw
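
One way to add a large batch while only moving data once is to pause rebalancing until every new OSD in the batch is up, then release it so backfill targets the final CRUSH map. A sketch under that assumption; the flag choice here is not taken from Steve's message:

    # pause data movement while the batch of OSDs is being created
    ceph osd set norebalance
    ceph osd set nobackfill

    # ... create and start the new OSDs here (ceph-deploy / ceph-disk) ...

    # with all new OSDs up and in, let backfill run against the final map
    ceph osd unset nobackfill
    ceph osd unset norebalance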

[ceph-users] add multiple OSDs to cluster

2017-03-21 Thread mj
Hi, Just a quick question about adding OSDs, since most of the docs I can find talk about adding ONE OSD, and I'd like to add four per server on my three-node cluster. This morning I tried the careful approach, and added one OSD to server1. It all went fine, everything rebuilt and I have a H
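
For the "four per server" case, new OSDs can also be made to join CRUSH at weight 0 via ceph.conf, so nothing rebalances until they are reweighted deliberately. A minimal sketch, with the OSD id and final weight purely illustrative:

    # ceph.conf on the nodes where the new OSDs will be created
    [osd]
    osd crush initial weight = 0

    # later, once all new OSDs are up, give each its real weight
    ceph osd crush reweight osd.12 0.91    # id and weight are examples only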