Re: [ceph-users] Adding storage to existing clusters with minimal impact

2017-07-10 Thread bruno.canning
Here's my possibly unique method... I had 3 nodes with 12 disks each, and when adding 2 more nodes, I had issues with the common method you describe, totally blocking clients for minutes, but this worked great for me: > my own method …

Re: [ceph-users] Adding storage to existing clusters with minimal impact

2017-07-06 Thread Peter Maloney
Here's my possibly unique method... I had 3 nodes with 12 disks each, and when adding 2 more nodes, I had issues with the common method you describe, totally blocking clients for minutes, but this worked great for me: > my own method > - osd max backfills = 1 and osd recovery max active = 1 > - cr…
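
The two throttles Peter lists can be injected into running OSDs without a restart. Below is a minimal sketch, assuming an admin node with the ceph CLI and keyring available; the values are the ones quoted above, and the helper name is only for illustration.

#!/usr/bin/env python3
"""Sketch: apply the backfill/recovery throttles quoted above at runtime.
Assumes the ceph CLI and an admin keyring are available on this node;
injectargs changes the values on running OSDs without a restart."""
import subprocess

# Settings from the thread: one backfill and one active recovery op per OSD.
THROTTLES = "--osd-max-backfills 1 --osd-recovery-max-active 1"

def apply_throttles():
    # Broadcast the settings to every running OSD daemon.
    subprocess.run(["ceph", "tell", "osd.*", "injectargs", THROTTLES], check=True)

if __name__ == "__main__":
    apply_throttles()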

Re: [ceph-users] Adding storage to existing clusters with minimal impact

2017-07-06 Thread Brian Andrus
On Thu, Jul 6, 2017 at 9:18 AM, Gregory Farnum wrote: > On Thu, Jul 6, 2017 at 7:04 AM wrote: > >> Hi Ceph Users, >> >> We plan to add 20 storage nodes to our existing cluster of 40 nodes, each >> node has 36 x 5.458 TiB drives. We plan to add the storage such that all >> new OSDs are prepared …

Re: [ceph-users] Adding storage to existing clusters with minimal impact

2017-07-06 Thread Gregory Farnum
On Thu, Jul 6, 2017 at 7:04 AM wrote: > Hi Ceph Users, > > We plan to add 20 storage nodes to our existing cluster of 40 nodes, each > node has 36 x 5.458 TiB drives. We plan to add the storage such that all > new OSDs are prepared, activated and ready to take data but not until we > start slowly …
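
For the "prepared and activated but holding no data" part of the quoted plan, one common approach (not necessarily the one the posters used) is to park the new OSDs at CRUSH weight 0 until the operator is ready to ramp them up. A sketch, assuming the ceph CLI and placeholder OSD IDs:

#!/usr/bin/env python3
"""Sketch: keep freshly activated OSDs at CRUSH weight 0 so they map no
placement groups until the operator starts raising their weights.
Assumes the ceph CLI; the OSD IDs are placeholders for the new drives."""
import subprocess

NEW_OSD_IDS = [120, 121, 122]  # hypothetical IDs of the just-activated OSDs

def park_new_osds(osd_ids):
    for osd_id in osd_ids:
        # CRUSH weight 0: the OSD is up and in, but receives no data.
        subprocess.run(
            ["ceph", "osd", "crush", "reweight", "osd.%d" % osd_id, "0"],
            check=True)

if __name__ == "__main__":
    park_new_osds(NEW_OSD_IDS)

An alternative is to set osd crush initial weight = 0 in ceph.conf before preparing the drives, so the new OSDs are created with zero weight in the first place.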

[ceph-users] Adding storage to existing clusters with minimal impact

2017-07-06 Thread bruno.canning
Hi Ceph Users, We plan to add 20 storage nodes to our existing cluster of 40 nodes, each node has 36 x 5.458 TiB drives. We plan to add the storage such that all new OSDs are prepared, activated and ready to take data but not until we start slowly increasing their weightings. We also expect this …
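
As an illustration of the "slowly increasing their weightings" step, here is a rough sketch (not taken from the thread) that raises each new OSD's CRUSH weight in small increments and waits for the cluster to report HEALTH_OK before the next step. The 5.458 target matches the per-drive size mentioned above; the OSD IDs, step size and poll interval are assumptions.

#!/usr/bin/env python3
"""Sketch: ramp new OSDs up to their full CRUSH weight in small steps,
waiting for the cluster to settle after each step. Assumes the ceph CLI;
the OSD IDs, step size and poll interval are illustrative assumptions."""
import json
import subprocess
import time

NEW_OSD_IDS = [120, 121, 122]   # hypothetical IDs of the new OSDs
TARGET_WEIGHT = 5.458           # matches the per-drive size in the original post
STEP = 0.5                      # arbitrary increment per round
POLL_SECONDS = 60

def health_ok():
    out = subprocess.run(["ceph", "health", "--format", "json"],
                         stdout=subprocess.PIPE, check=True).stdout
    health = json.loads(out.decode())
    # Older releases report "overall_status", newer ones "status".
    return "HEALTH_OK" in (health.get("status"), health.get("overall_status"))

def ramp_up():
    weight = 0.0
    while weight < TARGET_WEIGHT:
        weight = min(weight + STEP, TARGET_WEIGHT)
        for osd_id in NEW_OSD_IDS:
            subprocess.run(["ceph", "osd", "crush", "reweight",
                            "osd.%d" % osd_id, str(weight)], check=True)
        # Let the backfill triggered by this step finish before continuing.
        while not health_ok():
            time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    ramp_up()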