[ceph-users] Adding storage to existing clusters with minimal impact
Here's my possibly unique method. I had 3 nodes with 12 disks each, and
when adding 2 more nodes the common method you describe gave me trouble
(it totally blocked clients for minutes), but this worked great for me:
> my own method
> - osd max backfills = 1 and osd recovery max active = 1
> - cr
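
Not the poster's exact commands, but those two throttles are typically
applied either at runtime with injectargs or persistently in ceph.conf; a
rough sketch, using the values from the list above:

  # Runtime, pushed to all OSDs at once:
  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

  # Or persistently, in ceph.conf on the OSD hosts:
  [osd]
  osd max backfills = 1
  osd recovery max active = 1
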
On Thu, Jul 6, 2017 at 9:18 AM, Gregory Farnum wrote:
> On Thu, Jul 6, 2017 at 7:04 AM wrote:
Hi Ceph Users,
We plan to add 20 storage nodes to our existing cluster of 40 nodes; each
node has 36 x 5.458 TiB drives. We plan to add the storage such that all new
OSDs are prepared, activated and ready to take data, but taking none until we
start slowly increasing their weightings. We also expect thi
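
The staged approach described above is usually done by adding each new OSD
to CRUSH at weight 0 and then raising its weight in small steps, for example
(the osd id, host bucket and step size below are placeholders, not values
from this cluster):

  # New OSD joins CRUSH at weight 0, so it holds no data yet:
  ceph osd crush add osd.1440 0 host=node41

  # Raise the weight a little at a time, waiting for backfill to settle
  # (HEALTH_OK) before each step; final weight ~5.458 for these drives:
  ceph osd crush reweight osd.1440 0.5
  ceph osd crush reweight osd.1440 1.0
  ...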