Hi David,
Thank you for your response.
The failure domain for the EC profile is 'host', so I guess it is okay to add
a node and activate 5 disks at a time?
$ ceph osd erasure-code-profile get profile5by3
crush-device-class=
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
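Just to sanity-check that on our side, this is roughly how I am verifying
that the rule behind the EC pool really separates chunks by host (the pool
and rule names below are placeholders for ours):
$ ceph osd pool get <ec-pool> crush_rule      # which CRUSH rule the pool uses
$ ceph osd crush rule dump <rule-name>        # the chooseleaf step should show "type": "host"
$ ceph osd tree                               # each node should appear as its own host bucket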
I like to avoid adding disks from more than 1 failure domain at a time in
case some of the new disks are bad. In your example of only adding 1 new
node, I would say that adding all of the disks at the same time is the
better way to do it.
Adding only 1 disk in the new node at a time would actually cause more data
movement overall, because the cluster would rebalance again after each disk
is added.
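If it helps, the rough sequence I would use to bring all of the new disks in
at once looks something like this (the OSD creation step depends on your
deployment tooling, so it is only sketched here):
$ ceph osd set norebalance       # hold off data movement while the OSDs are created
$ ceph osd set nobackfill
  ... create all of the OSDs on the new host with your usual tooling ...
$ ceph osd unset nobackfill      # let backfill start once every new OSD is up and in
$ ceph osd unset norebalance
$ ceph -s                        # watch the recovery/backfill progress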
Hi,
We have a live cluster with 8 OSD nodes all having 5-6 disks each.
We would like to add a new host and expand the cluster.
We have 4 pools
- 3 replicated pools with replication factors of 5 and 3
- 1 erasure coded pool with k=5, m=3
So my concern is: are there any precautions that need to be taken when adding
the new node and its OSDs?
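In case it is relevant, the only knobs I had in mind so far are the usual
backfill limits, along these lines (the values are just examples, not
recommendations):
$ ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
$ ceph -s      # keep an eye on client latency while PGs backfill to the new host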