Hi David,

Thank you for your response.

The failure domain for the EC profile is 'host', so I guess it is okay to add
the node and activate all 5 of its disks at once?

$ ceph osd erasure-code-profile get profile5by3
crush-device-class=
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=5
m=3
plugin=jerasure
technique=reed_sol_van
w=8
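
As a sanity check before adding the node (a rough sketch on my end, not actual
output from the cluster), the CRUSH tree should already show at least
k+m = 8 host buckets:

$ ceph osd tree | grep -c ' host '   # number of host buckets; k=5, m=3 needs at least 8
$ ceph osd pool ls detail            # per-pool size/min_size and the erasure profile in use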


Karun Josy

On Sun, Dec 17, 2017 at 11:26 PM, David Turner <drakonst...@gmail.com>
wrote:

> I like to avoid adding disks from more than 1 failure domain at a time in
> case some of the new disks are bad. In your example of only adding 1 new
> node, I would say that adding all of the disks at the same time is the
> better way to do it.
>
> Adding only 1 disk in the new node at a time would actually be worse for the
> cluster's balance, since the new node would have only 1 disk while the rest
> have 5 or more.
>
> The EC profile shouldn't come into play here, as you already have enough
> hosts to satisfy it.
>
> On Sun, Dec 17, 2017, 11:57 AM Karun Josy <karunjo...@gmail.com> wrote:
>
>> Hi,
>>
>> We have a live cluster with 8 OSD nodes, each with 5-6 disks.
>>
>> We would like to add a new host and expand the cluster.
>>
>> We have 4 pools
>> - 3 replicated pools with replication factors of 5 and 3
>> - 1 erasure coded pool with k=5, m=3
>>
>> So my concern is: are there any precautions needed when adding the new
>> host, since the EC profile is 5+3?
>>
>> And can we add multiple disks in the new host at the same time? Or should
>> it be 1 at a time?
>>
>>
>>
>> Karun
>>
>
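
Following David's advice above about adding all of the new node's disks at
once, here is a rough sketch of how that could be done while keeping backfill
impact low (a hedged example only, not commands from this thread; the values
are placeholders):

$ ceph osd set norebalance        # hold rebalancing while the new OSDs come up
$ ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
  (create and activate the 5 OSDs on the new host with the usual deployment tooling)
$ ceph osd tree                   # confirm the new host bucket and its 5 OSDs appear
$ ceph osd unset norebalance      # allow data to migrate to the new node
$ ceph -s                         # watch backfill until the cluster is HEALTH_OK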
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
