[ceph-users] Re: Ceph EC K+M

2022-02-21 Thread Eugen Block
The customer's requirement was to sustain the loss of one of two datacenters and two additional hosts. The crush failure domain is "host". There are 10 hosts in each DC, so we put 9 chunks in each DC to be able to recover completely if one host fails. This worked quite nicely already, they
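The survivability arithmetic behind that layout can be sketched quickly. This is not from the thread itself, just an illustration assuming 18 chunks total (k=7, m=11, mentioned later in the thread), one chunk per host, and 9 chunks pinned to each DC:

```python
def surviving_chunks(k, m, chunks_per_dc, dcs_lost, extra_hosts_lost):
    """Chunks left after failures; one chunk per host (failure domain = host)."""
    total = k + m
    lost = dcs_lost * chunks_per_dc + extra_hosts_lost
    return total - lost

def data_readable(k, m, chunks_per_dc, dcs_lost, extra_hosts_lost):
    # Data stays readable as long as at least k chunks survive.
    return surviving_chunks(k, m, chunks_per_dc, dcs_lost, extra_hosts_lost) >= k

# One DC down (9 chunks gone) plus two additional host failures:
print(data_readable(7, 11, 9, 1, 2))  # 18 - 9 - 2 = 7 chunks left, exactly k
```

With these numbers the requirement is met exactly: a third extra host failure would drop the pool below k surviving chunks.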

[ceph-users] Re: Ceph EC K+M

2022-02-21 Thread Eugen Block
Hi, it really depends on the resiliency requirements and the use case. We have a couple of customers with EC profiles like k=7 m=11. The potential waste of space as Anthony already mentions has to be considered, of course. But with regards to performance we haven't heard any complaints yet
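The space cost of such a profile follows directly from k and m: each k data chunks carry m coding chunks, so raw usage is (k+m)/k times the logical data. A quick sketch of that arithmetic:

```python
def ec_overhead(k, m):
    """Raw-to-logical storage ratio for an EC pool with k data and m coding chunks."""
    return (k + m) / k

# k=7, m=11 stores 18 chunks for every 7 chunks of data:
print(f"{ec_overhead(7, 11):.2f}x raw per logical byte")  # ~2.57x
# For comparison, 3-way replication costs 3.00x and k=8 m=3 costs ~1.38x.
```

So k=7 m=11 sits between common EC profiles and plain 3-way replication in space efficiency, trading capacity for the unusual failure tolerance described above.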

[ceph-users] Re: Ceph EC K+M

2022-02-18 Thread Anthony D'Atri
A couple of years ago someone on the list wrote: >> 3) k should only have small prime factors, power of 2 if possible >> >> I tested k=5,6,8,10,12. Best results in decreasing order: k=8, k=6. All >> other choices were poor. The value of m seems not relevant for performance. >> Larger k
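One commonly cited reason for preferring a power-of-two k, and a possible explanation for the results quoted above, is chunk alignment: Ceph splits each object into k data chunks, and only some values of k divide typical power-of-two object sizes into whole, page-aligned chunks. A hypothetical illustration (not from the thread) for a 4 MiB object:

```python
OBJECT = 4 * 1024 * 1024  # 4 MiB, a common RADOS object size

for k in (5, 6, 8, 10, 12):
    chunk = OBJECT / k
    # A chunk is "clean" if it is a whole number of bytes and 4 KiB-aligned.
    aligned = chunk.is_integer() and int(chunk) % 4096 == 0
    print(f"k={k:2d}: chunk={chunk:>10.1f} bytes, 4 KiB-aligned={aligned}")
```

Only k=8 yields whole 4 KiB-aligned chunks here, which fits the quoted observation that k=8 performed best; alignment alone does not explain why k=6 beat k=5 or k=10, so small prime factors likely matter for other reasons as well.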