I would like to know more about those corner cases and why this approach is not recommended, because our customers and we ourselves have been using such profiles for years, including multiple occasions when one of two DCs failed with a k=7, m=11 profile. They were quite happy with the resiliency Ceph provided.
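For context, the arithmetic behind that resiliency is simple enough to sketch. Below is a minimal example, assuming the 18 shards are spread evenly across the two DCs and min_size is left at the usual EC default of k+1 (both are assumptions about our setup, not something stated above):

  # Sketch: why a k=7, m=11 EC profile can survive the loss of one of two DCs.
  # Assumptions: shards placed evenly across two data centers, min_size = k+1.

  k, m = 7, 11          # data shards, coding shards
  total = k + m         # 18 shards per object
  per_dc = total // 2   # 9 shards placed in each DC
  min_size = k + 1      # assumed default min_size for EC pools

  shards_lost = per_dc               # losing one DC loses its 9 shards
  shards_left = total - shards_lost  # 9 shards remain

  print(f"shards remaining after a DC failure: {shards_left}")
  print(f"data still reconstructable (>= k): {shards_left >= k}")
  print(f"pool still accepts I/O (>= min_size): {shards_left >= min_size}")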

Quote from Anthony D'Atri <a...@dreamsnake.net>:

Just repeating what I read.  I suspect that the effect is minimal.

Back when I did a lot of ZFS, the conventional wisdom was that a given parity group should not have more than 9 drives, to keep rebuilds and writes semi-manageable.

A few years back someone asserted that EC values with small prime factors are advantageous, so k=23, m=11 would be doubleplus ungood.

I thought it was that K should preferably be a power of two, and M as many
as your durability requirements demand.
Also, pools should have power-of-two PG counts, and bucket shard counts should be prime.

I could be wrong though.
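As an aside, those rules of thumb are trivial to sanity-check when sizing a pool; a minimal sketch, with purely hypothetical example values:

  # Sketch of the rules of thumb quoted above (example values are hypothetical):
  # K a power of two, pg_num a power of two, bucket shard count a prime.

  def is_power_of_two(n: int) -> bool:
      return n > 0 and (n & (n - 1)) == 0

  def is_prime(n: int) -> bool:
      if n < 2:
          return False
      return all(n % d for d in range(2, int(n ** 0.5) + 1))

  k, m = 8, 3          # hypothetical EC profile
  pg_num = 128         # hypothetical pool PG count
  bucket_shards = 101  # hypothetical RGW bucket shard count

  print(f"k={k} power of two: {is_power_of_two(k)}")
  print(f"pg_num={pg_num} power of two: {is_power_of_two(pg_num)}")
  print(f"bucket shards={bucket_shards} prime: {is_prime(bucket_shards)}")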

--
May the most significant bit of your life be positive.


_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
