Hello Caspar

That makes a great deal of sense, thank you for elaborating. Am I correct to
assume that a k=2, m=2 profile would be identical to a replicated pool, since
there would be an equal number of data and parity chunks? Furthermore, how
should the proper erasure-code profile be determined? Should we strive for as
high a data-chunk value (k) as possible and a low parity/coding value (m)?

Kind regards
Ziggy Maes
DevOps Engineer
CELL +32 478 644 354
SKYPE Ziggy.Maes
www.be-mobile.com


From: Caspar Smit <caspars...@supernas.eu>
Date: Friday, 20 July 2018 at 14:15
To: Ziggy Maes <ziggy.m...@be-mobile.com>
Cc: "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Default erasure code profile and sustaining loss of 
one host containing 4 OSDs

Ziggy,

For EC pools: min_size = k+1

So in your case (k=2, m=1), min_size is 3, which is the same as the number of
shards (k+m = 3). So if ANY shard goes down, IO is frozen.

If you choose m=2, min_size will still be 3, but you now have 4 shards
(k+m = 4), so you can lose one shard and still remain available.

Of course, a failure domain of 'host' is required for this, but since you have
6 hosts that should be fine.
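
As a minimal sketch of how that could look on the command line (the profile
name, pool name and PG count below are just placeholders):

  # Create a k=2, m=2 profile with a per-host failure domain
  ceph osd erasure-code-profile set ec-2-2 k=2 m=2 crush-failure-domain=host

  # Create an EC pool using that profile
  ceph osd pool create ecpool 128 128 erasure ec-2-2

  # Verify: min_size should report 3 (k+1)
  ceph osd pool get ecpool min_size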

Kind regards,

Caspar Smit
System Engineer
SuperNAS
Dorsvlegelstraat 13
1445 PA Purmerend

t: (+31) 299 410 414
e: caspars...@supernas.eu
w: www.supernas.eu

