> So, I need to know what the data safety level will be with the above
> set-up (i.e. 6 OSDs with 4X2 EC). How many OSD (disk) and node
> failures can the above set-up withstand?

With EC N+2 you can lose one drive or host, and the cluster will carry
on in degraded mode until it has been able to recreate the missing data
on another OSD. If you lose two drives or hosts, I believe the EC pool
will go read-only, again until it has rebuilt copies elsewhere.
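
For reference, a 4+2 profile with host as the failure domain would be
set up roughly like this (the profile and pool names "ec42" and
"ecpool" are just examples; min_size is what decides how many shards
must stay up for the pool to keep serving I/O, and on recent releases
it defaults to K+1):

  ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
  ceph osd pool create ecpool 32 32 erasure ec42
  ceph osd pool get ecpool min_size    # expect 5 (= K+1) by default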

Still, if you have EC 4+2 and only 6 OSD hosts, this means that if a
host dies, the cluster cannot recreate the missing data anywhere
without violating the default "one shard per host" placement, so the
cluster will stay degraded until that host comes back or another one
replaces it. For an N+M EC cluster, I would suggest having N+M+1 or
even N+M+2 hosts, so that you can do maintenance on a host, or lose a
host, and still be able to recover without visiting the server room.
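
If you want to double-check what your cluster is actually configured
to do, something like this should show the failure domain of the
profile and how the OSDs map onto hosts (again, "ec42" is just an
example profile name):

  ceph osd erasure-code-profile get ec42   # look for crush-failure-domain=host
  ceph osd tree                            # shows which OSDs sit on which hosts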

-- 
May the most significant bit of your life be positive.