Hi Tim,
With the current setup you can only handle 1 host failure without losing
any data, BUT everything will probably freeze until you bring the failed
node (or the OSDs in it) back up.
Your setup indicates k=6, m=2, and all 8 shards are distributed across 4 hosts
(2 shards/OSDs per host). A single host failure therefore takes out exactly m=2
shards, so the data stays readable, but a second host failure would remove more
shards than the profile can tolerate. Be aware
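For reference, that kind of placement usually comes from a rule shaped roughly
like this (just a sketch on my side -- the rule name, the id and the "default"
root are assumptions, not taken from your actual map):

    rule ec62_rule {
            id 2                              # arbitrary id for the example
            type erasure
            step set_chooseleaf_tries 5
            step set_choose_tries 100
            step take default                 # assumes your root is "default"
            step choose indep 4 type host     # pick 4 distinct hosts
            step chooseleaf indep 2 type osd  # then 2 OSDs within each host
            step emit
    }

You can double-check what you actually have with "ceph osd crush rule dump"
and "ceph osd erasure-code-profile get <profile>".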
Hey all,
We are trying to get an erasure coding cluster up and running but we are having
a problem getting the cluster to remain up if we lose an OSD host.
Currently we have 6 OSD hosts with 6 OSDs apiece. I'm trying to build an EC
profile and a CRUSH rule that will allow the cluster to con