> 
> Maybe the weakest point in that configuration is having only 2 OSDs per 
> node; the nearfull ratio must be tuned accordingly, so that no OSD goes 
> beyond about 0.45. Then, if one disk fails, the surviving OSD in the node 
> still has enough free space to absorb the healing replication.
> 
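To put numbers on that suggestion, here is a rough sketch of how the ratios 
could be capped (the 0.45 value comes from the advice above; the other two 
commands just restate the Ceph defaults for comparison):

    # Warn before either OSD in a node passes ~45% utilization, leaving
    # headroom for the surviving OSD to absorb its sibling's data
    # (0.45 + 0.45 = 0.90, still below the full ratio).
    ceph osd set-nearfull-ratio 0.45

    # Defaults, for reference: backfill stops at 0.90, client I/O at 0.95.
    ceph osd set-backfillfull-ratio 0.90
    ceph osd set-full-ratio 0.95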

A careful setting of mon_osd_down_out_subtree_limit can help when an entire 
node goes down, though as you and others have noted, this topology has other 
challenges.
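
For instance (assuming a release with the centralized config store, i.e. 
ceph config set), lowering the limit from its default of "rack" to "host" 
keeps the monitors from automatically marking out every OSD of a downed 
node, so a node reboot no longer triggers a full re-replication of its data:

    # Do not auto-mark-out OSDs when an entire host (or anything larger
    # in the CRUSH hierarchy) goes down at once; the default is "rack".
    ceph config set mon mon_osd_down_out_subtree_limit host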
