Hello @all,

given the following config:

     * ceph.conf:

       ...
       mon osd down out subtree limit = host
       osd_pool_default_size = 3
       osd_pool_default_min_size = 2
       ...

     * each OSD has its journal on a 30GB partition on a PCIe-Flash-Card
     * 3 hosts
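With size = 3 and min_size = 2 on 3 hosts, the availability question can be sketched as a toy model (a hypothetical helper, assuming CRUSH places at most one replica per host, i.e. failure domain = host):

```python
def pgs_active(pool_size, min_size, hosts_up):
    """Toy model: a PG stays active for I/O as long as the number of
    surviving replicas is at least min_size.

    Assumes one replica per host (CRUSH failure domain = host), so the
    number of surviving replicas is capped by the number of hosts up.
    """
    surviving_replicas = min(pool_size, hosts_up)
    return surviving_replicas >= min_size

# 3 hosts, size=3, min_size=2: one host down leaves 2 replicas -> active
print(pgs_active(pool_size=3, min_size=2, hosts_up=2))  # True
# two hosts down leaves 1 replica < min_size -> I/O would block
print(pgs_active(pool_size=3, min_size=2, hosts_up=1))  # False
```

This is only the replica-count arithmetic, not a statement about how recovery itself works, which is what the questions below are about.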

What would happen if one host goes down? I mean, is there a limit on how long this host/its OSDs can stay down? How does Ceph detect the differences between OSDs within a placement group? Is there a binary log (which could run out of space) in the journal/monitor, or will it simply copy every object in the PGs that had unavailable OSDs?

Thanks in advance,
Dennis
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com