Hi!

My first question is about the monitor data directory. How much space do I need 
to reserve for it? Can the monitor's store be corrupted if the monitor runs out 
of storage space? 
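
For context, this is the directory I mean; as I understand it, its location is set 
by the `mon data` option in ceph.conf (the path below is the usual default, not 
something I have customized):

```
[mon]
    ; default monitor store location; substitute your cluster and mon id
    mon data = /var/lib/ceph/mon/$cluster-$id
```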

I also have questions about Ceph's auto-recovery process.
For example, I have two nodes with 8 drives each, and each drive is presented as 
a separate OSD. The number of replicas = 2. I have written a CRUSH ruleset that 
picks two nodes and one OSD on each to store the replicas. What will happen in 
the following scenarios:

1. One drive in one node fails. Will Ceph automatically re-replicate the 
affected objects? Where will the replicas be stored?

1.1 The failed OSD comes back online with all of its data. How will the cluster 
handle it?

2. One node (with 8 OSDs) goes offline. Will Ceph automatically replicate all 
objects on the remaining node to maintain the number of replicas = 2?

2.1 The failed node comes back online with all its data. How will the cluster 
handle it?
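
For reference, my ruleset looks roughly like this (a sketch in decompiled 
CRUSH map syntax; the rule name, ruleset number, and the root bucket name 
`default` are just placeholders for my actual map):

```
rule two_node_replica {
    ruleset 1
    type replicated
    min_size 2
    max_size 2
    # start from the root of the hierarchy
    step take default
    # pick N distinct hosts (N = pool size) and one OSD under each
    step chooseleaf firstn 0 type host
    step emit
}
```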

Thanks in advance,
  Pavel.



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
