Hi!
> 2. One node (with 8 osds) goes offline. Will Ceph automatically replicate
> all objects on the remaining node to maintain the number of replicas = 2?
> No, because it can no longer satisfy your CRUSH rules. Your CRUSH rule
> states one copy per node and it will keep it that way. The cluster will
> just run degraded until the node comes back.
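
For reference, a replicated CRUSH rule that places each copy on a different
host looks roughly like this in the decompiled CRUSH map (the rule name and
numbers here are illustrative, not taken from Pavel's cluster):

    rule replicated_per_host {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            # pick one OSD from each distinct host; with size=2 and only
            # two hosts, losing a host leaves nowhere valid for the second
            # copy, so the PGs stay degraded instead of re-replicating
            step chooseleaf firstn 0 type host
            step emit
    }

If the rule instead did "step chooseleaf firstn 0 type osd", the second copy
could be recreated on another OSD of the surviving node, but then both
replicas may end up on the same machine.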
Hi Pavel,
Will try and answer some of your questions:
> My first question is about the monitor data directory. How much space do I
> need to reserve for it? Can the monitor-fs be corrupted if the monitor runs
> out of storage space?
>
We have about 20GB partitions for monitors - they really don't use much
space.
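
For what it's worth, you can check how much the mon store actually uses and
tune the monitor's own free-space checks; the path below is the default
location and the values are the documented defaults, so adjust for your
deployment:

    # size of the monitor's data store (default path, id = short hostname)
    du -sh /var/lib/ceph/mon/ceph-$(hostname -s)

    # ceph.conf: the monitor raises a health warning when free space on its
    # data partition drops below mon_data_avail_warn percent, and a health
    # error below mon_data_avail_crit percent
    [mon]
    mon data avail warn = 30
    mon data avail crit = 5
    mon compact on start = true   # compact the leveldb store at startup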
Hi!
My first question is about the monitor data directory. How much space do I
need to reserve for it? Can the monitor-fs be corrupted if the monitor runs
out of storage space?
I also have questions about the Ceph auto-recovery process.
For example, I have two nodes with 8 drives in each; each drive is presented
as a separate OSD.
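
For a layout like that, the replica count is a per-pool setting rather than
anything per-node; a minimal sketch of the relevant commands, assuming a pool
simply named "data" (the pool name is only an example):

    # with one OSD per drive, each host should show 8 osds in the tree
    ceph osd tree

    # keep two copies of every object, but keep serving I/O while only one
    # copy is reachable
    ceph osd pool set data size 2
    ceph osd pool set data min_size 1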