Hi all, I successfully installed a Ceph cluster (Firefly release) made up
of 3 OSDs and one monitor host.
After that I created a pool and one RBD image for KVM.
It works fine.
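(Roughly these commands; the pool name, pg count, and image name/size are
just example values I picked, not anything special:)

    # create the pool (pg count chosen for a small 3-OSD cluster)
    ceph osd pool create libvirt-pool 128 128
    # create a 10 GB RBD image in that pool for the KVM guest
    rbd create vm-disk1 --size 10240 --pool libvirt-pool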
I verified that my pool has a replica size of 3, but I read that the default
should be 2.
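(I checked it with something like the following, assuming the pool is named
libvirt-pool:)

    # show the current replication size of the pool
    ceph osd pool get libvirt-pool size
    # if I wanted a replica size of 2 instead, I believe this would set it
    ceph osd pool set libvirt-pool size 2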
When I shut down an OSD and mark it out, ceph health reports an
active+degraded state and stays that way until I add an OSD back.
Is this the correct behaviour?
Reading the documentation, I understood that the cluster should repair itself
and return to an active+clean state.
Is it possible that it remains in a degraded state because I have a replica
size of 3 and only 2 OSDs?
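(For reference, this is roughly how I reproduce it and what I look at,
assuming osd.2 is the OSD I stop; the exact stop command depends on the
init system:)

    # stop the OSD daemon on its host and mark it out of the cluster
    sudo service ceph stop osd.2     # run on the OSD host
    ceph osd out 2
    # then watch the cluster state
    ceph health detail
    ceph -s
    ceph osd tree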

Sorry for my bad English.

Ignazio