On Friday, May 16, 2014, Ignazio Cassano <ignaziocass...@gmail.com> wrote:

> Hi all, I successfully installed a Ceph cluster (Firefly release) made up
> of 3 OSDs and one monitor host.
> After that I created a pool and one RBD image for KVM.
> It works fine.
> I verified that my pool has a replica size = 3, but I read the default should
> be = 2.
> When I shut down an OSD and mark it out, ceph health reports an
> active+degraded state and stays that way until I add another OSD.
> Is this the correct behaviour?
> Reading the documentation, I understood that the cluster should repair itself
> and return to the active+clean state.
> Is it possible that it remains in the degraded state because I have a replica
> size = 3 and only 2 OSDs?
>
Yep, that's it. You can change the size to 2, if that's really the number of
copies you need:
ceph osd pool set <foo> size 2
IIRC.
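
For reference, a minimal sketch of the full sequence might look like this (the
pool name "foo" is just a placeholder, and the min_size line is an assumption
on my part, only relevant if you also want writes to continue with a single
surviving copy):

# Check the current replication settings for the pool
ceph osd pool get foo size
ceph osd pool get foo min_size

# Allow the PGs to be fully replicated on just two OSDs
ceph osd pool set foo size 2

# Optional: let I/O continue when only one copy is available
ceph osd pool set foo min_size 1

# Watch the cluster return to active+clean
ceph -s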
-Greg

> Sorry for my bad English.
>
> Ignazio
>


-- 
Software Engineer #42 @ http://inktank.com | http://ceph.com
