Re: [ceph-users] osd out

2015-08-12 Thread chmind
Yeah. You are right. Thank you.

> On Aug 12, 2015, at 19:53, GuangYang wrote:
>
> If you are using the default configuration to create the pool (3 replicas),
> after losing 1 OSD and having 2 left, CRUSH would not be able to find enough
> OSDs (at least 3) to map the PG thus it would stuck at
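For anyone reading this later: the behaviour GuangYang describes follows from the pool's replication settings, which you can inspect and (if you really intend to run with fewer OSDs) adjust. A minimal sketch, assuming a pool named "rbd" (the pool name is only an example, substitute your own):

# ceph osd pool get rbd size
# ceph osd pool set rbd size 2
# ceph osd pool set rbd min_size 1

Lowering "size" to match the number of remaining OSDs lets CRUSH map every PG again, at the cost of reduced redundancy.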

[ceph-users] osd out

2015-08-12 Thread chmind
Hello. Could you please help me remove an OSD from the cluster?

# ceph osd tree
ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.02998 root default
-2 0.00999     host ceph1
 0 0.00999         osd.0       up  1.0          1.0
-3 0.00999     host ceph2
 1 0.00999         osd.1
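A sketch of the usual removal sequence, assuming osd.1 is the OSD being retired (adapt the service command to your init system; check the Ceph documentation for your release):

# ceph osd out osd.1
(wait for rebalancing to finish, i.e. the cluster returns to HEALTH_OK)
# service ceph stop osd.1
# ceph osd crush remove osd.1
# ceph auth del osd.1
# ceph osd rm osd.1

Note that on a 3-OSD cluster with 3-replica pools, taking one OSD out leaves CRUSH unable to place all replicas, which is exactly the stuck-PG situation discussed in the reply above.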