Re: [ceph-users] Ceph Not getting into a clean state

2014-05-12 Thread Mark Kirkwood
The default CRUSH placement rules want to put replica pgs on different hosts - so with pools of size 3 you need at least 3 hosts. You can get around this by editing your CRUSH rules to put replica pgs on different OSDs instead - but clearly redundancy is impacted in that case - best to have at least as many hosts as replicas.
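
A rough sketch of the CRUSH edit described above (the file names are illustrative; the crushtool round-trip itself is the standard workflow):

# ceph osd getcrushmap -o crushmap.bin
# crushtool -d crushmap.bin -o crushmap.txt
  (in crushmap.txt, change the replicated rule's placement step from
   "step chooseleaf firstn 0 type host" to "step chooseleaf firstn 0 type osd")
# crushtool -c crushmap.txt -o crushmap.new
# ceph osd setcrushmap -i crushmap.new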

Re: [ceph-users] Ceph Not getting into a clean state

2014-05-12 Thread Georg Höllrigl
Thank you so much! That seems to work immediately. ATM I still see 3 pgs in active+clean+scrubbing state - but that will hopefully fix itself over time. So the way to go with firefly is to either use at least 3 hosts for OSDs - or reduce the number of replicas? Kind Regards, Georg
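
Reducing the replica count would look something like the following; the pool names assume the firefly-era defaults (data, metadata, rbd) - check 'ceph osd lspools' for the actual list:

# for pool in data metadata rbd; do
>   ceph osd pool set $pool size 2
>   ceph osd pool set $pool min_size 1
> done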

Re: [ceph-users] Ceph Not getting into a clean state

2014-05-09 Thread Martin B Nielsen
Hi, I experienced exactly the same with 14.04 and the 0.79 release. It was a fresh, clean install with the default crushmap and a ceph-deploy install as per the quick-start guide. Oddly enough, changing the replica size (incl. min_size) from 3 -> 2 (and 2 -> 1) and back again made it work. I didn't have time to look into it further.

Re: [ceph-users] Ceph Not getting into a clean state

2014-05-09 Thread Mark Kirkwood
Right, I've run into the situation where the system seems reluctant to reorganise after changing all the pool sizes - until the osds are restarted (essentially I just rebooted each host in turn), *then* the health went to OK. This was a while ago (pre 0.72), so something else may be going on with firefly.
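
On Ubuntu 14.04 the OSDs can also be restarted per host without a full reboot - assuming the stock upstart jobs shipped with the ceph packages are in place:

# restart ceph-osd-all
  (or one daemon at a time: # restart ceph-osd id=2)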

Re: [ceph-users] Ceph Not getting into a clean state

2014-05-08 Thread Georg Höllrigl
Hello, I've already thought about that - but even after changing the replication level (size) I'm not getting a clean cluster (there are only the default pools ATM):

root@ceph-m-02:~# ceph -s
    cluster b04fc583-9e71-48b7-a741-92f4dff4cfef
     health HEALTH_WARN 232 pgs stuck unclean; recovery ...
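
To see exactly which pgs are stuck behind that warning, the standard CLI can list them individually:

# ceph pg dump_stuck unclean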

Re: [ceph-users] Ceph Not getting into a clean state

2014-05-08 Thread Mark Kirkwood
So that's two hosts - if this is a new cluster, chances are the pools have replication size=3, and won't place replica pgs on the same host. 'ceph osd dump' will let you know if this is the case. If it is, either reduce size to 2, add another host, or edit your crush rules to allow replica pgs on the same host.
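
The check being suggested, as a one-liner (the grep pattern matches the per-pool lines of the dump output):

# ceph osd dump | grep 'replicated size'
  (each pool line reports its current "replicated size N min_size M" settings)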

Re: [ceph-users] Ceph Not getting into a clean state

2014-05-08 Thread Georg Höllrigl
# ceph osd tree
# id    weight  type name       up/down reweight
-1      76.47   root default
-2      32.72           host ceph-s-01
0       7.27                    osd.0   up      1
1       7.27                    osd.1   up      1
2       9.09                    osd.2   up      1
3       9.09                    osd.3   ...

Re: [ceph-users] Ceph Not getting into a clean state

2014-05-08 Thread Craig Lewis
What does `ceph osd tree` output? On 5/8/14 07:30, Georg Höllrigl wrote: Hello, We have a fresh cluster setup - with Ubuntu 14.04 and ceph firefly. By now I've tried this multiple times - but the result stays the same and shows me lots of trouble (the cluster is empty, no client has accessed it) ...

[ceph-users] Ceph Not getting into a clean state

2014-05-08 Thread Georg Höllrigl
Hello, We have a fresh cluster setup - with Ubuntu 14.04 and ceph firefly. By now I've tried this multiple times - but the result stays the same and shows me lots of trouble (the cluster is empty, no client has accessed it):

# ceph -s
    cluster b04fc583-9e71-48b7-a741-92f4dff4cfef
     health ...
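
A quick way past the truncated status above is the detailed health output, which lists every stuck pg individually (a standard command, not part of the original mail):

# ceph health detail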