Re: [ceph-users] PGs issue

2015-03-20 Thread Robert LeBlanc
I like this idea. I was under the impression that udev did not call the init script, but ceph-disk directly. I don't see ceph-disk calling create-or-move explicitly, but I know it happens because I see it in the output of ceph -w when I boot up OSDs. /lib/udev/rules.d/95-ceph-osd.rules: # activate ceph-tagged partitions …
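For context, a minimal sketch of the chain being discussed, assuming the stock udev rule of that era hands Ceph-tagged partitions to ceph-disk, whose activation path then registers the OSD in the CRUSH map. The rule text, the host name and the weight below are illustrative, not quoted from a particular release:

    # Sketch only -- the exact rule and activation command vary by release.
    # /lib/udev/rules.d/95-ceph-osd.rules (illustrative):
    #   ACTION=="add", SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", \
    #       ENV{ID_PART_ENTRY_TYPE}=="<ceph OSD partition type GUID>", \
    #       RUN+="/usr/sbin/ceph-disk activate /dev/$name"
    #
    # Somewhere along that activation path the OSD ends up registered with a
    # size-derived weight, i.e. the equivalent of:
    ceph osd crush create-or-move osd.0 0.01 host=osd-node-1 root=default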

Re: [ceph-users] PGs issue

2015-03-20 Thread Craig Lewis
This seems to be a fairly consistent problem for new users. The create-or-move is adjusting the crush weight, not the osd weight. Perhaps the init script should set the defaultweight to 0.01 if it's <= 0? It seems like there ought to be a downside to this, but I don't see one. On Fri, Mar 20, 2015 at 1…
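A rough sketch of what Craig is proposing, assuming the default weight is derived from the size of the OSD data partition (in TB, two decimal places) the way the init script of that era did it; the path and variable names are illustrative, not copied from the script:

    # Illustrative sketch of the suggestion, not the actual ceph init script.
    osd_data=/var/lib/ceph/osd/ceph-0        # hypothetical OSD data directory
    # Default CRUSH weight ~= partition size in TB, kept to two decimals.
    defaultweight=$(df -P -k "$osd_data/." | tail -1 |
        awk '{ printf "%.2f", $2 / 1073741824 }')
    # Proposed floor: never hand create-or-move a zero (or negative) weight.
    if awk "BEGIN { exit !($defaultweight <= 0) }"; then
        defaultweight=0.01
    fi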

Re: [ceph-users] PGs issue

2015-03-20 Thread Robert LeBlanc
The weight can be based on anything: size, speed, capability, some random value, etc. The important thing is that it makes sense to you and that you are consistent. By default, Ceph (ceph-disk, and I believe ceph-deploy as well) takes the approach of using size. So if you use a different weighting scheme, yo…
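As an illustration of a non-size-based scheme (the OSD ids, media types and weights below are made up), a custom policy simply comes down to setting the CRUSH weights yourself after the OSDs are created:

    # Hypothetical speed-based scheme: SSD-backed OSDs get twice the weight
    # of HDD-backed ones, regardless of raw capacity.
    ceph osd crush reweight osd.0 2.0   # SSD
    ceph osd crush reweight osd.1 2.0   # SSD
    ceph osd crush reweight osd.2 1.0   # HDD

Whichever scheme is chosen, it has to be applied consistently across the whole CRUSH tree, which is the point being made here.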

Re: [ceph-users] PGs issue

2015-03-20 Thread Bogdan SOLGA
Thank you for the clarifications, Sahana! I haven't got to that part yet, so these details were still unknown to me. Perhaps some information on the OSD weighting should be provided on the 'quick deployment' page, as this issue might be encountered by other users in the future, as well. Kind regards…

Re: [ceph-users] PGs issue

2015-03-20 Thread Sahana
Hi Bogdan, Here is the link for the hardware recommendations: http://ceph.com/docs/master/start/hardware-recommendations/#hard-disk-drives. As per this link, the minimum size recommended for OSDs is 1 TB. But as Nick said, Ceph OSDs must be at least 10 GB to get a weight of 0.01. Here is the snippet fr…
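Back-of-the-envelope arithmetic behind that 10 GB rule of thumb, assuming (as Nick describes) that the default CRUSH weight is the OSD size expressed in TB and kept to two decimal places; this is a sketch, not pulled from the Ceph source:

    #   8 GB  ≈  8 / 1024 ≈ 0.008 TB  -> too small to register reliably at
    #                                    two decimal places (ends up as 0)
    #   10 GB ≈ 10 / 1024 ≈ 0.010 TB  -> lands on 0.01, the smallest
    #                                    representable non-zero weight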

Re: [ceph-users] PGs issue

2015-03-20 Thread Bogdan SOLGA
Thank you for your suggestion, Nick! I have re-weighted the OSDs and the status has changed to '256 active+clean'. Is this information clearly stated in the documentation and I have simply missed it? In case it isn't, I think it would be worth adding, as the issue might be encountered by other…
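For anyone following along, a quick way to confirm that end state (standard commands; the exact counts depend on the pool configuration):

    # Cluster-wide health plus a one-line PG summary; every PG should now
    # report active+clean.
    ceph -s
    ceph pg stat
    # The CRUSH tree shows the per-OSD weights that were just changed.
    ceph osd tree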

Re: [ceph-users] PGs issue

2015-03-20 Thread Nick Fisk
I see the problem: as your OSDs are only 8 GB, they have a zero weight. I think the minimum size you can get away with is 10 GB, as Ceph measures the size in TB and only keeps 2 decimal places. As a workaround, try running: ceph osd crush reweight osd.X 1 for each OSD; this will rewei…
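Spelled out as a loop (a small sketch; it assumes every OSD in the cluster should get the same placeholder weight of 1):

    # Give every OSD a non-zero CRUSH weight so PGs can be mapped to them.
    # `ceph osd ls` prints the numeric OSD ids, one per line.
    for id in $(ceph osd ls); do
        ceph osd crush reweight osd.$id 1
    done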

Re: [ceph-users] PGs issue

2015-03-20 Thread Bogdan SOLGA
…es, but the result is the same.

Re: [ceph-users] PGs issue

2015-03-20 Thread Sahana

Re: [ceph-users] PGs issue

2015-03-19 Thread Bogdan SOLGA

Re: [ceph-users] PGs issue

2015-03-19 Thread Nick Fisk

[ceph-users] PGs issue

2015-03-19 Thread Bogdan SOLGA
Hello, everyone! I have created a Ceph cluster (v0.87.1-1) using the info from the 'Quick deploy' page, with the following setup:
- 1 x admin / deploy node;
- 3 x OSD and MON nodes;
- each OSD node has 2 x 8 GB HDDs.
The set…
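For readers hitting the same symptom, the zero CRUSH weights that turn out to be the cause earlier in this thread show up immediately in the OSD tree (standard commands; the output layout varies by release):

    # The WEIGHT column shows each OSD's CRUSH weight; with 8 GB disks the
    # size-derived default comes out as zero, so no PGs can be mapped.
    ceph osd tree
    # Health detail lists the PGs stuck in their creating/degraded states.
    ceph health detail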