Re: [ceph-users] Getting placement groups to place evenly (again)

2015-04-22 Thread J David
On Wed, Apr 22, 2015 at 2:16 PM, Gregory Farnum wrote:
> Uh, looks like the contents of the "omap" directory (inside of
> "current") are the levelDB store. :)

OK, here's du -sk of all of those:

36740 ceph-0/current/omap
35736 ceph-1/current/omap
37356 ceph-2/current/omap
38096 ceph-3/current/omap
…
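
For gathering the same numbers across more OSDs than fit here, a minimal sketch of the check, assuming the default FileStore data path under /var/lib/ceph/osd (the paths above are relative, so the actual mount point is an assumption):

    # Report the omap (LevelDB) directory size, in KiB, for every OSD on this host.
    # Assumes the default data path; adjust the glob if your OSDs live elsewhere.
    for osd in /var/lib/ceph/osd/ceph-*; do
        du -sk "$osd/current/omap"
    done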

Re: [ceph-users] Getting placement groups to place evenly (again)

2015-04-22 Thread Gregory Farnum
On Wed, Apr 22, 2015 at 11:04 AM, J David wrote:
> On Thu, Apr 16, 2015 at 8:02 PM, Gregory Farnum wrote:
>> Since I now realize you did a bunch of reweighting to try and make
>> data match up, I don't think you'll find something like badly-sized
>> LevelDB instances, though.
>
> It's certainly something…

Re: [ceph-users] Getting placement groups to place evenly (again)

2015-04-22 Thread J David
On Thu, Apr 16, 2015 at 8:02 PM, Gregory Farnum wrote:
> Since I now realize you did a bunch of reweighting to try and make
> data match up, I don't think you'll find something like badly-sized
> LevelDB instances, though.

It's certainly something I can check, just to be sure. Erm, what does a LevelDB…

Re: [ceph-users] Getting placement groups to place evenly (again)

2015-04-16 Thread Gregory Farnum
On Sat, Apr 11, 2015 at 12:11 PM, J David wrote:
> On Thu, Apr 9, 2015 at 7:20 PM, Gregory Farnum wrote:
>> Okay, but 118/85 = 1.38. You say you're seeing variance from 53%
>> utilization to 96%, and 53% * 1.38 = 73.5%, which is *way* off your
>> numbers.
>
> 53% to 96% is with all weights set to default…

Re: [ceph-users] Getting placement groups to place evenly (again)

2015-04-11 Thread J David
On Thu, Apr 9, 2015 at 7:20 PM, Gregory Farnum wrote:
> Okay, but 118/85 = 1.38. You say you're seeing variance from 53%
> utilization to 96%, and 53% * 1.38 = 73.5%, which is *way* off your
> numbers.

53% to 96% is with all weights set to default (i.e. disk size) and all reweights set to 1. (I.e. …

Re: [ceph-users] Getting placement groups to place evenly (again)

2015-04-08 Thread J David
On Wed, Apr 8, 2015 at 11:40 AM, Gregory Farnum wrote:
> "ceph pg dump" will output the size of each pg, among other things.

Among many other things. :) Here is the raw output, in case I'm misinterpreting it:

http://pastebin.com/j4ySNBdQ

It *looks* like the PGs are roughly uniform in size. T…
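
To put rough numbers on "roughly uniform", a sketch (not from the thread) that summarizes the per-PG byte sizes straight from "ceph pg dump"; the column layout differs between Ceph releases, so it locates the "bytes" column from the header line rather than hard-coding a position:

    ceph pg dump 2>/dev/null | awk '
        tolower($1) == "pg_stat" {            # header row: find the "bytes" column
            for (i = 1; i <= NF; i++) if (tolower($i) == "bytes") col = i
            next
        }
        col && $1 ~ /^[0-9]+\.[0-9a-f]+$/ {   # PG rows have IDs like "2.1a"
            sz = $col + 0
            n++; sum += sz
            if (n == 1 || sz < min) min = sz
            if (n == 1 || sz > max) max = sz
        }
        END {
            if (n) printf "pgs=%d  avg=%.1f MiB  min=%.1f MiB  max=%.1f MiB\n",
                          n, sum/n/2^20, min/2^20, max/2^20
        }'

A tight min/max spread here points the finger at placement rather than PG size.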

Re: [ceph-users] Getting placement groups to place evenly (again)

2015-04-08 Thread Gregory Farnum
"ceph pg dump" will output the size of each pg, among other things. On Wed, Apr 8, 2015 at 8:34 AM J David wrote: > On Wed, Apr 8, 2015 at 11:33 AM, Gregory Farnum wrote: > > Is this a problem with your PGs being placed unevenly, with your PGs > being > > sized very differently, or both? > > Ple

Re: [ceph-users] Getting placement groups to place evenly (again)

2015-04-08 Thread J David
On Wed, Apr 8, 2015 at 11:33 AM, Gregory Farnum wrote:
> Is this a problem with your PGs being placed unevenly, with your PGs
> being sized very differently, or both?

Please forgive the silly question, but how would one check that? Thanks!
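
For the placement half of Greg's question, one rough way (a sketch, not necessarily what was used in this thread) is to count how many PGs land on each OSD from the acting sets in "ceph pg dump"; again the column is located from the header line since layouts vary by release:

    ceph pg dump 2>/dev/null | awk '
        tolower($1) == "pg_stat" {            # header row: find the "acting" column
            for (i = 1; i <= NF; i++) if (tolower($i) == "acting") col = i
            next
        }
        col && $1 ~ /^[0-9]+\.[0-9a-f]+$/ {   # PG rows have IDs like "2.1a"
            gsub(/[\[\]]/, "", $col)          # acting sets look like [3,7,12]
            n = split($col, osds, ",")
            for (i = 1; i <= n; i++) pgs[osds[i]]++
        }
        END { for (o in pgs) print "osd." o, pgs[o] }' | sort -k2 -n

A wide spread in these counts means the PGs themselves are placed unevenly, which no amount of uniform PG sizing will fix.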

Re: [ceph-users] Getting placement groups to place evenly (again)

2015-04-08 Thread Gregory Farnum
Is this a problem with your PGs being placed unevenly, with your PGs being sized very differently, or both?

CRUSH is never going to balance perfectly, but the numbers you're quoting look a bit worse than usual at first glance.
-Greg

On Tue, Apr 7, 2015 at 8:16 PM J David wrote:
> Getting placement groups to be placed evenly…

Re: [ceph-users] Getting placement groups to place evenly (again)

2015-04-07 Thread David Clarke
On 08/04/15 15:16, J David wrote:
> Getting placement groups to be placed evenly continues to be a major
> challenge for us, bordering on impossible.
>
> When we first reported trouble with this, the Ceph cluster had 12
> OSDs (each an Intel DC S3700 400GB) spread across three nodes. Since
> then, it has grown to 8 nodes with 38 OSDs…

[ceph-users] Getting placement groups to place evenly (again)

2015-04-07 Thread J David
Getting placement groups to be placed evenly continues to be a major challenge for us, bordering on impossible.

When we first reported trouble with this, the Ceph cluster had 12 OSDs (each an Intel DC S3700 400GB) spread across three nodes. Since then, it has grown to 8 nodes with 38 OSDs. The av…
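
For reference, the usual blunt instrument in this situation is the override reweight (later messages in this thread mention that a bunch of reweighting was already tried, so this is an illustration rather than a recommendation); the OSD id and threshold below are made up:

    ceph osd tree                          # show current CRUSH weights and reweight values
    ceph osd reweight 12 0.9               # hypothetical: manually shave weight off one overfull OSD
    ceph osd reweight-by-utilization 110   # or reweight any OSD above 110% of mean utilization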