On Wed, Apr 22, 2015 at 2:16 PM, Gregory Farnum wrote:
> Uh, looks like the contents of the "omap" directory (inside of
> "current") are the LevelDB store. :)
OK, here's du -sk of all of those:
36740 ceph-0/current/omap
35736 ceph-1/current/omap
37356 ceph-2/current/omap
38096 ceph-3/current/omap
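For anyone wanting to run the same check, a minimal sketch, assuming the
default FileStore data directories under /var/lib/ceph/osd/ceph-* (adjust
the path if your OSD data lives elsewhere):

  # Size, in KiB, of each OSD's LevelDB (omap) store.
  for osd in /var/lib/ceph/osd/ceph-*; do
      du -sk "$osd/current/omap"
  done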
On Thu, Apr 16, 2015 at 8:02 PM, Gregory Farnum wrote:
> Since I now realize you did a bunch of reweighting to try and make
> data match up I don't think you'll find something like badly-sized
> LevelDB instances, though.
It's certainly something I can check, just to be sure. Erm, what does
a LevelDB instance look like on disk?
On Thu, Apr 9, 2015 at 7:20 PM, Gregory Farnum wrote:
> Okay, but 118/85 = 1.38. You say you're seeing variance from 53%
> utilization to 96%, and 53%*1.38 = 73.5%, which is *way* off your
> numbers.
53% to 96% is with all weights set to default (i.e. disk size) and all
reweights set to 1.
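For reference, the "weights" here are the per-OSD CRUSH weights and the
"reweights" are the override values, both shown in the "weight" and
"reweight" columns of "ceph osd tree"; the 53%..96% spread is just the
filesystem usage of each OSD's data partition. A minimal sketch, assuming
the default mount points under /var/lib/ceph/osd:

  # CRUSH weight and reweight override for every OSD:
  ceph osd tree

  # Per-OSD utilization (where the 53%..96% spread shows up):
  df -h /var/lib/ceph/osd/ceph-*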
On Wed, Apr 8, 2015 at 11:40 AM, Gregory Farnum wrote:
> "ceph pg dump" will output the size of each pg, among other things.
Among many other things. :)
Here is the raw output, in case I'm misinterpreting it:
http://pastebin.com/j4ySNBdQ
It *looks* like the PGs are roughly uniform in size.
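For anyone repeating that check without eyeballing the whole dump, a rough
sketch; it assumes the plain-text "ceph pg dump" layout of this era, i.e. a
header row whose first field is "pg_stat" and a per-PG "bytes" column (the
column position shifts between releases, hence reading it from the header):

  # Print the smallest and largest PG by stored bytes.
  ceph pg dump 2>/dev/null | awk '
    tolower($1) == "pg_stat" {
      for (i = 1; i <= NF; i++) if (tolower($i) == "bytes") col = i
      next
    }
    col && $1 ~ /^[0-9]+\.[0-9a-f]+$/ { print $col, $1 }  # bytes  pgid
  ' | sort -n | sed -n '1p;$p'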
"ceph pg dump" will output the size of each pg, among other things.
On Wed, Apr 8, 2015 at 8:34 AM J David wrote:
> On Wed, Apr 8, 2015 at 11:33 AM, Gregory Farnum wrote:
> > Is this a problem with your PGs being placed unevenly, with your PGs
> > being sized very differently, or both?
>
> Please forgive the silly question, but how would one check that?
On Wed, Apr 8, 2015 at 11:33 AM, Gregory Farnum wrote:
> Is this a problem with your PGs being placed unevenly, with your PGs being
> sized very differently, or both?
Please forgive the silly question, but how would one check that?
Thanks!
Is this a problem with your PGs being placed unevenly, with your PGs being
sized very differently, or both?
CRUSH is never going to balance perfectly, but the numbers you're quoting
look a bit worse than usual at first glance.
-Greg
On Tue, Apr 7, 2015 at 8:16 PM J David wrote:
> Getting placement groups to be placed evenly continues to be a major
> challenge for us, bordering on impossible.
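A quick way to tell those two cases apart, under the same assumption as the
sketch further up (a plain-text "ceph pg dump" with a header row starting
"pg_stat"), here using the "up" column, which holds OSD sets like [3,7,12]:

  # Count how many PGs land on each OSD; a wide spread here points at uneven
  # placement rather than unevenly sized PGs.
  ceph pg dump 2>/dev/null | awk '
    tolower($1) == "pg_stat" {
      for (i = 1; i <= NF; i++) if (tolower($i) == "up") col = i
      next
    }
    col && $1 ~ /^[0-9]+\.[0-9a-f]+$/ {
      gsub(/\[|\]/, "", $col)            # strip brackets around the OSD set
      n = split($col, osds, ",")
      for (j = 1; j <= n; j++) count[osds[j]]++
    }
    END { for (o in count) print "osd." o, count[o] }
  ' | sort -n -k2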
Getting placement groups to be placed evenly continues to be a major
challenge for us, bordering on impossible.
When we first reported trouble with this, the ceph cluster had 12
OSDs (each an Intel DC S3700 400GB) spread across three nodes. Since
then, it has grown to 8 nodes with 38 OSDs.