From: Nick Fisk
Sent: 06 September 2015 15:11
To: 'Shinobu Kinjo'; 'GuangYang'
Cc: 'ceph-users'; 'Nick Fisk'
Subject: Re: [ceph-users] Ceph performance, empty vs part full

Just a quick update after up'ing the thresholds …
From: Shinobu Kinjo
Sent: 05 September 2015 01:42
To: GuangYang
Cc: ceph-users; Nick Fisk
Subject: Re: [ceph-users] Ceph performance, empty vs part full

Very nice.
You're my hero!

Shinobu
From: "GuangYang"
To: "Ben Hines", "Nick Fisk"
Cc: "ceph-users"
Sent: Saturday, September 5, 2015 9:27:31 AM
Subject: Re: [ceph-users] Ceph performance, empty vs part full

IIRC, it only triggers the move (merge or split) when that folder is hit by a
request, so most likely it happens gradually …
From: Wang, Warren
Sent: 04 September 2015 01:21
To: Mark Nelson; Ben Hines
Cc: ceph-users
Subject: Re: [ceph-users] Ceph performance, empty vs part full

… big prod cluster. I'm in favor of bumping these two up in the defaults.

Warren

From: Nick Fisk
Sent: 04 September 2015 13:08
To: 'Wang, Warren'; 'Mark Nelson'; 'Ben Hines'
Cc: 'ceph-users'
Subject: Re: [ceph-users] Ceph performance, empty vs part full

I've just made the same change (4 and 40 for now) on my cluster, which is a
similar … if that helps to bring things back into order.
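For reference, the two thresholds being changed here are presumably the filestore split/merge options, which are normally set in the [osd] section of ceph.conf and picked up by the OSDs on restart. A fragment along these lines is a sketch only; in particular, the mapping of "4 and 40" onto the two options below is an assumption, not confirmed by the thread:

[osd]
# Assumed mapping of the "4 and 40" mentioned above
filestore merge threshold = 40
filestore split multiple = 4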
From: Mark Nelson
Sent: Thursday, September 03, 2015 6:04 PM
To: Ben Hines
Cc: ceph-users
Subject: Re: [ceph-users] Ceph performance, empty vs part full

Hrm, I think it will follow the merge/split rules if it's out of whack
given the new settings, but I don't know that I've ever tested it on an
existing cluster to see that it actually happens. I guess let it sit
for a while and then check the OSD PG directories to see if the object
counts make sense.
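Checking the PG directories can be scripted. The sketch below (not from the thread) walks a filestore OSD's current/ directory and reports per-PG object counts and directory depth; it assumes the usual /var/lib/ceph/osd/<name>/current/<pgid>_head layout, so adjust the path for your deployment.

#!/usr/bin/env python
# Rough sketch: count objects per filestore PG directory to see whether
# splitting has actually happened after the settings change.
# Assumes the usual filestore on-disk layout; adjust to taste.
import os
import sys

osd_current = sys.argv[1] if len(sys.argv) > 1 else '/var/lib/ceph/osd/ceph-0/current'

for entry in sorted(os.listdir(osd_current)):
    if not entry.endswith('_head'):
        continue
    pg_root = os.path.join(osd_current, entry)
    total = 0
    deepest = 0
    for dirpath, dirnames, filenames in os.walk(pg_root):
        # Object files live alongside the DIR_* subdirectories; count files only.
        total += len(filenames)
        rel = os.path.relpath(dirpath, pg_root)
        depth = 0 if rel == '.' else rel.count(os.sep) + 1
        deepest = max(deepest, depth)
    print('%-24s objects=%-8d max_dir_depth=%d' % (entry, total, deepest))

If the deeper DIR_* levels stay empty for a while after the change, that would line up with GuangYang's point above that the move only happens as folders are hit by requests.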
Hey Mark,

I've just tweaked these filestore settings for my cluster -- after
changing this, is there a way to make Ceph move existing objects
around to new filestore locations, or will this only apply to newly
created objects? (I would assume the latter.)

Thanks,
-Ben

On Wed, Jul 8, 2015 at 6:…
Basically for each PG, there's a directory tree where only a certain
number of objects are allowed in a given directory before it splits into
new branches/leaves. The problem is that this has a fair amount of
overhead and also there's extra associated dentry lookups to get at any
given object.
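To put a rough number on "a certain number of objects": the rule of thumb usually quoted for filestore is that a PG subdirectory splits once it holds about split_multiple * abs(merge_threshold) * 16 objects. Treat the quick check below as that commonly quoted approximation, not an exact guarantee:

# Commonly quoted filestore rule of thumb for when a PG subdirectory splits.
def split_point(split_multiple, merge_threshold):
    return split_multiple * abs(merge_threshold) * 16

print(split_point(2, 10))   # defaults of the time: ~320 objects per directory
print(split_point(4, 40))   # the "4 and 40" change above: ~2560 objects per directory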
If I create a new pool it is generally fast for a short amount of time.
Not as fast as if I had a blank cluster, but close to.

Bryn

On 8 Jul 2015, at 13:55, Gregory Farnum wrote:
I think you're probably running into the internal PG/collection
splitting here; try searching for those terms and seeing what your OSD
folder structures look like. You could test by creating a new pool and
seeing if it's faster or slower than the one you've already filled up.
-Greg
On Wed, Jul 8, 2015, …
Hi All,

I’m perf testing a cluster again. This time I have re-built the cluster
and am filling it for testing. On a 10 min run I get the following results
from 5 load generators, each writing through 7 iocontexts, with a queue
depth of 50 async writes.

Gen1
Percentile 100 = 0.729775905609
Max …
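For what it's worth, a load generator along the lines described (several io contexts, each keeping a fixed window of async writes in flight) can be sketched with the Python rados bindings roughly as below. The pool name, object size, run length and the per-context queue depth are assumptions for illustration, not the original tool:

#!/usr/bin/env python
# Hypothetical sketch of a load generator along the lines described above:
# several io contexts, each keeping a bounded window of in-flight async
# writes. Pool name, object size and run length are assumptions.
import time
import rados

POOL = 'perf-test'          # assumed pool name
IOCTX_COUNT = 7             # io contexts, as described above
QUEUE_DEPTH = 50            # async writes in flight per context (assumption)
OBJ_SIZE = 4 * 1024 * 1024  # 4 MiB payload, arbitrary
RUNTIME = 600               # 10 minute run

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctxs = []
try:
    ioctxs = [cluster.open_ioctx(POOL) for _ in range(IOCTX_COUNT)]
    payload = b'x' * OBJ_SIZE
    in_flight = {i: [] for i in range(IOCTX_COUNT)}
    latencies = []
    seq = 0
    deadline = time.time() + RUNTIME

    while time.time() < deadline:
        for i, ioctx in enumerate(ioctxs):
            # Reap finished completions so the window stays at QUEUE_DEPTH.
            pending = []
            for started, comp in in_flight[i]:
                if comp.is_complete():
                    latencies.append(time.time() - started)
                else:
                    pending.append((started, comp))
            in_flight[i] = pending

            # Top the window back up with new async full-object writes.
            while len(in_flight[i]) < QUEUE_DEPTH:
                comp = ioctx.aio_write_full('bench-%d-%d' % (i, seq), payload)
                in_flight[i].append((time.time(), comp))
                seq += 1

    # Drain anything still outstanding, then report the tail latency.
    for i in in_flight:
        for started, comp in in_flight[i]:
            comp.wait_for_complete()
            latencies.append(time.time() - started)
    latencies.sort()
    print('Percentile 100 (max) latency: %.6f s' % latencies[-1])
finally:
    for ioctx in ioctxs:
        ioctx.close()
    cluster.shutdown()

The relevant point for this thread is that the same workload slows down as the PG directories fill up and start splitting, which is the behaviour being chased here.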