Re: [ceph-users] Ceph performance, empty vs part full

2015-09-15 Thread Nick Fisk

Re: [ceph-users] Ceph performance, empty vs part full

2015-09-14 Thread Gregory Farnum
>> Just a quick update after up'ing the threshold ...

Re: [ceph-users] Ceph performance, empty vs part full

2015-09-08 Thread Nick Fisk

Re: [ceph-users] Ceph performance, empty vs part full

2015-09-06 Thread Nick Fisk

Re: [ceph-users] Ceph performance, empty vs part full

2015-09-04 Thread Shinobu Kinjo
Very nice. You're my hero! Shinobu

Re: [ceph-users] Ceph performance, empty vs part full

2015-09-04 Thread GuangYang

Re: [ceph-users] Ceph performance, empty vs part full

2015-09-04 Thread Shinobu Kinjo
... IIRC, it only triggers the move (merge or split) when that folder is hit by a request, so most likely it happens gradually ...

Re: [ceph-users] Ceph performance, empty vs part full

2015-09-04 Thread GuangYang
>>> ... big prod cluster. I'm in favor of bumping these two up in the defaults.
>>> Warren

Re: [ceph-users] Ceph performance, empty vs part full

2015-09-04 Thread Ben Hines
... if that helps to bring things back into order.

Re: [ceph-users] Ceph performance, empty vs part full

2015-09-04 Thread Jan Schermer

Re: [ceph-users] Ceph performance, empty vs part full

2015-09-04 Thread Mark Nelson
I've just made the same change (4 and 40 for now) on my cluster which is a similar ...
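A quick way to confirm what a running OSD actually reports for these two options, assuming admin-socket access on the OSD host (osd.0 is a placeholder; adjust the id to suit):

  # show the effective merge/split settings on one OSD
  ceph daemon osd.0 config show \
    | grep -E 'filestore_(merge_threshold|split_multiple)'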

Re: [ceph-users] Ceph performance, empty vs part full

2015-09-04 Thread Nick Fisk

Re: [ceph-users] Ceph performance, empty vs part full

2015-09-04 Thread Nick Fisk

Re: [ceph-users] Ceph performance, empty vs part full

2015-09-03 Thread Wang, Warren

Re: [ceph-users] Ceph performance, empty vs part full

2015-09-03 Thread Mark Nelson
Hrm, I think it will follow the merge/split rules if it's out of whack given the new settings, but I don't know that I've ever tested it on an existing cluster to see that it actually happens. I guess let it sit for a while and then check the OSD PG directories to see if the object counts make ...
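One way to do the check Mark suggests, assuming the default filestore data path (adjust the OSD id and path for your deployment):

  # count files (objects) per directory under one OSD's data dir,
  # most heavily populated directories first
  find /var/lib/ceph/osd/ceph-0/current -type f -printf '%h\n' \
    | sort | uniq -c | sort -rn | head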

Re: [ceph-users] Ceph performance, empty vs part full

2015-09-03 Thread Ben Hines
Hey Mark, I've just tweaked these filestore settings for my cluster -- after changing this, is there a way to make ceph move existing objects around to new filestore locations, or will this only apply to newly created objects? (i would assume the latter..) thanks, -Ben

Re: [ceph-users] Ceph performance, empty vs part full

2015-07-08 Thread Mark Nelson
Basically for each PG, there's a directory tree where only a certain number of objects are allowed in a given directory before it splits into new branches/leaves. The problem is that this has a fair amount of overhead and also there's extra associated dentry lookups to get at any given object.
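For context, the two settings under discussion live in the [osd] section of ceph.conf. A minimal sketch, assuming the "4 and 40" mentioned earlier in the thread map to the split multiple and merge threshold respectively (values are illustrative only, not a recommendation):

  [osd]
  # a leaf directory is merged back into its parent once it holds
  # fewer objects than this (shipping default is 10)
  filestore merge threshold = 40
  # a directory splits once it exceeds
  #   filestore split multiple * abs(filestore merge threshold) * 16
  # objects, i.e. 320 with the defaults and 2560 with the values here
  filestore split multiple = 4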

Re: [ceph-users] Ceph performance, empty vs part full

2015-07-08 Thread MATHIAS, Bryn (Bryn)
If I create a new pool it is generally fast for a short amount of time. Not as fast as if I had a blank cluster, but close to. Bryn

Re: [ceph-users] Ceph performance, empty vs part full

2015-07-08 Thread Gregory Farnum
I think you're probably running into the internal PG/collection splitting here; try searching for those terms and seeing what your OSD folder structures look like. You could test by creating a new pool and seeing if it's faster or slower than the one you've already filled up. -Greg
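A rough sketch of the comparison Greg suggests; the pool name and PG count below are placeholders, and the bench numbers are only indicative:

  # create a throwaway pool, compare write performance against the
  # already-populated pool, then clean up
  ceph osd pool create splittest 128 128
  rados bench -p splittest 60 write
  rados bench -p <existing-pool> 60 write
  ceph osd pool delete splittest splittest --yes-i-really-really-mean-it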

[ceph-users] Ceph performance, empty vs part full

2015-07-08 Thread MATHIAS, Bryn (Bryn)
Hi All, I’m perf testing a cluster again. This time I have re-built the cluster and am filling it for testing. On a 10 min run I get the following results from 5 load generators, each writing through 7 iocontexts, with a queue depth of 50 async writes. Gen1 Percentile 100 = 0.729775905609 Max ...
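For anyone wanting to approximate this load pattern without custom generators, rados bench can get close; -t sets the number of concurrent in-flight writes (roughly the queue depth described above), and the pool name is a placeholder:

  # ten-minute write test with ~50 writes in flight
  rados bench -p <test-pool> 600 write -t 50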