Sent: April-24-15 5:03 PM
To: J David; Nick Fisk
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Having trouble getting good performance
The ZFS recordsize does NOT equal the size of the write to disk; ZFS will write
to disk whatever size it feels is optimal. During a sequential write ZFS will ...
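For reference, the recordsize actually in effect can be checked (and changed for
newly written data) per dataset; the pool/dataset name below is only a
placeholder:

  # show the recordsize of the dataset backing the workload
  zfs get recordsize tank/vmdata

  # only affects blocks written after the change; 128K is the default
  zfs set recordsize=128K tank/vmdata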
Sent: April-24-15 1:41 PM
To: Nick Fisk
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Having trouble getting good performance
On Fri, Apr 24, 2015 at 10:58 AM, Nick Fisk wrote:
> 7.2k drives tend to do about 80 iops at 4kb IO sizes; as the IO size
> increases the number of iops will start to fall.
The client ACKs the write as soon as it is in the journal. I suspect that
the primary OSD dispatches the write to all the secondary OSDs at the same
time so that it happens in parallel, but I am not an authority on that.
The journal writes data serially even if it comes in randomly. There is some ...
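If it helps to see that behaviour, a rough way to watch it on an OSD node
(nothing here is specific to this cluster) is to leave extended iostat running
during a client test and compare the journal device with the data disk:

  # avgrq-sz and w/s for the journal device versus the data disk show how the
  # incoming random writes are being serialised into the journal
  iostat -x 1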
On Fri, Apr 24, 2015 at 10:58 AM, Nick Fisk wrote:
> 7.2k drives tend to do about 80 iops at 4kb IO sizes; as the IO size
> increases the number of iops will start to fall. You will probably get
> around 70 iops for 128kb. But please benchmark your raw disks to get some
> accurate numbers if needed.
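For anyone wanting that raw-disk baseline, something along these lines works
(it writes to the named device, so /dev/sdX is deliberately a placeholder for
an unused disk):

  # 4k random writes straight to the raw device, bypassing the page cache
  fio --name=rawdisk --filename=/dev/sdX --rw=randwrite --bs=4k \
      --ioengine=libaio --direct=1 --iodepth=1 --runtime=60 --time_based

  # repeat with --bs=128k to compare against the 70-80 iops figures above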
On Fri, Apr 24, 2015 at 6:39 AM, Nick Fisk wrote:
> From the Fio runs, I see you are getting around 200 iops at 128kb write io
> size. I would imagine you should be getting somewhere around 200-300 iops
> for the cluster you posted in the initial post, so it looks like it's
> performing about right ...
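That estimate is presumably back-of-envelope arithmetic along these lines (the
disk count and replica count here are illustrative only, since the original
cluster details aren't repeated in this message):

  # e.g. 8 spinning OSDs x ~70 iops each at 128k, 2x replication,
  # ignoring journal overhead:
  echo $(( 8 * 70 / 2 ))   # ~280 client write iops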
On Thu, Apr 23, 2015 at 4:23 PM, Mark Nelson wrote:
> If you want to adjust the iodepth, you'll need to use an asynchronous
> ioengine like libaio (you also need to use direct=1)
Ah yes, libaio makes a big difference. With 1 job:
testfile: (g=0): rw=randwrite, bs=128K-128K/128K-128K/128K-128K, ...
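For reference, the combination Mark describes looks roughly like this as a
single fio invocation (path, size and queue depth are placeholders):

  fio --name=testfile --filename=/path/to/testfile --size=4G \
      --rw=randwrite --bs=128k --ioengine=libaio --direct=1 \
      --iodepth=32 --numjobs=1 --runtime=60 --time_based --group_reporting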
On Thu, Apr 23, 2015 at 3:05 PM, Nick Fisk wrote:
> I have had a look through the fio runs, could you also try and run a couple
> of jobs with iodepth=64 instead of numjobs=64. I know they should do the
> same thing, but the numbers with the former are easier to understand.
Maybe it's an issue of ...
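The two variants being compared are roughly the following (same placeholder
file as above; only the last two options differ):

  # 64 jobs, queue depth 1 each
  fio --name=qd --filename=/path/to/testfile --size=4G --rw=randwrite --bs=128k \
      --ioengine=libaio --direct=1 --group_reporting --numjobs=64 --iodepth=1

  # 1 job, queue depth 64 -- the aggregate numbers are easier to read
  fio --name=qd --filename=/path/to/testfile --size=4G --rw=randwrite --bs=128k \
      --ioengine=libaio --direct=1 --group_reporting --numjobs=1 --iodepth=64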
On Thu, Apr 23, 2015 at 3:05 PM, Nick Fisk wrote:
> If you can let us know the avg queue depth that ZFS is generating that will
> probably give a good estimation of what you can expect from the cluster.
How would that be measured?
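Assuming a Linux guest with sysstat installed, one rough way to get it is to
run extended iostat alongside the normal workload:

  # the avgqu-sz column (aqu-sz on newer sysstat) for the device backing the
  # zpool is the average queue depth the guest is generating
  iostat -x 1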
Cc: ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Having trouble getting good performance
David,
With the similar 128K profile I am getting ~200MB/s bandwidth with the entire
OSD on SSD. I never tested with HDDs, but it seems you are reaching Ceph's
limit on this. Probably, nothing wrong in your ...
On Wed, Apr 22, 2015 at 4:07 PM, Somnath Roy wrote:
> I am suggesting synthetic workload like fio to run on top of VM to identify
> where the bottleneck is. For example, if fio is giving decent enough output,
> I guess ceph layer is doing fine. It is your client that is not driving
> enough.
On Wed, Apr 22, 2015 at 4:30 PM, Nick Fisk wrote:
> I suspect you are hitting problems with sync writes, which Ceph isn't known
> for being the fastest thing for.
There's "not being the fastest thing," and then there's "an expensive cluster
of hardware that performs worse than a single SATA drive." :-(
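One way to gauge how much of that gap is the sync-write path specifically is
to run the same small random-write job with and without an fsync after every
write (file path is a placeholder):

  # fsync after every write, approximating a sync-heavy guest workload
  fio --name=syncwrite --filename=/path/to/testfile --size=1G --rw=randwrite \
      --bs=4k --ioengine=libaio --direct=1 --iodepth=1 --fsync=1 \
      --runtime=60 --time_based

  # drop --fsync=1 and re-run for the comparison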
On Wed, Apr 22, 2015 at 4:07 PM, Somnath Roy wrote:
> So, it seems you are not limited by anything..
>
> I am suggesting synthetic workload like fio to run on top of VM to identify
> where the bottleneck is. For example, if fio is giving decent enough output,
> I guess ceph layer is doing fine. It is your client that is not driving
> enough.
Thanks & Regards
Somnath
On Wed, Apr 22, 2015 at 2:54 PM, Somnath Roy wrote:
> What ceph version are you using ?
Firefly, 0.80.9.
> Could you try with rbd_cache=false or true and see if behavior changes ?
As this is ZFS, running a cache layer below it that it is not aware of
violates data integrity and can cause corruption.
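For reference, the setting being discussed is client-side and normally lives
in ceph.conf on the hypervisor; whether it is safe to enable under a ZFS guest
is exactly the flush-behaviour concern raised above:

  [client]
  # set to true for the comparison run suggested above
  rbd cache = false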
What ceph version are you using ?
It seems clients are not sending enough traffic to the cluster.
Could you try with rbd_cache=false or true and see if behavior changes ?
What is the client side cpu util ?
Performance also depends on the QD you are driving with.
I would suggest running fio on top of the VM ...