On 26 April 2018 at 14:58, Jonathan D. Proulx wrote:
> Those block queue scheduler tips *might* help me squeeze out a bit more
> till the next budget starts July 1...
Maybe you could pick up some cheap cache from this guy: https://xkcd.com/908/
--
Cheers,
~Blairo
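(For anyone following along: the "block queue scheduler tips" mentioned above usually boil down to per-disk sysfs tweaks on the OSD data drives. A rough sketch only, with an illustrative device name and starting-point values:

cat /sys/block/sdb/queue/scheduler               # show the active scheduler, e.g. noop deadline [cfq]
echo deadline > /sys/block/sdb/queue/scheduler   # deadline often behaves better than cfq under OSD load
echo 4096 > /sys/block/sdb/queue/read_ahead_kb   # larger readahead can help sequential reads

These settings do not persist across reboots, so they normally go into a udev rule or rc.local.)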
On Thu, Apr 26, 2018 at 09:27:18AM +0900, Christian Balzer wrote:
:> The scrubs do impact performance, which does mean I'm over capacity, as
:> I should be able to scrub without impacting production, but there's still
:> a fair amount of capacity used during scrubbing that doesn't seem to be
:> used outside of it.
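(If the scrub impact itself is the pain point, it can usually be throttled; a hedged example using Luminous-era option names, with illustrative values only:

ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'
# or persistently in ceph.conf:
[osd]
osd scrub sleep = 0.1
osd scrub load threshold = 0.5
osd scrub begin hour = 22
osd scrub end hour = 6

Check the option names against your release before relying on them.)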
On Wed, Apr 25, 2018 at 10:58:43PM +, Blair Bethwaite wrote:
:Hi Jon,
:
:On 25 April 2018 at 21:20, Jonathan Proulx wrote:
:>
> here's a snap of a 24hr graph from one server (others are similar in
:> general shape):
:>
:>
https://snapshot.raintank.io/dashboard/snapshot/gB3FDPl7uRGWmL17NHNBCuWK
Hello,
On Wed, 25 Apr 2018 17:20:55 -0400 Jonathan Proulx wrote:
> On Wed Apr 25 02:24:19 PDT 2018 Christian Balzer wrote:
>
> > Hello,
>
> > On Tue, 24 Apr 2018 12:52:55 -0400 Jonathan Proulx wrote:
>
> > > The performance I really care about is over rbd for VMs in my
> > > OpenStack, but
Hi Jon,
On 25 April 2018 at 21:20, Jonathan Proulx wrote:
>
> here's a snap of a 24hr graph from one server (others are similar in
> general shape):
>
> https://snapshot.raintank.io/dashboard/snapshot/gB3FDPl7uRGWmL17NHNBCuWKGsXdiqlt
That's what, a median IOPS of about 80? Pretty high for spinning
On Wed Apr 25 02:24:19 PDT 2018 Christian Balzer wrote:
> Hello,
> On Tue, 24 Apr 2018 12:52:55 -0400 Jonathan Proulx wrote:
> > The performance I really care about is over rbd for VMs in my
> > OpenStack, but 'rbd bench' seems to line up pretty well with 'fio' tests
> > inside VMs, so a more or less
Hello,
On Tue, 24 Apr 2018 12:52:55 -0400 Jonathan Proulx wrote:
> Hi All,
>
> I seem to be seeing consistently poor read performance on my cluster
> relative to both write performance and the read performance of a single
> backend disk, by quite a lot.
>
> The cluster is Luminous with 174 7.2k SAS drives
How does your rados bench look?
Have you tried playing around with read ahead and striping?
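To make that concrete, the checks being suggested would typically be something along these lines (pool, image, and device names are illustrative, not taken from this cluster):

rados bench -p rbd 60 write --no-cleanup   # write baseline, keep the objects around
rados bench -p rbd 60 seq                  # sequential reads of those objects
rados bench -p rbd 60 rand                 # random reads
rados -p rbd cleanup                       # remove the benchmark objects afterwards

cat /sys/block/rbd0/queue/read_ahead_kb    # readahead on the client-side RBD device
echo 4096 > /sys/block/rbd0/queue/read_ahead_kb

# fancier striping has to be chosen at image creation time, e.g.:
rbd create rbd/stripetest --size 102400 --stripe-unit 65536 --stripe-count 16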
On Tue, 24 Apr 2018 at 17:53, Jonathan Proulx wrote:
> Hi All,
>
> I seem to be seeing consistently poor read performance on my cluster
> relative to both write performance and the read performance of a single
>
Hi All,
I seem to be seeing consistently poor read performance on my cluster
relative to both write performance and the read performance of a single
backend disk, by quite a lot.
The cluster is Luminous with 174 7.2k SAS drives across 12 storage servers
with 10G ethernet and jumbo frames. Drives are a mix of 4
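For methodology context, the 'rbd bench' vs in-VM 'fio' comparison referenced in the replies above would typically look something like this (image, pool, device, and job names are illustrative):

rbd bench --io-type read --io-size 4096 --io-threads 16 rbd/testimg   # on a client host

# inside a VM backed by that pool:
fio --name=randread --filename=/dev/vdb --rw=randread --bs=4k --iodepth=16 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based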
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > MailingLists - EWS
> > Sent: 06 October 2015 18:12
> > To: ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] Poor Read Performance with Ubuntu 14.04 LTS
> > 3.19.0-30 Kernel
> >
>
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> MailingLists - EWS
> Sent: 06 October 2015 18:12
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Poor Read Performance with Ubuntu 14.04 LTS
> 3.19.0-30 Kernel
>
> > Hi,
> >
> > Very
> Hi,
>
> Very interesting! Did you upgrade the kernel on both the OSDs and clients
> or just some of them? I remember there were some kernel performance
> regressions a little while back. You might try running perf during your
> tests and look for differences. Also, iperf might be worth trying
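In case it helps anyone repeating this, the perf/iperf suggestion usually amounts to (host names are illustrative):

perf record -a -g -- sleep 30   # on an OSD host while the read test is running
perf report                     # compare hot paths between kernels
iperf -s                        # on the OSD host
iperf -c osd-host -t 30         # on the client, to rule out raw network regressions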
Could you share some of your testing methodology? I'd like to repeat your
tests.
I have a cluster that is currently running mostly 3.13 kernels, but the
latest patch of that version breaks the onboard 1Gb NIC in the servers I'm
using. I recently had to redeploy several of these servers due to SSD
On 10/06/2015 10:14 AM, MailingLists - EWS wrote:
I have encountered a rather interesting issue with Ubuntu 14.04 LTS
running 3.19.0-30 kernel (Vivid) using Ceph Hammer (0.94.3).
With everything else identical in our testing cluster, no other changes
other than the kernel (apt-get install linux-
I have encountered a rather interesting issue with Ubuntu 14.04 LTS running
3.19.0-30 kernel (Vivid) using Ceph Hammer (0.94.3).
With everything else identical in our testing cluster, no other changes
other than the kernel (apt-get install linux-image-generic-lts-vivid and
then a reboot), we ar
Hi, I'm back from my trip, sorry for the thread pause; I wanted to wrap this up.
I reread the thread, but I actually do not see what could be done from the admin
side to tune LVM for better read performance on Ceph (parts of my LVM
config are included below), at least for already-deployed LVM.
It seems there is no clear ag
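One knob that can still be turned on an already-deployed LV is readahead; a small sketch with illustrative VG/LV names:

blockdev --getra /dev/vg0/lv0        # current readahead, in 512-byte sectors
blockdev --setra 8192 /dev/vg0/lv0   # ~4 MB readahead, lost at next activation
lvchange -r 8192 vg0/lv0             # record it persistently in the LVM metadata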
On Mon, Oct 21, 2013 at 11:01:29AM -0400, Mike Snitzer wrote:
> It isn't DM that splits the IO into 4K chunks; it is the VM subsystem
> no?
Well, it's the block layer based on what DM tells it. Take a look at
dm_merge_bvec
From dm_merge_bvec:
/*
* If the target doesn't support
On Mon, Oct 21 2013 at 2:06pm -0400,
Christoph Hellwig wrote:
> On Mon, Oct 21, 2013 at 11:01:29AM -0400, Mike Snitzer wrote:
> > It isn't DM that splits the IO into 4K chunks; it is the VM subsystem
> > no?
>
> Well, it's the block layer based on what DM tells it. Take a look at
> dm_merge_bvec
On Mon, 21 Oct 2013, Mike Snitzer wrote:
> On Mon, Oct 21 2013 at 12:02pm -0400,
> Sage Weil wrote:
>
> > On Mon, 21 Oct 2013, Mike Snitzer wrote:
> > > On Mon, Oct 21 2013 at 10:11am -0400,
> > > Christoph Hellwig wrote:
> > >
> > > > On Sun, Oct 20, 2013 at 08:58:58PM -0700, Sage Weil wrote:
On Mon, Oct 21 2013 at 12:02pm -0400,
Sage Weil wrote:
> On Mon, 21 Oct 2013, Mike Snitzer wrote:
> > On Mon, Oct 21 2013 at 10:11am -0400,
> > Christoph Hellwig wrote:
> >
> > > On Sun, Oct 20, 2013 at 08:58:58PM -0700, Sage Weil wrote:
> > > > It looks like without LVM we're getting 128KB requests
On Mon, 21 Oct 2013, Mike Snitzer wrote:
> On Mon, Oct 21 2013 at 10:11am -0400,
> Christoph Hellwig wrote:
>
> > On Sun, Oct 20, 2013 at 08:58:58PM -0700, Sage Weil wrote:
> > > It looks like without LVM we're getting 128KB requests (which IIRC is
> > > typical), but with LVM it's only 4KB. Unfortunately
On Mon, Oct 21 2013 at 10:11am -0400,
Christoph Hellwig wrote:
> On Sun, Oct 20, 2013 at 08:58:58PM -0700, Sage Weil wrote:
> > It looks like without LVM we're getting 128KB requests (which IIRC is
> > typical), but with LVM it's only 4KB. Unfortunately my memory is a bit
> > fuzzy here, but I
On Mon, Oct 21 2013 at 11:01am -0400,
Mike Snitzer wrote:
> On Mon, Oct 21 2013 at 10:11am -0400,
> Christoph Hellwig wrote:
>
> > On Sun, Oct 20, 2013 at 08:58:58PM -0700, Sage Weil wrote:
> > > It looks like without LVM we're getting 128KB requests (which IIRC is
> > > typical), but with LVM
On Sun, Oct 20, 2013 at 08:58:58PM -0700, Sage Weil wrote:
> It looks like without LVM we're getting 128KB requests (which IIRC is
> typical), but with LVM it's only 4KB. Unfortunately my memory is a bit
> fuzzy here, but I seem to recall a property on the request_queue or device
> that affected
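The 128KB-vs-4KB request sizes being discussed show up directly in iostat or blktrace, for example (device names are illustrative):

iostat -x rbd0 dm-0 1                        # compare avgrq-sz (512-byte sectors) with and without LVM
blktrace -d /dev/dm-0 -o - | blkparse -i -   # per-request trace if more detail is needed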
On Sun, 20 Oct 2013, Ugis wrote:
> >> output follows:
> >> #pvs -o pe_start /dev/rbd1p1
> >> 1st PE
> >> 4.00m
> >> # cat /sys/block/rbd1/queue/minimum_io_size
> >> 4194304
> >> # cat /sys/block/rbd1/queue/optimal_io_size
> >> 4194304
> >
> > Well, the parameters are being set at least. Mike
On 10/20/2013 08:18 AM, Ugis wrote:
output follows:
#pvs -o pe_start /dev/rbd1p1
1st PE
4.00m
# cat /sys/block/rbd1/queue/minimum_io_size
4194304
# cat /sys/block/rbd1/queue/optimal_io_size
4194304
Well, the parameters are being set at least. Mike, is it possible that
having minimum_io_size
>> output follows:
>> #pvs -o pe_start /dev/rbd1p1
>> 1st PE
>> 4.00m
>> # cat /sys/block/rbd1/queue/minimum_io_size
>> 4194304
>> # cat /sys/block/rbd1/queue/optimal_io_size
>> 4194304
>
> Well, the parameters are being set at least. Mike, is it possible that
> having minimum_io_size set to
On Fri, 18 Oct 2013, Ugis wrote:
> > Ugis, please provide the output of:
> >
> > RBD_DEVICE=
> > pvs -o pe_start $RBD_DEVICE
> > cat /sys/block/$RBD_DEVICE/queue/minimum_io_size
> > cat /sys/block/$RBD_DEVICE/queue/optimal_io_size
> >
> > The 'pvs' command will tell you where LVM aligned the start
> Ugis, please provide the output of:
>
> RBD_DEVICE=
> pvs -o pe_start $RBD_DEVICE
> cat /sys/block/$RBD_DEVICE/queue/minimum_io_size
> cat /sys/block/$RBD_DEVICE/queue/optimal_io_size
>
> The 'pvs' command will tell you where LVM aligned the start of the data
> area (which follows the LVM metadata
On Wed, Oct 16 2013 at 12:16pm -0400,
Sage Weil wrote:
> Hi,
>
> On Wed, 16 Oct 2013, Ugis wrote:
> >
> > What could make such a great difference when LVM is used, and what/how
> > should I tune? As write performance does not differ, DM extent lookup
> > should not be lagging, so where is the trick?
>
> My
On 16/10/2013 17:16, Sage Weil wrote:
I'm not sure what options LVM provides for aligning things to the
underlying storage...
There is a generic kernel ABI for exposing performance properties of
block devices to higher layers, so that they can automatically tune
themselves according to those
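Concretely, those are the hints RBD already exports via the queue limits shown elsewhere in this thread, and LVM can be told to honour or override them when the PV is created; an illustrative example (device name assumed):

cat /sys/block/rbd0/queue/minimum_io_size   # 4194304 = the 4 MB RBD object/stripe size
cat /sys/block/rbd0/queue/optimal_io_size
pvcreate --dataalignment 4m /dev/rbd0       # force the PV data area onto a 4 MB boundary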
Hi,
On Wed, 16 Oct 2013, Ugis wrote:
> Hello Ceph & LVM communities!
>
> I noticed very slow reads from an XFS mount that is on a Ceph
> client (rbd + GPT partition + LVM PV + XFS on LE).
> To find a cause I created another rbd in the same pool, formatted it
> straight away with XFS, and mounted it.
>
> Write perfo
Hello Ceph & LVM communities!
I noticed very slow reads from an XFS mount that is on a Ceph
client (rbd + GPT partition + LVM PV + XFS on LE).
To find a cause I created another rbd in the same pool, formatted it
straight away with XFS, and mounted it.
Write performance for both XFS mounts is similar, ~12MB/s;
reads w
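For anyone wanting to reproduce this, the comparison described is essentially two dd runs against each mount, e.g. (paths and sizes are illustrative):

dd if=/dev/zero of=/mnt/lvm-xfs/test bs=1M count=1024 oflag=direct   # write test
echo 3 > /proc/sys/vm/drop_caches                                    # drop the page cache before reading
dd if=/mnt/lvm-xfs/test of=/dev/null bs=1M iflag=direct              # read test
# repeat on the plain (non-LVM) XFS mount and compare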