Re: [ceph-users] deep-scrubbing

2017-04-03 Thread M Ranga Swami Reddy
Thanks Sage. We have an older Ceph version (Firefly), where I could not see this behavior (that is, on Saturday, after enabling deep-scrub at 7 AM, I could not see deep-scrubs start for Friday's PGs). Are deep-scrubs scheduled based on the time-stamp? For ex: a PG deep-sc
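(A rough sketch, not quoted from the thread: one way to see whether deep-scrubs are driven by the per-PG timestamps is to count PGs per last-deep-scrub date. The awk field assumes a Firefly/Hammer-era "ceph pg dump" plain layout where deep_scrub_stamp is the last column of each PG row; verify the column order on your own release first.)

  # count PGs per last-deep-scrub date (date part of deep_scrub_stamp)
  ceph pg dump 2>/dev/null | awk '/^[0-9]+\.[0-9a-f]+/ { print $(NF-1) }' | sort | uniq -c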

Re: [ceph-users] deep-scrubbing

2017-04-03 Thread Sage Weil
On Mon, 3 Apr 2017, M Ranga Swami Reddy wrote: > + ceph-devel > > On Mon, Feb 27, 2017 at 3:54 PM, M Ranga Swami Reddy > wrote: > > Hello, > > I use a ceph cluster and it shows the deep-scrub PG distribution below > > from the "ceph pg dump" command: > > > > > >2000 Friday > >100

Re: [ceph-users] deep-scrubbing

2017-04-03 Thread M Ranga Swami Reddy
+ ceph-devel On Mon, Feb 27, 2017 at 3:54 PM, M Ranga Swami Reddy wrote: > Hello, > I use a ceph cluster and it shows the deep-scrub PG distribution below > from the "ceph pg dump" command: > > >2000 Friday >1000 Saturday >4000 Sunday > == > > On Friday, I have disabled the

Re: [ceph-users] deep-scrubbing

2017-04-03 Thread M Ranga Swami Reddy
I use a ceph cluster and it shows the deep-scrub PG distribution below, from the "ceph pg dump" command: 2000 Friday 1000 Saturday 4000 Sunday == On Friday, I disabled deep-scrub for some reason. In this case, will all of Friday's PG deep-scrubs be performed on Saturday
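(For context, a minimal sketch of how deep scrubbing is commonly disabled and re-enabled cluster-wide with the nodeep-scrub flag; this is not quoted from the thread:)

  ceph osd set nodeep-scrub      # stop scheduling new deep-scrubs
  ceph osd unset nodeep-scrub    # allow deep-scrubs again
  ceph -s                        # the flag is listed in the cluster status while set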

Re: [ceph-users] deep-scrubbing has large impact on performance

2016-11-23 Thread Nick Fisk
> To: 'Robert LeBlanc'; 'Eugen Block' > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] deep-scrubbing has large impact on performance > > Thanks for the tip Robert, much appreciated. > > > -Original Message- > > From: Robert LeBla

Re: [ceph-users] deep-scrubbing has large impact on performance

2016-11-23 Thread Nick Fisk
Thanks for the tip Robert, much appreciated. > -Original Message- > From: Robert LeBlanc [mailto:rob...@leblancnet.us] > Sent: 23 November 2016 00:54 > To: Eugen Block > Cc: Nick Fisk ; ceph-users@lists.ceph.com > Subject: Re: [ceph-users] deep-scrubbing has large imp

Re: [ceph-users] deep-scrubbing has large impact on performance

2016-11-22 Thread Robert LeBlanc
at 5:34 AM, Eugen Block wrote: > Thank you! > > > Quoting Nick Fisk: >>> -Original Message- >>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of >>> Eugen Block >>> Sent: 22 November 2016 10:11 >>> To: Nic

Re: [ceph-users] deep-scrubbing has large impact on performance

2016-11-22 Thread Eugen Block
Thank you! Quoting Nick Fisk: -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Eugen Block Sent: 22 November 2016 10:11 To: Nick Fisk Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] deep-scrubbing has large impact on

Re: [ceph-users] deep-scrubbing has large impact on performance

2016-11-22 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Eugen Block > Sent: 22 November 2016 10:11 > To: Nick Fisk > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] deep-scrubbing has large impact on performance >

Re: [ceph-users] deep-scrubbing has large impact on performance

2016-11-22 Thread Eugen Block
Thanks for the very quick answer! > If you are using Jewel We are still using Hammer (0.94.7); we wanted to upgrade to Jewel in a couple of weeks. Would you recommend doing it now? Quoting Nick Fisk: -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com]

Re: [ceph-users] deep-scrubbing has large impact on performance

2016-11-22 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Eugen Block > Sent: 22 November 2016 09:55 > To: ceph-users@lists.ceph.com > Subject: [ceph-users] deep-scrubbing has large impact on performance > > Hi list, > > I've been searching the mai

Re: [ceph-users] Deep scrubbing causes severe I/O stalling

2016-11-08 Thread Stefan Priebe - Profihost AG
On 08.11.2016 at 10:17, Kees Meijs wrote: > Hi, > > As promised, our findings so far: > > * For the time being, the new scrubbing parameters work well. Which parameters do you refer to? Currently we're on hammer. > * Using CFQ for spinners and NOOP for SSD seems to spread load over >

Re: [ceph-users] Deep scrubbing causes severe I/O stalling

2016-11-08 Thread Kees Meijs
Hi, As promised, our findings so far: * For the time being, the new scrubbing parameters work well. * Using CFQ for spinners and NOOP for SSD seems to spread load over the storage cluster a little better than deadline does. However, overall latency seems (just a feeling, no numbers t
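(A side note on the CFQ choice, sketched here rather than taken from the thread: the OSD disk-thread I/O priority options are only honored when the device uses the CFQ elevator, which is one reason to keep CFQ on spinners. The values below are illustrative.)

  # ceph.conf, [osd] section -- only effective with the CFQ scheduler
  osd_disk_thread_ioprio_class = idle
  osd_disk_thread_ioprio_priority = 7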

Re: [ceph-users] Deep scrubbing causes severe I/O stalling

2016-10-31 Thread rick stehno
I would suggest deadline for any SSD, NVMe SSD, or PCIe flash card. But you will need to supply the deadline settings too, or deadline won't be any different than running with noop. Rick Sent from my iPhone, please excuse any typing errors. > On Oct 31, 2016, at 5:01 AM, Wido den Hollander wr
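(The deadline tunables Rick refers to live under the device's iosched directory; a hedged example with commonly suggested values, not values from this thread -- the right numbers depend on the workload:)

  echo deadline > /sys/block/sdb/queue/scheduler
  echo 150 > /sys/block/sdb/queue/iosched/read_expire     # ms before a read must be served
  echo 1500 > /sys/block/sdb/queue/iosched/write_expire   # ms before a write must be served
  echo 16 > /sys/block/sdb/queue/iosched/fifo_batch       # requests dispatched per batch
  echo 2 > /sys/block/sdb/queue/iosched/writes_starved    # read batches before a write batch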

Re: [ceph-users] Deep scrubbing causes severe I/O stalling

2016-10-31 Thread Wido den Hollander
> On 28 October 2016 at 15:37, Kees Meijs wrote: > > > Hi, > > Interesting... We're now running using deadline. In other posts I read > about noop for SSDs instead of CFQ. > > Since we're using spinners with SSD journals, does it make sense to mix > the scheduler? E.g. CFQ for spinners _and_

Re: [ceph-users] Deep scrubbing causes severe I/O stalling

2016-10-28 Thread Kees Meijs
Hi, Interesting... We're now running using deadline. In other posts I read about noop for SSDs instead of CFQ. Since we're using spinners with SSD journals, does it make sense to mix the scheduler? E.g. CFQ for spinners _and_ noop for SSD? K. On 28-10-16 14:43, Wido den Hollander wrote: > Make
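(The scheduler is a per-block-device setting, so mixing is straightforward; a sketch, with device names assumed -- adjust for your hosts:)

  echo cfq > /sys/block/sda/queue/scheduler    # spinner
  echo noop > /sys/block/sdb/queue/scheduler   # SSD journal device
  # or pick automatically from the rotational flag
  for d in /sys/block/sd*; do
    if [ "$(cat $d/queue/rotational)" = "1" ]; then
      echo cfq > $d/queue/scheduler
    else
      echo noop > $d/queue/scheduler
    fi
  done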

Re: [ceph-users] Deep scrubbing causes severe I/O stalling

2016-10-28 Thread Wido den Hollander
> On 28 October 2016 at 13:18, Kees Meijs wrote: > > > Hi, > > On 28-10-16 12:06, w...@42on.com wrote: > > I don't like this personally. Your cluster should be capable of doing > > a deep scrub at any moment. If not, it will also not be able to handle > > a node failure during peak times. > >

Re: [ceph-users] Deep scrubbing causes severe I/O stalling

2016-10-28 Thread Kees Meijs
Hi, On 28-10-16 12:06, w...@42on.com wrote: > I don't like this personally. Your cluster should be capable of doing > a deep scrub at any moment. If not, it will also not be able to handle > a node failure during peak times. Valid point and I totally agree. Unfortunately, the current load doesn't

Re: [ceph-users] Deep scrubbing causes severe I/O stalling

2016-10-28 Thread w...@42on.com
> On 28 Oct 2016 at 11:52, Kees Meijs wrote the following: > > Hi Cephers, > > Using Ceph 0.94.9-1trusty we noticed severe I/O stalling during deep > scrubbing (vanilla parameters used in regards to scrubbing). I'm aware this > has been discussed before, but I'd like to share th
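(Not part of the original message: the "vanilla parameters" are the scrub throttles in ceph.conf; a sketch of the ones most often tuned in these threads, with illustrative values:)

  [osd]
  osd_max_scrubs = 1              # concurrent scrubs per OSD
  osd_scrub_sleep = 0.1           # pause between scrub chunks, in seconds
  osd_scrub_chunk_min = 1
  osd_scrub_chunk_max = 5
  osd_scrub_load_threshold = 2.5  # skip scheduled scrubs above this load average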

Re: [ceph-users] Deep scrubbing

2016-10-24 Thread kefu chai
Posting this to the ceph-users mailing list. On Tue, Oct 25, 2016 at 2:02 AM, Andrzej Jakowski wrote: > Hi, > > Wanted to learn more about the Ceph community's take on the deep > scrubbing process. > It seems that deep scrubbing is expected to read data from physical > media: NAND dies or magnetic

Re: [ceph-users] deep scrubbing causes osd down

2015-04-13 Thread 池信泽
Sorry, I am not sure whether it will look OK in your production environment. Maybe you could use the command: ceph tell osd.0 injectargs "--osd_scrub_sleep 0.5". This command affects only one OSD. If it works fine for some days, you could set it for all OSDs. This is just a suggestion. 2015-04-1
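(A hedged expansion of the suggestion above: injectargs changes are runtime-only, so if the value helps, persist it in ceph.conf as well.)

  ceph tell osd.0 injectargs '--osd_scrub_sleep 0.5'     # try on one OSD first
  ceph tell 'osd.*' injectargs '--osd_scrub_sleep 0.5'   # then roll out to all OSDs
  # persist under [osd] in ceph.conf:
  #   osd_scrub_sleep = 0.5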

Re: [ceph-users] deep scrubbing causes osd down

2015-04-12 Thread Lindsay Mathieson
On 13 April 2015 at 16:00, Christian Balzer wrote: > However, the vast majority of people with production clusters will be > running something "stable", mostly Firefly at this moment. > > > Sorry, 0.87 is giant. > > > > BTW, you could also set osd_scrub_sleep on your cluster. Ceph would > > sleep

Re: [ceph-users] deep scrubbing causes osd down

2015-04-12 Thread 池信泽
Hi, Loic: Do you think the patch https://github.com/ceph/ceph/pull/3318 is worth backporting to Firefly and Giant? 2015-04-13 14:00 GMT+08:00 Christian Balzer : > > On Mon, 13 Apr 2015 13:42:39 +0800 池信泽 wrote: > > I knew the scheduler was in the pipeline, good to see it made it in. > > How

Re: [ceph-users] deep scrubbing causes osd down

2015-04-12 Thread Christian Balzer
On Mon, 13 Apr 2015 13:42:39 +0800 池信泽 wrote: I knew the scheduler was in the pipeline, good to see it made it in. However, the vast majority of people with production clusters will be running something "stable", mostly Firefly at this moment. > Sorry, 0.87 is giant. > > BTW, you could also set

Re: [ceph-users] deep scrubbing causes osd down

2015-04-12 Thread 池信泽
Sorry, 0.87 is Giant. BTW, you could also set osd_scrub_sleep on your cluster. Ceph would sleep for the time you define after it has scrubbed some objects. But I am not sure whether it would work well for you. Thanks. 2015-04-13 13:30 GMT+08:00 池信泽 : > hi, you could restrict scrub to certain times

Re: [ceph-users] deep scrubbing causes osd down

2015-04-12 Thread 池信泽
Hi, you could restrict scrubbing to certain times of day based on https://github.com/ceph/ceph/pull/3318. You could set osd_scrub_begin_hour and osd_scrub_end_hour to values that suit you. This feature has been available since 0.93, but it has not been backported to 0.87 (hammer). 2015-04-13 12:55 GMT+
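(For reference, a minimal sketch of the time-window settings from that pull request; the hours shown are only an example:)

  [osd]
  osd_scrub_begin_hour = 1   # only start scrubs between 01:00 ...
  osd_scrub_end_hour = 6     # ... and 06:00 local time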

Re: [ceph-users] deep scrubbing causes osd down

2015-04-12 Thread Lindsay Mathieson
On 13 April 2015 at 11:02, Christian Balzer wrote: > Yeah, that's a request/question that comes up frequently. > And so far there's no option in Ceph to do that (AFAIK), it would be > really nice along with scheduling options (don't scrub during peak hours), > which have also been talked about. >

Re: [ceph-users] deep scrubbing causes osd down

2015-04-12 Thread Christian Balzer
Original Message - > > > From: "Jean-Charles Lopez" > > To: "Andrei Mikhailovsky" > > Cc: ceph-users@lists.ceph.com > > Sent: Sunday, 12 April, 2015 5:17:10 PM > > Subject: Re: [ceph-users] deep scrubbing causes osd down > > > Hi andrei

Re: [ceph-users] deep scrubbing causes osd down

2015-04-12 Thread Andrei Mikhailovsky
on a cluster basis rather than on an osd basis. Andrei - Original Message - > From: "Jean-Charles Lopez" > To: "Andrei Mikhailovsky" > Cc: ceph-users@lists.ceph.com > Sent: Sunday, 12 April, 2015 5:17:10 PM > Subject: Re: [ceph-users] deep scr

Re: [ceph-users] deep scrubbing causes osd down

2015-04-12 Thread Jean-Charles Lopez
rub/deep-scrubs running at the same time on my cluster. How do I > implement this? > > Thanks > > Andrei > > From: "Andrei Mikhailovsky" > To: "LOPEZ Jean-Charles" > Cc: ceph-users@lists.ceph.com > Sent: Sunday, 12 April, 2015 9:02:05 AM > Su

Re: [ceph-users] deep scrubbing causes osd down

2015-04-12 Thread Andrei Mikhailovsky
nt: Sunday, 12 April, 2015 9:02:05 AM > Subject: Re: [ceph-users] deep scrubbing causes osd down > JC, > I've implemented the following changes to the ceph.conf and restarted > mons and osds. > osd_scrub_chunk_min = 1 > osd_scrub_chunk_max = 5 > Things have become c
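(As a side note, the same chunk settings can also be applied at runtime without restarting the daemons; a sketch, assuming the OSDs accept injectargs for these options:)

  ceph tell 'osd.*' injectargs '--osd_scrub_chunk_min 1 --osd_scrub_chunk_max 5'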

Re: [ceph-users] deep scrubbing causes osd down

2015-04-12 Thread Andrei Mikhailovsky
s@lists.ceph.com > Sent: Saturday, 11 April, 2015 7:54:18 PM > Subject: Re: [ceph-users] deep scrubbing causes osd down > Hi Andrei, > 1) what ceph version are you running? > 2) what distro and version are you running? > 3) have you checked the disk elevator for the OSD dev

Re: [ceph-users] deep scrubbing causes osd down

2015-04-11 Thread Andrei Mikhailovsky
d see if it makes a difference. Thanks for your feedback Andrei - Original Message - > From: "LOPEZ Jean-Charles" > To: "Andrei Mikhailovsky" > Cc: "LOPEZ Jean-Charles" , > ceph-users@lists.ceph.com > Sent: Saturday, 11 April, 2015 7:5

Re: [ceph-users] deep scrubbing causes osd down

2015-04-11 Thread LOPEZ Jean-Charles
Hi Andrei, 1) what ceph version are you running? 2) what distro and version are you running? 3) have you checked that the disk elevator for the OSD devices is set to cfq? 4) Have you considered exploring the following parameters to further tune - osd_scrub_chunk_min lower the default value of
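(Checking the elevator from point 3 is a one-liner; a sketch, device names assumed:)

  # the active scheduler is shown in [brackets]
  cat /sys/block/sd*/queue/scheduler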

Re: [ceph-users] deep scrubbing causes osd down

2015-04-10 Thread Haomai Wang
It looks like deep scrub makes the disk busy and some threads block on this. Maybe you could lower the scrub-related configuration values and watch the disk utilization when deep-scrubbing. On Sat, Apr 11, 2015 at 3:01 AM, Andrei Mikhailovsky wrote: > Hi guys, > > I was wondering if anyone noticed that the d
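(One way to watch the disk utilization while a deep-scrub runs, sketched with sysstat's iostat rather than quoted from the thread:)

  iostat -x 5                                           # watch %util and await on the OSD disks
  ceph pg dump 2>/dev/null | grep -c 'scrubbing+deep'   # how many PGs are deep-scrubbing right now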