Thanks Sage.
We have an older Ceph version (Firefly) where I could not see this
behavior (i.e. on Saturday, after enabling deep-scrub at 7 AM, I could
not see deep-scrubs starting for Friday's pending PG deep-scrubs).
Are deep-scrubs scheduled based on the last deep-scrub timestamp? For
example: a PG deep-scrub…
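One way to check how the last deep-scrub timestamps are actually
distributed is to count PGs per stamp date. A rough sketch, assuming the
Hammer/Jewel-era plain "ceph pg dump" layout where deep_scrub_stamp is
the last column of each PG line (verify against the header row of your
release):

  ceph pg dump 2>/dev/null \
    | awk '$1 ~ /^[0-9]+\.[0-9a-f]+$/ { print $(NF-1) }' \
    | sort | uniq -c

Each output line is "<PG count> <date>", which should line up with the
per-day distribution quoted below.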
On Mon, 3 Apr 2017, M Ranga Swami Reddy wrote:
+ ceph-devel
On Mon, Feb 27, 2017 at 3:54 PM, M Ranga Swami Reddy
wrote:
I use a ceph cluster and it shows the deep-scrub PG distribution below,
taken from the "ceph pg dump" command:
2000 Friday
1000 Saturday
4000 Sunday
==
On Friday, I disabled deep-scrub for some reason. In this case, will
all of Friday's PG deep-scrubs be performed on Saturday?
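For context, the usual way to pause and resume deep scrubbing
cluster-wide is the nodeep-scrub flag (a sketch; per-pool or per-OSD
approaches would look different):

  ceph osd set nodeep-scrub      # stop new deep-scrubs from being scheduled
  ceph osd unset nodeep-scrub    # allow them again
  ceph -s                        # while set, the flag is shown in the cluster status

Deep-scrubs that are already running finish; only new ones are held back.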
Thanks for the tip Robert, much appreciated.
> -Original Message-
> From: Robert LeBlanc [mailto:rob...@leblancnet.us]
> Sent: 23 November 2016 00:54
> To: Eugen Block
> Cc: Nick Fisk ; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] deep-scrubbing has large impact on performance
Thank you!
Quoting Nick Fisk:
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On
Behalf Of Eugen Block
Sent: 22 November 2016 10:11
To: Nick Fisk
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] deep-scrubbing has large impact on performance
Thanks for the very quick answer!
> If you are using Jewel…
We are still using Hammer (0.94.7); we wanted to upgrade to Jewel in a
couple of weeks. Would you recommend doing it now?
Quoting Nick Fisk:
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Eugen Block
> Sent: 22 November 2016 09:55
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] deep-scrubbing has large impact on performance
>
> Hi list,
>
> I've been searching the mailing list…
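A sketch of one throttling knob that was commonly suggested for this on
Hammer/Jewel-era clusters: lowering the I/O priority of the OSD disk
thread that does the scrubbing (values are illustrative, and this only
takes effect when the OSD data disks use the CFQ elevator):

  ceph tell osd.* injectargs \
    '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'

This is a runtime change and does not survive an OSD restart.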
On 08.11.2016 at 10:17, Kees Meijs wrote:
> Hi,
>
> As promised, our findings so far:
>
> * For the time being, the new scrubbing parameters work well.
Which parameters do you refer to? Currently we're on Hammer.
Hi,
As promised, our findings so far:
* For the time being, the new scrubbing parameters work well.
* Using CFQ for spinners and NOOP for SSD seems to spread load over
the storage cluster a little better than deadline does. However,
overall latency seems (just a feeling, no numbers to back this up)…
I would suggest deadline for any SSD, NVMe SSD, or PCIe flash card. But you
will need to supply the deadline settings too, or deadline won't be any
different from running with noop.
Rick
Sent from my iPhone, please excuse any typing errors.
> On Oct 31, 2016, at 5:01 AM, Wido den Hollander wrote: …
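For reference, the deadline tunables Rick mentions live under sysfs; a
sketch with an assumed device name (sdb) and illustrative values:

  cat /sys/block/sdb/queue/iosched/read_expire      # current value, in ms
  echo 100  > /sys/block/sdb/queue/iosched/read_expire
  echo 1000 > /sys/block/sdb/queue/iosched/write_expire
  echo 16   > /sys/block/sdb/queue/iosched/fifo_batch

These files only exist while the device is actually using the deadline
scheduler, and they reset on reboot unless applied via udev or an init
script.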
Hi,
Interesting... We're now running using deadline. In other posts I read
about noop for SSDs instead of CFQ.
Since we're using spinners with SSD journals, does it make sense to mix
schedulers? E.g. CFQ for spinners _and_ noop for SSD?
K.
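One way to mix schedulers per device type is a udev rule keyed on the
rotational flag; a sketch (rule file name and matching pattern are just
examples):

  # /etc/udev/rules.d/60-io-scheduler.rules
  ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="cfq"
  ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"

This sets CFQ on spinners and noop on SSDs at boot/hotplug time, which
matches the split being discussed here.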
Hi,
On 28-10-16 12:06, w...@42on.com wrote:
> I don't like this personally. Your cluster should be capable of doing
> a deep scrub at any moment. If not it will also not be able to handle
> a node failure during peak times.
Valid point and I totally agree. Unfortunately, the current load doesn't…
> On 28 Oct 2016 at 11:52, Kees Meijs wrote the following:
>
> Hi Cephers,
>
> Using Ceph 0.94.9-1trusty we noticed severe I/O stalling during deep
> scrubbing (vanilla parameters used in regards to scrubbing). I'm aware this
> has been discussed before, but I'd like to share this…
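To see what scrub settings a running OSD actually has (vanilla or not),
the admin socket can be queried on the node hosting it; osd.0 here is
just an example id:

  ceph daemon osd.0 config show | grep scrub

That lists osd_max_scrubs, osd_scrub_sleep, the chunk sizes and the
other scrub-related options with their current values.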
…posting this to the ceph-users mailing list.
On Tue, Oct 25, 2016 at 2:02 AM, Andrzej Jakowski wrote:
> Hi,
>
> I wanted to learn more about the Ceph community's take on the deep
> scrubbing process.
> It seems that deep scrubbing is expected to read data from physical
> media: NAND dies or magnetic…
Sorry, I am not sure whether it will look OK in your production environment.
Maybe you could use the command: ceph tell osd.0 injectargs
"--osd_scrub_sleep 0.5". This command affects only one OSD.
If it works fine for some days, you could set it for all OSDs.
This is just a suggestion.
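Applying the same setting to every OSD at once, and persisting it across
restarts, might look like this (0.1 is just an illustrative value):

  ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'

  # ceph.conf, so the value survives OSD restarts:
  [osd]
  osd scrub sleep = 0.1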
Hi Loic,
do you think patch https://github.com/ceph/ceph/pull/3318 is
worth backporting to firefly and giant?
2015-04-13 14:00 GMT+08:00 Christian Balzer :
On Mon, 13 Apr 2015 13:42:39 +0800 池信泽 wrote:
I knew the scheduler was in the pipeline; good to see it made it in.
However, the vast majority of people with production clusters will be
running something "stable", mostly Firefly at this moment.
Sorry, 0.87 is giant.
BTW, you could also set osd_scrub_sleep on your cluster. Ceph will then
sleep for the time you define after it has scrubbed some objects.
But I am not sure whether it will work well for you.
Thanks.
hi, you could restrict scrub to certain times of day based on
https://github.com/ceph/ceph/pull/3318.
You could set osd_scrub_begin_hour and osd_scrub_end_hour to whatever
suits you. This feature is available since 0.93.
But it has not been backported to 0.87 (hammer).
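The corresponding ceph.conf entries would look something like this
(hours are 0-24, interpreted in the local time of each OSD host; 23 and
6 are just example values):

  [osd]
  osd scrub begin hour = 23
  osd scrub end hour = 6

On a release that has them, the same options can also be changed at
runtime with "ceph tell osd.* injectargs".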
On 13 April 2015 at 11:02, Christian Balzer wrote:
> Yeah, that's a request/question that comes up frequently.
> And so far there's no option in Ceph to do that (AFAIK), it would be
> really nice along with scheduling options (don't scrub during peak hours),
> which have also been talked about.
>
…on a cluster basis rather than
on an OSD basis.
Andrei
- Original Message -
> From: "Jean-Charles Lopez"
> To: "Andrei Mikhailovsky"
> Cc: ceph-users@lists.ceph.com
> Sent: Sunday, 12 April, 2015 5:17:10 PM
> Subject: Re: [ceph-users] deep scrubbing causes osd down
> …scrub/deep-scrubs running at the same time on my cluster. How do I
> implement this?
>
> Thanks
>
> Andrei
>
> From: "Andrei Mikhailovsky"
> To: "LOPEZ Jean-Charles"
> Cc: ceph-users@lists.ceph.com
> Sent: Sunday, 12 April, 2015 9:02:05 AM
> Subject: Re: [ceph-users] deep scrubbing causes osd down
> JC,
> I've implemented the following changes to the ceph.conf and restarted
> mons and osds.
> osd_scrub_chunk_min = 1
> osd_scrub_chunk_max = 5
> Things have become c…
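For anyone wanting to try the same change without a full restart, a
sketch (values copied from the message above):

  ceph tell osd.* injectargs '--osd_scrub_chunk_min 1 --osd_scrub_chunk_max 5'

  # equivalent ceph.conf entries under [osd]:
  osd scrub chunk min = 1
  osd scrub chunk max = 5

Smaller chunks mean each scrub step reads and blocks writes on fewer
objects at a time, at the cost of more scheduling overhead.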
…see if it makes a
difference.
Thanks for your feedback
Andrei
- Original Message -
> From: "LOPEZ Jean-Charles"
> To: "Andrei Mikhailovsky"
> Cc: "LOPEZ Jean-Charles" ,
> ceph-users@lists.ceph.com
> Sent: Saturday, 11 April, 2015 7:54:18 PM
Hi Andrei,
1) what ceph version are you running?
2) what distro and version are you running?
3) have you checked the disk elevator for the OSD devices to be set to cfq?
4) Have you considered exploring the following parameters to further tune:
- osd_scrub_chunk_min: lower the default value of…
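Regarding 3), checking and switching the elevator at runtime is a quick
sysfs operation (sdb is just an example device; the echo is not
persistent across reboots):

  cat /sys/block/sdb/queue/scheduler        # the active one is shown in [brackets]
  echo cfq > /sys/block/sdb/queue/scheduler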
It looks like deep scrub keeps the disks busy and some threads block on this.
Maybe you could lower the scrub-related configuration values and watch the
disk utilization while deep-scrubbing.
On Sat, Apr 11, 2015 at 3:01 AM, Andrei Mikhailovsky wrote:
> Hi guys,
>
> I was wondering if anyone noticed that the d…
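A rough way to correlate the two while deep scrubs are running (field
names as in the usual sysstat and ceph outputs):

  iostat -x 5                                         # watch %util and await on the OSD data disks
  ceph pg dump 2>/dev/null | grep -c 'scrubbing+deep'  # how many PGs are deep-scrubbing right now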