Based on the original concept of *osd_max_backfills*, which prevents the
following situation:
"*If all of these backfills happen simultaneously, it would put
excessive load on the osd.*"
the value of "osd_max_backfills" can be important in some situations. So
we might not be able to say how it's impor
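A rough sketch of inspecting and capping this value on a live cluster (the osd.0
id here is just an example, not taken from the thread):
# ceph daemon osd.0 config get osd_max_backfills
# ceph tell osd.* injectargs '--osd-max-backfills 1'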
Hi Jan,
2015-06-01 15:43 GMT+08:00 Jan Schermer :
> We had to disable deep scrub or the cluster would be unusable - we need to
> turn it back on sooner or later, though.
> With minimal scrubbing and recovery settings, everything is mostly good.
> Turned out many issues we had were due to too few
From an ease-of-use standpoint, and depending on the situation in which you are
setting up your environment, the idea is as follows:
It seems like it would be nice to have some easy on-demand control where
you don't have to think a whole lot other than knowing how it is going to
affect your cluster in a gene
With a write-heavy RBD workload, I add the following to ceph.conf:
osd_max_backfills = 2
osd_recovery_max_active = 2
If things are going well during recovery (i.e. guests happy and no slow
requests), I will often bump both up to three:
# ceph tell osd.* injectargs '--osd-max-backfills 3 --osd-recovery-max-active 3'
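If slow requests do start appearing after such a bump, they show up in the
cluster health output, and the values can be dropped back down the same way;
a rough check, not from the original post:
# ceph health detail    # look for blocked/slow request warnings
# ceph tell osd.* injectargs '--osd-max-backfills 2 --osd-recovery-max-active 2'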
On Wed, Jun 3, 2015 at 3:44 PM, Sage Weil wrote:
> On Mon, 1 Jun 2015, Gregory Farnum wrote:
>> On Mon, Jun 1, 2015 at 6:39 PM, Paul Von-Stamwitz
>> wrote:
>> > On Fri, May 29, 2015 at 4:18 PM, Gregory Farnum wrote:
>> >> On Fri, May 29, 2015 at 2:47 PM, Samuel Just wrote:
>> >> > Many people h
On Mon, 1 Jun 2015, Gregory Farnum wrote:
> On Mon, Jun 1, 2015 at 6:39 PM, Paul Von-Stamwitz
> wrote:
> > On Fri, May 29, 2015 at 4:18 PM, Gregory Farnum wrote:
> >> On Fri, May 29, 2015 at 2:47 PM, Samuel Just wrote:
> >> > Many people have reported that they need to lower the osd recovery
>
On Mon, Jun 1, 2015 at 6:39 PM, Paul Von-Stamwitz
wrote:
> On Fri, May 29, 2015 at 4:18 PM, Gregory Farnum wrote:
>> On Fri, May 29, 2015 at 2:47 PM, Samuel Just wrote:
>> > Many people have reported that they need to lower the osd recovery config
>> > options to minimize the impact of recovery
On 06/01/2015 05:34 PM, Wang, Warren wrote:
Hi Mark, I don't suppose you logged latency during those tests, did you?
I'm one of the folks, as Bryan mentioned, that advocates turning these
values down. I'm okay with extending recovery time, especially when we are
talking about a default of 3x repl
Hi Mark, I don't suppose you logged latency during those tests, did you?
I'm one of the folks, as Bryan mentioned, that advocates turning these
values down. I'm okay with extending recovery time, especially when we are
talking about a default of 3x replication, with the trade off of better
client r
On 05/29/2015 04:47 PM, Samuel Just wrote:
Many people have reported that they need to lower the osd recovery config
options to minimize the impact of recovery on client io. We are talking about
changing the defaults as follows:
osd_max_backfills to 1 (from 10)
osd_recovery_max_active to 3 (f
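Expressed as a ceph.conf fragment, the proposed defaults above would look roughly
like this (a sketch based only on the two options named so far in this message):
[osd]
osd_max_backfills = 1
osd_recovery_max_active = 3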
Slow requests are not exactly tied to the PG number, but we were getting slow
requests whenever backfills or recoveries fired up - increasing the number of
PGs helped with this as the “blocks” of work are much smaller than before.
We have roughly the same number of OSDs as you but only one reall
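For context, a PG increase like the one described (4K to 16K) is normally done
per pool; a hedged example, with the pool name and counts purely illustrative:
# ceph osd pool set rbd pg_num 16384
# ceph osd pool set rbd pgp_num 16384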
On 06/01/15 09:43, Jan Schermer wrote:
> We had to disable deep scrub or the cluster would be unusable - we need to
> turn it back on sooner or later, though.
> With minimal scrubbing and recovery settings, everything is mostly good.
> Turned out many issues we had were due to too few PGs - once
We had to disable deep scrub or the cluster would be unusable - we need to turn
it back on sooner or later, though.
With minimal scrubbing and recovery settings, everything is mostly good. Turned
out many issues we had were due to too few PGs - once we increased them from 4K
to 16K everything sp
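Disabling scrubbing cluster-wide, as described above, is typically done with the
global OSD flags - a sketch, not necessarily how this poster did it:
# ceph osd set noscrub
# ceph osd set nodeep-scrub
# ...and later, to re-enable:
# ceph osd unset noscrub
# ceph osd unset nodeep-scrub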
On Fri, May 29, 2015 at 5:47 PM, Samuel Just wrote:
> Many people have reported that they need to lower the osd recovery config
> options to minimize the impact of recovery on client io. We are talking
> about changing the defaults as follows:
>
> osd_max_backfills to 1 (from 10)
> osd_recovery
On Fri, May 29, 2015 at 2:47 PM, Samuel Just wrote:
> Many people have reported that they need to lower the osd recovery config
> options to minimize the impact of recovery on client io. We are talking
> about changing the defaults as follows:
>
> osd_max_backfills to 1 (from 10)
> osd_recovery
To: Samuel Just <sj...@redhat.com>, ceph-devel <ceph-de...@vger.kernel.org>,
"'ceph-users@lists.ceph.com' (ceph-users@lists.ceph.com)"
Subject: Re: [ceph-users] Discuss: New
Sam,
We are seeing some good client IO results during recovery by using the
following values:
osd recovery max active = 1
osd max backfills = 1
osd recovery threads = 1
osd recovery op priority = 1
It is all flash though. The recovery time in the case of an entire node (~120 TB)
failure / a single dri
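For reference, the same values can also be pushed to running OSDs without a
restart; a hedged example, assuming the option names map directly onto the
ceph.conf keys listed above:
# ceph tell osd.* injectargs '--osd-recovery-max-active 1 --osd-max-backfills 1 --osd-recovery-op-priority 1'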
Hi,
We did it the other way around instead, defining a period where the load is
lighter and turning backfill/recovery off and on accordingly. Then you want the
backfill values to be what the defaults are right now.
Also, someone said (I think it was Greg?) that if you have problems with
backfill, your cluster backing st
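One way to implement an off-peak window like that (a sketch - the exact mechanism
the poster used isn't stated) is a pair of cron entries that set and unset the
cluster-wide backfill/recovery flags, so recovery only runs during the lighter
overnight period:
# crontab entries; times are illustrative
0 7  * * *  ceph osd set nobackfill   && ceph osd set norecover
0 22 * * *  ceph osd unset nobackfill && ceph osd unset norecover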
On Fri, May 29, 2015 at 5:47 PM, Samuel Just wrote:
> Many people have reported that they need to lower the osd recovery config
> options to minimize the impact of recovery on client io. We are talking
> about changing the defaults as follows:
>
> osd_max_backfills to 1 (from 10)
> osd_recovery