On Thu, Oct 17, 2019 at 12:35 PM huxia...@horebdata.cn
wrote:
>
> hello, Robert
>
> thanks for the quick reply. I did test with osd op queue = wpq, and osd
> op queue cut off = high
> and
> osd_recovery_op_priority = 1
> osd recovery delay start = 20
> osd recovery max active = 1
> osd reco
thanks again,
samuel
huxia...@horebdata.cn
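A minimal sketch of how the options listed above could be laid out in ceph.conf, assuming a stock [osd] section (the values are simply the ones quoted in the mail; the layout itself is just an illustration):

    [osd]
    osd op queue = wpq
    osd op queue cut off = high
    osd recovery op priority = 1
    osd recovery delay start = 20
    osd recovery max active = 1

As far as I know, the two op queue options are only read when an OSD starts, so changing them needs an OSD restart to take effect; the recovery options can also be injected into running OSDs, for example:

    ceph tell 'osd.*' injectargs '--osd_recovery_op_priority 1 --osd_recovery_max_active 1'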
From: Robert LeBlanc
Date: 2019-10-17 21:23
To: huxia...@horebdata.cn
CC: ceph-users
Subject: Re: Re: [ceph-users] Openstack VM IOPS drops dramatically during Ceph
recovery
On Thu, Oct 17, 2019 at 12:08 PM huxia...@horebdata.cn
wrote:
>
> I happened to find a note that you wrote in Nov 2015:
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-November/006173.html
> and I believe this is exactly the same behavior I just hit: a host down
> will badly tak
From: Robert LeBlanc
Date: 2019-10-16 21:46
To: huxia...@horebdata.cn
CC: ceph-users
Subject: Re: Re: [ceph-users] Openstack VM IOPS drops dramatically during Ceph
recovery
On Wed, Oct 16, 2019 at 11:53 AM huxia...@horebdata.cn
wrote:
>
> My Ceph version is Luminous 12.2.12. Do you think I should upgrade to
> Nautilus, or will Nautilus have better control of recovery/backfilling?
We have a Jewel cluster and a Luminous cluster that we have changed
these settings on
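If it helps to double-check which values a running OSD is actually using, the admin socket can show them (a sketch; osd.0 is just an example ID, and the command has to be run on the host that carries that OSD):

    ceph daemon osd.0 config show | grep -E 'osd_op_queue|osd_recovery_op_priority|osd_recovery_max_active'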
Subject: Re: [ceph-users] Openstack VM IOPS drops dramatically during Ceph
recovery
On Thu, Oct 10, 2019 at 2:23 PM huxia...@horebdata.cn
wrote:
>
> Hi, folks,
>
> I have a middle-size Ceph cluster as Cinder backup for OpenStack (Queens).
> During testing, one Ceph node went down unexpectedly and powered up again ca. 10
> minutes later, and the Ceph cluster started PG recovery. To my surprise, VM IOPS
> dropped dramatically during Ceph recovery, from ca. 13K IOPS
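For completeness, a hedged sketch of how recovery and backfill pressure is commonly throttled at runtime in a situation like the one described above; osd_max_backfills and osd_recovery_sleep_hdd are not quoted in this thread, they are simply options often tuned together with the ones listed at the top of the mail:

    # Throttle recovery and backfill on all OSDs while the cluster catches up
    ceph tell 'osd.*' injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1 --osd_recovery_sleep_hdd 0.1'

    # For a planned short outage like the ~10 minute reboot described above,
    # noout keeps the down OSDs from being marked out and rebalanced:
    ceph osd set noout
    # ... reboot the node, wait for it to rejoin ...
    ceph osd unset noout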