Hi,
we have the same situation with one PG (38.34) on a different cluster: scrubs
and deep-scrubs keep running over and over for that same PG. I've logged a
period with one deep-scrub and several scrubs repeating. The OSD log from the
primary OSD can be found here:
https://www.dropbox.com/s/njmixbgzkfo1wws/ceph-osd.
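(For reference, the acting primary for a PG can be found with, e.g.:

    ceph pg map 38.34
    # osdmap eNNN pg 38.34 (38.34) -> up [...] acting [...]

The first OSD in the acting set is the primary; the ids in the output above
are placeholders.)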
Ah, same question then. If we can get logging on the primary for one
of those pgs, it should be fairly obvious.
-Sam
On Wed, Sep 21, 2016 at 4:08 AM, Pavan Rallabhandi wrote:
We see this as well in our freshly built Jewel clusters, and it seems to happen
only with a handful of PGs from a couple of pools.
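A quick way to spot the affected PGs is to watch the cluster log for repeating
scrub messages, along these lines:

    ceph -w | grep -E 'scrub (starts|ok)'

Any PG id showing up much more often than once a day is a candidate.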
Thanks!
On 9/21/16, 3:14 PM, "ceph-users on behalf of Tobias Böhm" wrote:
Hi,
there is an open bug in the tracker: http://tracker.ceph.com/issues/16474
Can you reproduce with logging on the primary for that pg?
debug osd = 20
debug filestore = 20
debug ms = 1
Since restarting the OSD may be a workaround, can you inject the debug
values without restarting the daemon?
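For example, assuming the primary is osd.12 (substitute the actual id),
something like

    ceph tell osd.12 injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1'

should apply them at runtime.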
-Sam
On Wed, Sep 21, 2016 at 2:44 AM, Tobias Böhm wrote:
Hi,
there is an open bug in the tracker: http://tracker.ceph.com/issues/16474
It also suggests restarting OSDs as a workaround. We faced the same issue after
increasing the number of PGs in our cluster, and restarting the OSDs solved it
for us as well.
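(On a systemd-based Jewel install that restart would be something along the
lines of

    systemctl restart ceph-osd@12

with 12 replaced by the id of the affected OSD.)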
Tobias
> On 21.09.2016 at 11:26, Dan van der
There was a thread about this a few days ago:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-September/012857.html
And the OP found a workaround.
Looks like a bug though... (by default PGs scrub at most once per day).
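One way to confirm the over-scrubbing is to watch the PG's scrub stamps, e.g.:

    ceph pg 38.34 query | grep scrub_stamp

With default settings (osd_scrub_min_interval is one day), the
last_scrub_stamp / last_deep_scrub_stamp fields shouldn't normally advance
more than about once a day.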
-- dan
On Tue, Sep 20, 2016 at 10:43 PM, Martin Bureau wrote: