Hi! Thanks for your help.
How can I increase the history interval for the command ceph daemon osd.<id>
dump_historic_ops? It only keeps operations from the last several minutes.
I see slow requests on random OSDs each time, and on different hosts (there
are three). From the logs, the problem does not appear to be related to
scrubbing.
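One knob that may help (a sketch, assuming the Luminous-era osd_op_history_*
options; verify the exact names on your build with
"ceph daemon osd.<id> config show | grep op_history"): dump_historic_ops only
returns ops still held in the OSD's in-memory history, whose size and
retention are adjustable at runtime:

```shell
# Keep up to 600 completed ops instead of the default 20:
ceph daemon osd.0 config set osd_op_history_size 600

# Retain them for 3600 seconds instead of the default 600:
ceph daemon osd.0 config set osd_op_history_duration 3600

# Then dump the retained history:
ceph daemon osd.0 dump_historic_ops
```

To apply the same change to every OSD at once, injectargs should also work:
ceph tell osd.* injectargs '--osd_op_history_duration 3600'. Note these
settings are not persistent across OSD restarts unless also set in ceph.conf.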

Regards,
Olga Ukhina


2017-10-20 4:42 GMT+03:00 Brad Hubbard <bhubb...@redhat.com>:

> I guess you have both read and followed
> http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/?highlight=backfill#debugging-slow-requests
>
> What was the result?
>
> On Fri, Oct 20, 2017 at 2:50 AM, J David <j.david.li...@gmail.com> wrote:
> > On Wed, Oct 18, 2017 at 8:12 AM, Ольга Ухина <olga.uh...@gmail.com> wrote:
> >> I have a problem with ceph luminous 12.2.1.
> >> […]
> >> I have slow requests on different OSDs at random times (for example at
> >> night), but I don’t see any other problems at the time they occur
> >> […]
> >> 2017-10-18 01:20:38.187326 mon.st3 mon.0 10.192.1.78:6789/0 22689 : cluster [WRN] Health check update: 49 slow requests are blocked > 32 sec (REQUEST_SLOW)
> >
> > This looks almost exactly like what we have been experiencing, and
> > your use-case (Proxmox client using rbd) is the same as ours as well.
> >
> > Unfortunately we have not been able to find the source of the issue so far,
> > and haven’t gotten much feedback from the list.  Extensive testing of
> > every component has ruled out any hardware issue we can think of.
> >
> > Originally we thought our issue was related to deep-scrub, but that
> > now appears not to be the case, as it happens even when nothing is
> > being deep-scrubbed.  Nonetheless, although they aren’t the cause,
> > they definitely make the problem much worse.  So you may want to check
> > to see if deep-scrub operations are happening at the times where you
> > see issues and (if so) whether the OSDs participating in the
> > deep-scrub are the same ones reporting slow requests.
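> >
> > One way to check that correlation (a sketch; it assumes the default
> > cluster log location /var/log/ceph/ceph.log on a monitor host, and uses
> > the 2017-10-18 01:xx window from the log excerpt above):

```shell
# PGs that are scrubbing or deep-scrubbing right now:
ceph pg dump pgs_brief 2>/dev/null | grep -i scrub

# Deep-scrubs recorded in the cluster log around the slow-request window:
grep 'deep-scrub' /var/log/ceph/ceph.log | grep '2017-10-18 01:'

# Slow-request warnings in the same window, to compare OSD/PG involvement:
grep 'REQUEST_SLOW' /var/log/ceph/ceph.log | grep '2017-10-18 01:'
```

> > If the OSDs overlap, scrub throttling and scheduling options such as
> > osd_scrub_sleep or osd_scrub_begin_hour/osd_scrub_end_hour may be worth
> > looking at.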
> >
> > Hopefully you have better luck finding/fixing this than we have!  It’s
> > definitely been a very frustrating issue for us.
> >
> > Thanks!
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> --
> Cheers,
> Brad
>
