> > On Tue, May 21, 2019, 4:49 AM Jason Dillaman wrote:
> >> On Mon, May 20, 2019 at 2:17 PM Marc Schöchlin wrote:
> >> >
> >> > Hello cephers,
> >> >
> >> > we have a few systems which utilize a rbd-nbd map/mount [...]
>> > [...] mon.ceph-mon-s43 mon.0 10.23.27.153:6789/0
>> > 173640 : cluster [WRN] Health check update: 395 slow requests are blocked
>> > > 32 sec. Implicated osds 51 (REQUEST_SLOW)
>> > 2019-05-20 00:04:19.234877 mon.ceph-mon-s43 mon.0 10.23.27.153:6789/0
>> > 173641 : cluster [INF
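For reference: when the monitor reports REQUEST_SLOW and implicates a specific
OSD, the blocked operations can be inspected on that OSD directly. A minimal
sketch (osd.51 is taken from the log excerpt above; run the daemon commands on
the host that carries the OSD):

# cluster-wide view of the slow/blocked requests
ceph health detail

# per-OSD view via the admin socket: currently blocked ops and recent history
ceph daemon osd.51 dump_blocked_ops
ceph daemon osd.51 dump_historic_ops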
Hello cephers,
we have a few systems which utilize a rbd-nbd map/mount to get access to a rbd
volume.
(This problem seems to be related to "[ceph-users] Slow requests from bluestore
osds" (the original thread))
Unfortunately the rbd-nbd device of a system crashes three Mondays in a row
at ~00:00 when the systemd fstrim timer executes.
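Since the crashes line up with the weekly fstrim run (Ubuntu's fstrim.timer
defaults to OnCalendar=weekly, i.e. Monday 00:00), it can help to confirm the
schedule and to reproduce the trim manually. A minimal sketch, assuming the
stock systemd units:

# when did fstrim.timer last fire, and when will it fire next?
systemctl list-timers fstrim.timer

# inspect the shipped units
systemctl cat fstrim.timer fstrim.service

# reproduce during a quiet window: trim all mounted filesystems, verbosely
fstrim -av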
Quoting Marc Schöchlin (m...@256bit.org):
> Our new setup is now:
> (12.2.10 on Ubuntu 16.04)
>
> [osd]
> osd deep scrub interval = 2592000
> osd scrub begin hour = 19
> osd scrub end hour = 6
> osd scrub load threshold = 6
> osd scrub sleep = 0.3
> osd snap trim sleep = 0.4
> pg max concurrent s
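These options normally live in ceph.conf under [osd]; on a running Luminous
cluster they can also be pushed to the OSDs without a restart and verified per
daemon. A minimal sketch using the values quoted above (some scrub options only
take full effect on the next scrub cycle):

# push the scrub/trim tuning to all running OSDs
ceph tell osd.* injectargs '--osd_scrub_sleep 0.3 --osd_snap_trim_sleep 0.4'
ceph tell osd.* injectargs '--osd_scrub_begin_hour 19 --osd_scrub_end_hour 6'

# verify on a single OSD via its admin socket
ceph daemon osd.0 config get osd_scrub_sleep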
> [...]some seconds (SSD) to minutes (HDD) and
> perform a compact of the OMAP database.
>
> Regards,
>
> -Original message-
> From: ceph-users On behalf of Marc Schöchlin
> Sent: Monday, May 13, 2019 6:59
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-
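The OMAP compaction mentioned above can be triggered per OSD over its admin
socket. A minimal sketch (osd.51 is a placeholder ID; run on the host that
carries the OSD):

# online RocksDB/OMAP compaction; the OSD stays up but may respond more slowly
# while the compaction runs
ceph daemon osd.51 compact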
Hello cephers,
one week ago we replaced the bluestore cache size by "osd memory target" and
removed the detailed memory settings.
This storage class now runs 42*8TB spinners with a permanent write workload of
2000-3000 write IOPS, and 1200-8000 read IOPS.
Our new setup is now:
(12.2.10 on Ubuntu 16.04)
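For reference, a minimal sketch of such a ceph.conf setting; the 4 GiB value
is only an illustration, not the figure from the original mail:

[osd]
# cap the OSD's total memory use instead of hand-tuning the bluestore caches
osd memory target = 4294967296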
Hello cephers,
as described - we also have the slow requests in our setup.
We recently updated from Ceph 12.2.4 to 12.2.10, updated Ubuntu 16.04 to the
latest patch level (with kernel 4.15.0-43) and applied Dell firmware 2.8.0.
On 12.2.5 (before updating the cluster) we had in a frequency of 10m
I solved my slow requests by increasing the size of block.db. Calculate 4% per
stored TB and preferably host the DB on NVMe.
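A worked example of that 4 % rule of thumb (the 8 TB drive size is just an
assumed illustration):

# block.db sizing: 4 % of the stored capacity per OSD
#   8 TB * 0.04 = 0.32 TB  ->  roughly a 320 GB block.db partition on NVMe
echo "8000 * 0.04" | bc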
Hello Uwe,
as described in my mail we are running 4.13.0-39.
In conjunction with some later mails in this thread it seems that this problem
might be related to OS/microcode (Spectre) updates.
I am planning a Ceph/Ubuntu upgrade next week for various reasons; let's see
what happens...
Mine is currently at 1000 due to the high number of pgs we had coming from
Jewel. I do find it odd that only the bluestore OSDs have this issue.
Filestore OSDs seem to be unaffected.
On Wed, Sep 5, 2018, 3:43 PM Samuel Taylor Liston wrote:
> Just a thought - have you looked at increasing your "mon_max_pg_per_osd" [...]
Just a thought - have you looked at increasing your "mon_max_pg_per_osd" both
on the mons and osds? I was having a similar issue while trying to add more
OSDs to my cluster (12.2.27, CentOS 7.5, 3.10.0-862.9.1.el7.x86_64). I
increased mine to 300 temporarily while adding OSDs and stopped having
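A minimal sketch of raising that limit at runtime on a Luminous cluster (the
value 300 follows the suggestion above; keep it in ceph.conf as well if it
should survive restarts):

# raise the per-OSD PG limit on the monitors without a restart
ceph tell mon.* injectargs '--mon_max_pg_per_osd 300'

# persistent variant for ceph.conf on the mon (and, if desired, osd) hosts:
#   [global]
#   mon max pg per osd = 300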
I've experienced the same thing during scrubbing and/or any kind of
expansion activity.
Daniel Pryor
On Mon, Sep 3, 2018 at 2:13 AM Marc Schöchlin wrote:
> Hi,
>
> we are also experiencing this type of behavior for some weeks on our not
> so performance critical hdd pools. [...]
I'm running CentOS 7.5. If I turn off Spectre/Meltdown protection then a
security sweep will disconnect it from the network.
-Brett
On Wed, Sep 5, 2018 at 2:24 PM, Uwe Sauter wrote:
> I'm also experiencing slow requests though I cannot point it to scrubbing.
> [...]
I'm also experiencing slow requests though I cannot point it to scrubbing.
Which kernel do you run? Would you be able to test against the same kernel with
Spectre/Meltdown mitigations disabled ("noibrs noibpb nopti nospectre_v2" as
boot option)?
Uwe
On 05.09.18 at 19:30, Brett
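For reference, a quick way to see which mitigations a node currently applies
and where the suggested boot options would go. A sketch only; it requires a
reboot and, as noted elsewhere in the thread, may conflict with security
policy:

# the kernel's view of the Spectre/Meltdown mitigations
grep . /sys/devices/system/cpu/vulnerabilities/*

# kernel command line currently in effect
cat /proc/cmdline

# to test without mitigations, append "noibrs noibpb nopti nospectre_v2" to
# GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate the config:
update-grub    # Debian/Ubuntu; on CentOS: grub2-mkconfig -o /boot/grub2/grub.cfg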
Marc,
As with you, this problem manifests itself only when the bluestore OSD is
involved in some form of deep scrub. Anybody have any insight on what
might be causing this?
-Brett
On Mon, Sep 3, 2018 at 4:13 AM, Marc Schöchlin wrote:
> Hi,
>
> we are also experiencing this type of behavior [...]
Hi,
we are also experiencing this type of behavior for some weeks on our not so
performance-critical HDD pools.
We haven't spent much time on this problem because there are currently more
important tasks - but here are a few details:
Running the following loop results in the following output:
The warnings look like this:
The warnings look like this.
6 ops are blocked > 32.768 sec on osd.219
1 osds have slow requests
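The loop itself was not included in the archive snippet; a minimal sketch of
the kind of polling loop typically used to collect such output (an assumption,
not the original script):

# record every 30 s which OSDs currently have blocked/slow requests
while true; do
    date
    ceph health detail | grep -E 'ops are blocked|slow requests'
    sleep 30
done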
On Sun, Sep 2, 2018, 8:45 AM Alfredo Deza wrote:
> On Sat, Sep 1, 2018 at 12:45 PM, Brett Chancellor wrote:
> > Hi Cephers,
> > I am in the process of upgrading a cluster from Filestore to bluestore, [...]
Hi Cephers,
I am in the process of upgrading a cluster from Filestore to bluestore,
but I'm concerned about frequent warnings popping up against the new
bluestore devices. I'm frequently seeing messages like this, although the
specific osd changes, it's always one of the few hosts I've converted