https://lists.ceph.io/hyperkitty/search?mlist=ceph-users%40ceph.io&q=Identify+slow+ops
On Fri, Feb 18, 2022 at 12:30 AM Alvaro Soto wrote:
Hi there...
There is an old thread about this; search for "[ceph-users] Identify slow
ops" in older emails ->
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/EOWNO3MDYRUZKAK6RMQBQ5WBPQNLHOPV/
Cheers!
On Thu, Feb 17, 2022 at 11:44 PM Szabo, Istvan (Agoda) <istvan.sz...@agoda.com> …
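For anyone following the thread: slow ops are usually tracked down via the cluster health output plus the OSD admin socket. A minimal sketch, assuming shell access to the OSD host (osd.12 is a placeholder):

    ceph health detail                          # names the OSDs currently reporting slow ops
    ceph daemon osd.12 dump_historic_slow_ops   # recent slow ops with per-event timestamps
    ceph daemon osd.12 dump_ops_in_flight       # ops currently in progress on that OSD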
From: Igor Fedotov <igor.fedo...@croit.io>
Sent: Thursday, 17 February 2022 16:01
To: Wissem MIMOUNA <wissem.mimo...@fiducialcloud.fr>
Subject: Re: [ceph-users] OSDs crash randomnisly
Wissem,
unfortunately there is no way to learn whether zombies have appeared other than
running fsck. But I think this can be performed on a weekly or even monthly basis;
from my experience, getting 32K zombies is a pretty rare case. But it's definitely
more reliable if you collect those statistics from…
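A rough sketch of the periodic fsck suggested above; the OSD id and data path are assumptions, and the OSD must be stopped while fsck runs:

    systemctl stop ceph-osd@12
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-12   # reports zombie spanning blobs, among other inconsistencies
    systemctl start ceph-osd@12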
Starting now!
On Thu, Feb 10, 2022 at 3:00 PM Neha Ojha wrote:
>
> Hi everyone,
>
> This month's Ceph User + Dev Monthly meetup is on February 17,
> 15:00-16:00 UTC. Please add topics you'd like to discuss in the agenda
> here: https://pad.ceph.com/p/ceph-user-dev-monthly-minutes. We are
> hoping…
Hi,
Some of our RGW and RBD clusters are still using cache tiering. We
encountered some issues before, and have since limited it to suitable
scenarios, for example write-performance-sensitive workloads without
heavy load, and we also try to keep the full ratio low. So far so good;
no serious issue has happened…
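For illustration, the full ratio mentioned above is a per-pool cache-tiering setting; a minimal sketch with a hypothetical cache pool name and illustrative values:

    ceph osd pool set hot-pool cache_target_full_ratio 0.6    # evict clean objects once the cache is 60% full
    ceph osd pool set hot-pool target_max_bytes 107374182400  # cap the cache tier at ~100 GiB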
Hi Wissem,
first of all, the bug wasn't fixed by the PR you're referring to; it just
added additional log output when the problem is detected.
Unfortunately the bug isn't fixed yet, as the root cause of the zombie
spanning blobs' appearance is still unclear. The relevant ticket is
https://tracker.cep…
Dear all,
Some OSDs on our Ceph cluster crash with no explanation.
Stopping and starting the crashed OSD daemon fixed the issue, but this has happened
a few times and I just need to understand the reason.
For your information, the error has been fixed by a log change in the Octopus
release (https://github.com…
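As a starting point for that kind of investigation, Ceph's crash module records daemon crashes; a minimal sketch (the crash id and OSD id are placeholders):

    ceph crash ls                  # list recorded crashes, newest last
    ceph crash info <crash-id>     # backtrace and metadata for a single crash
    journalctl -u ceph-osd@12 -e   # jump to the end of the crashed daemon's log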
Hi,
I’m taking my first steps with Ceph. I’ve installed a cluster following this
guide
(https://ralph.blog.imixs.com/2021/10/03/ceph-pacific-running-on-debian-11-bullseye/)
and have a Pacific release (16.2.7) running in Docker containers on Debian
hosts.
I’ve then deployed the RGW daemon and…
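For context, on a cephadm-managed Pacific cluster the RGW daemon is typically deployed via the orchestrator; a sketch with a hypothetical service name and placement:

    ceph orch apply rgw myrgw --placement="count:1"   # schedule one RGW daemon
    ceph orch ps --daemon_type rgw                    # confirm the daemon came up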