Hi,
this is a bit older cluster (Nautilus, bluestore only).
We’ve noticed that the cluster is almost continuously repairing PGs. However,
they all finish successfully with “0 fixed”. We cannot see what triggers Ceph
to repair the PGs, and it’s happening for a lot of PGs, not any specific ones.
Hi,
it sounds like you have auto-repair enabled (osd_scrub_auto_repair). I
guess you could disable that to see what's going on with the PGs and
their replicas. And/or you could enable debug logs. Are all daemons
running the same ceph (minor) version? I remember a customer case
where different minor versions caused a similar issue.
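Roughly something like this (untested here, so please double-check the option
names on your Nautilus release, and revert the debug level once you're done):
# ceph config get osd osd_scrub_auto_repair
# ceph config set osd osd_scrub_auto_repair false
# ceph versions
# ceph tell osd.* injectargs '--debug_osd 10/10'
The first two show and disable the auto-repair option, "ceph versions" compares
the running daemon versions, and the last one temporarily raises OSD logging.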
Hi,
in older versions of Ceph with the auto-repair feature, the PG state of
scrubbing PGs always included the repair state as well.
With later versions (I don't know exactly at which version) Ceph
differentiated scrubbing and repair again in the PG state.
I think as long as there are no errors logged, all is fine.
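To check that, something along these lines should work (the output columns can
differ a bit between releases):
# ceph pg dump pgs_brief | grep repair
# ceph health detail | grep -i scrub
If the repair flag only shows up together with deep scrubbing and health detail
reports no scrub errors, that would match what I described above.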
Hi Ondřej,
As you said, you can't add a new header in the response, but maybe you can
override one of the existing response fields?
e.g. Request.Response.Message
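A minimal sketch of what I mean (the script file name and the message text are
just placeholders, and I haven't tested this against your exact version):
# cat > response.lua <<'EOF'
-- postRequest context: overwrite the message field of the response
Request.Response.Message = "handled-by-lua"
EOF
# radosgw-admin script put --infile=response.lua --context=postRequest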
Let me know if that works for you.
Yuval
On Mon, Sep 4, 2023 at 1:33 PM Ondřej Kukla wrote:
> Hello,
>
> We have a RGW setup that
I solved this.
It has multiple layers.
1. RGW_API_HOST is no longer available in 17.2.6 as a configuration option for
the ceph mgr. (I was wrong below when I said it could be queried on an
*upgraded* host with:
# ceph dashboard get-rgw-api-host
You *can* query it with:
# ceph config dump | grep
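(I don't recall the exact key name, but it should be something like
# ceph config dump | grep -i rgw_api_host
i.e. the old dashboard setting still shows up in the config dump on an
upgraded cluster.)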
ceph orch device ls output shows "Insufficient space (<10 extents) on vgs, LVM
detected, locked" on a Quincy cluster. Is this just a warning, or should any
action be taken?
Hello there,
could you perhaps provide some more information on how (or where) this
got fixed? It doesn't seem to be fixed yet on the latest Ceph Quincy
and Reef versions, but maybe I'm mistaken. I've provided some more
context regarding this below, in case that helps.
On Ceph Quincy 17.2.6 I'm