Hello Eugen,
Hi (please don't drop the ML from your responses),
Sorry. I didn't pay attention. I will.
All PGs of the pool cephfs are affected, and they are spread across all OSDs
Then just pick a random one and check if anything stands out. I'm not
sure if you mentioned it already; did you also try rest
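(A minimal sketch of the PG check suggested above; the pool name is from
this thread, the pgid is only an example:)

    # list the PGs of the pool and pick one
    ceph pg ls-by-pool cephfs
    # dump the full state of a single PG and look for anything unusual
    ceph pg 3.1f query | less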
Hi,
Is there any command-line history available to get at least some sort
of history of events?
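(One way to reconstruct events, sketched under the assumption of a bash
shell and systemd-managed daemons; unit names vary between package and
cephadm deployments:)

    # shell history of the root user
    cat /root/.bash_history
    # journal of the mon daemons
    journalctl -u 'ceph-mon@*' --since "2 days ago"
    # the cluster log, if any quorum is still able to answer
    ceph log last 200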
Are all MONs down or has one survived?
Could he have tried to change IP addresses or something? There's an
old blog post [0] explaining how to clean up. And here's some more
reading [1] on how to m
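(For reference, the cleanup after a mon IP change boils down to rewriting
the monmap by hand, as documented upstream; a sketch, where the mon id,
paths and address are examples only:)

    # stop the mon and extract its current map
    systemctl stop ceph-mon@mon1
    ceph-mon -i mon1 --extract-monmap /tmp/monmap
    # inspect, drop the stale entry, add the corrected address
    monmaptool --print /tmp/monmap
    monmaptool --rm mon1 /tmp/monmap
    monmaptool --add mon1 192.168.0.10:6789 /tmp/monmap
    # inject the fixed map and restart
    ceph-mon -i mon1 --inject-monmap /tmp/monmap
    systemctl start ceph-mon@mon1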
On Wed, Aug 14, 2024 at 8:23 Raphaël Ducom wrote:
>
> Hi
>
> I'm reaching out to check on the status of the XFS deadlock issue with RBD
> in hyperconverged environments, as detailed in Ceph tracker issue #43910 (
> https://tracker.ceph.com/issues/43910?tab=history). It looks like there
> hasn’t been much activity
In the end I built an image based on Ubuntu 22.04, which does not
mandate x86-64-v2. I installed the official Ceph packages and hacked
here and there (e.g. it was necessary to set the uid and gid of the Ceph
user and group identical to those used by the CentOS Stream 8 image to
avoid messing up the ownership of existing files).
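(For anyone rebuilding such an image: the CentOS-based Ceph packages
create the ceph user and group with uid/gid 167, so something along
these lines inside the image build, before installing the Ubuntu
packages, keeps ownership consistent; a sketch, not the exact recipe
used above:)

    # match the uid/gid used by the CentOS Stream 8 based images (167)
    groupadd -g 167 ceph
    useradd -u 167 -g 167 -d /var/lib/ceph -s /usr/sbin/nologin ceph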
The upgrade ended successfully, but now the cluster reports this error:
MDS_CLIENTS_BROKEN_ROOTSQUASH: 1 MDS report clients with broken
root_squash implementation
From what I understand, this is due to a new feature meant to fix a bug
in the root_squash implementation, and that will be released in an
upcoming point release.
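(Until the affected clients are upgraded, the warning can be inspected
and, if acceptable, muted; a sketch, where the mds target is an example:)

    # list the client sessions the MDS is complaining about
    ceph tell mds.cephfs:0 client ls
    # silence the warning until the clients are updated
    ceph health mute MDS_CLIENTS_BROKEN_ROOTSQUASH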
One mon survived (it took us a while to find it since it was in a damaged
state), and we have since been able to create a new second mon where an old
mon was - quorum has been re-established. We are not able to use `ceph
orch` to deploy new mons now, though: it is giving us an error from the
keyring
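(A few things worth checking for that keyring error; a sketch, assuming
the orchestrator backend is cephadm:)

    # confirm quorum and the mon. key the orchestrator deploys mons with
    ceph quorum_status --format json-pretty
    ceph auth get mon.
    # recent cephadm log entries usually show the failing step
    ceph log last 50 info cephadm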
Hi Nicola,
You might want to post in the ceph-dev list about this or discuss it with devs
in the ceph-devel slack channel for quicker help.
Best,
Frédéric.
From: Nicola Mori
Sent: Wednesday, August 21, 2024 15:52
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Pu