Hi,
I haven't dealt with this myself yet, but the docs [0] state:
A bug was discovered in root_squash which would potentially lose
changes made by a client restricted with root_squash caps. The fix
required a change to the protocol and a client upgrade is required.
This is a HEALTH_ERR warning.
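If it helps, a rough way to check whether old clients are still connected (just a sketch, the output will obviously differ per cluster):

  ceph health detail   # names the sessions the MDS is complaining about
  ceph features        # connected clients grouped by release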
Re: Pull failed on cluster upgrade
The upgrade ended successfully, but now the cluster reports this error:
MDS_CLIENTS_BROKEN_ROOTSQUASH: 1 MDS report clients with broken
root_squash implementation
From what I understood this is due to a new feature meant to fix a bug
in the root_squash implementation, and that will be released in a future version.
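In case it helps others: root_squash lives in the client's MDS caps, so the affected keys can be spotted with something like this (sketch; client.myuser is a made-up name):

  ceph health detail            # lists the sessions flagged by the MDS
  ceph auth get client.myuser   # look for a cap like:
  # caps mds = "allow rw fsname=cephfs root_squash"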
In the end I built an image based on Ubuntu 22.04, which does not
mandate x86-64-v2. I installed the official Ceph packages and hacked
here and there (e.g. it was necessary to set the uid and gid of the Ceph
user and group identical to those used by the CentOS Stream 8 image to
avoid messing up the ownership of existing data).
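A rough sketch of such an image (untested as written; it assumes uid/gid 167, which is what the RPM-based packaging uses for the ceph user, so verify against your own data first):

  FROM ubuntu:22.04
  # ceph user/group with the same ids as the RPM-based image, so that
  # existing OSD/MON data keeps the right ownership
  RUN groupadd -g 167 ceph && \
      useradd -u 167 -g 167 -d /var/lib/ceph -s /usr/sbin/nologin ceph
  # official Ceph packages for jammy (reef used as an example release)
  RUN apt-get update && apt-get install -y ca-certificates curl gnupg && \
      curl -fsSL https://download.ceph.com/keys/release.asc | \
        gpg --dearmor -o /usr/share/keyrings/ceph.gpg && \
      echo "deb [signed-by=/usr/share/keyrings/ceph.gpg] https://download.ceph.com/debian-reef jammy main" \
        > /etc/apt/sources.list.d/ceph.list && \
      apt-get update && apt-get install -y ceph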
Thank you Konstantin, as it was foreseeable this problem didn't hit just
me. So I hope the build of images based on CentOS Stream 8 will be
resumed. Otherwise I'll try to build myself.
Nicola
Hi,
> On 7 Aug 2024, at 10:31, Nicola Mori wrote:
>
> Unfortunately I'm on bare metal, with very old hardware so I cannot do much.
> I'd try to build a Ceph image based on Rocky Linux 8 if I could get the
> Dockerfile of the current image to start with, but I've not been able to find
> it. Can you please help me with this?
Unfortunately I'm on bare metal, with very old hardware so I cannot do
much. I'd try to build a Ceph image based on Rocky Linux 8 if I could
get the Dockerfile of the current image to start with, but I've not been
able to find it. Can you please help me with this?
Cheers,
Nicola
If you're using VMs,
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/6X6QIEMWDYSA6XOKEYH5OJ4TIQSBD5BL/
might be relevant.
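(The gist is to give the guest the host's CPU model instead of a generic one; a sketch with QEMU directly:

  qemu-system-x86_64 -enable-kvm -cpu host -m 4096 disk.qcow2

or <cpu mode='host-passthrough'/> in the libvirt domain XML.)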
On Tue, Aug 6, 2024 at 3:21 AM Nicola Mori wrote:
> I think I found the problem. Setting the cephadm log level to debug and
> then watching the logs during the upgrade: [...]
What operating system/distribution are you running? What hardware?
David
On Tue, Aug 6, 2024, at 02:20, Nicola Mori wrote:
> I think I found the problem. Setting the cephadm log level to debug and
> then watching the logs during the upgrade:
>
> ceph config set mgr mgr/cephadm/log_to_cluster_level debug
> ceph -W cephadm --watch-debug
I think I found the problem. Setting the cephadm log level to debug and
then watching the logs during the upgrade:
ceph config set mgr mgr/cephadm/log_to_cluster_level debug
ceph -W cephadm --watch-debug
I found this line just before the error:
ceph: stderr Fatal glibc error: CPU does not support x86-64-v2
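For whoever wants to check a node beforehand: on glibc 2.33 or newer the dynamic loader reports the supported micro-architecture levels (the loader path may differ between distros):

  /lib64/ld-linux-x86-64.so.2 --help | grep -E 'x86-64-v[0-9]'
  # prints e.g.: x86-64-v2 (supported, searched)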
On 05.08.24 18:38, Nicola Mori wrote:
> docker.io/snack14/ceph-wizard
This is not an official container image.
The images from the Ceph project are on quay.io/ceph/ceph.
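To point an existing cephadm cluster back at the official image, something like this should work (pick the tag matching your release; v18.2.4 is just an example):

  ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.4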
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de