Hi,
just a quick question:
Is a mixed cluster with nodes running AMD64 and ARM64 CPUs possible?
Is the cephadm orchestrator able to manage such a cluster?
Regards
--
Robert Sander
Linux Consultant
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel
Hi,
we have been running such a mixed cluster for almost 2 years.
It works without any cephadm-related issues. (Even during the upgrade from
Ubuntu 20.04 to 22.04, mixing nodes with docker and podman caused no issues.)
Just be careful when upgrading: the ceph arm64 containers for 18.2.4 are
completely b
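For what it's worth, a quick way to check which architecture each node runs and
which image/version each daemon uses before kicking off an upgrade. This is just
a rough sketch; it assumes SSH access to the hosts and that jq is installed:

    # Ask every cephadm-managed host for its CPU architecture
    for h in $(ceph orch host ls --format json | jq -r '.[].hostname'); do
        printf '%s: ' "$h"; ssh "$h" uname -m
    done

    ceph orch ps      # per-daemon version and container image id
    ceph versions     # confirms all daemons report the same Ceph version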
Hi, we have access and we can mount the cephfs volume, but for some reason the
data in it can be listed; when we try to copy some files from it, some can be
copied and some hang.
Is there a way to fix it, or to skip the files we cannot access?
Best regards
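When individual files hang like that, it sometimes helps to map the file to its
backing RADOS objects and check whether their PG is healthy. A rough sketch
only; the mount point, file path and data pool name below are examples, not
taken from this setup:

    # Hex inode number of a hanging file
    ino=$(printf '%x' "$(stat -c %i /mnt/cephfs/path/to/stuck-file)")
    # Its first backing object lives in the CephFS data pool as <ino>.00000000
    rados -p cephfs_data stat "${ino}.00000000"
    # Which PG and OSDs hold that object
    ceph osd map cephfs_data "${ino}.00000000"
    # Look for inactive/incomplete PGs or slow OSDs
    ceph health detail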
On Wed, Mar 5, 2025 at 01:59, 小小 <13071007...@163.com> wrote:
> Hi,
> I'm facing a critical issue with my Ceph cluster. It has become unable to
> read/write data properly and cannot recover normally. What steps should I
> take to resolve this?
Did you do anything to the cluster, or did anythin
Hello everyone,
I'm seeing some behaviour in CephFS that strikes me as unexpected, and I
wonder if others have thoughts about it.
Consider this scenario:
* Ceph Reef (18.2.4) deployed with cephadm, running on Ubuntu Jammy; the
CephFS client is running kernel 5.15.0-133-generic.
* CephFS is mounte
Hi everyone,
We have a Ceph multisite setup with a zonegroup containing three zones: one
master and two secondary zones, one of which is designated as an archive.
After recently upgrading to Ceph 19.2.1, I attempted to remove the archive
zone from the zonegroup following the steps in the Ceph documentation
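For reference, the sequence the multisite documentation describes is roughly
the following, as far as I recall. A sketch only; the zonegroup and zone names
are placeholders, not taken from this setup:

    # Remove the zone from the zonegroup first, then commit the new period
    radosgw-admin zonegroup remove --rgw-zonegroup=default --rgw-zone=archive
    radosgw-admin period update --commit
    # Only then delete the zone itself (its pools have to be cleaned up separately)
    radosgw-admin zone delete --rgw-zone=archive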
And do you also have the device_health_metrics pool? During one of the
upgrades (to Quincy or so) the older device_health_metrics pool should have
been renamed. But on one customer cluster I found that both were still
there, although that didn't cause any trouble. I don't really fully
grasp yet wh
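A quick way to check what is actually there (sketch; if I remember correctly
the device health data moved to the .mgr pool in Quincy, so a lingering
device_health_metrics pool may simply be a leftover):

    ceph osd pool ls | grep -E '^(\.mgr|device_health_metrics)$'
    # Shows whether the old pool still holds any data
    ceph df | grep -E '\.mgr|device_health_metrics'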
Hi,
I've had similar issues before.
radosgw-admin zonegroup/zone tooling is really bad in general.
As far as I know you should remove the zone from the zonegroup before
deleting it, but I don't think I've ever managed to remove a zone cleanly
from the cluster. (I would be thankful if someone shared a co
Hi,
just some assumptions based on my experience with CephFS:
- you cannot change existing objects; setting a different pool will not
automagically move data. The data pool seems to be stored in the inode
information.
- this also explains why changing the pool of a file does not work. ceph
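To illustrate (a sketch; the pool and path names are made up): the directory
layout only affects files created afterwards, and an existing file keeps the
pool recorded in its inode until the file itself is rewritten.

    # Point the directory at a different data pool (affects new files only;
    # the pool must already be added to the filesystem as a data pool)
    setfattr -n ceph.dir.layout.pool -v cephfs_data_new /mnt/cephfs/somedir
    # An existing file still reports the old pool
    getfattr -n ceph.file.layout.pool /mnt/cephfs/somedir/oldfile
    # The usual workaround is to rewrite the file so it lands in the new pool
    cp -a /mnt/cephfs/somedir/oldfile /mnt/cephfs/somedir/oldfile.new
    mv /mnt/cephfs/somedir/oldfile.new /mnt/cephfs/somedir/oldfile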
On Wed, Mar 5, 2025 at 9:34 AM Adam Prycki wrote:
> Hi,
>
> I've had similar issues before.
> radosgw-admin zonegroup/zone tooling is really bad in general.
>
>
> As far as I know you should remove the zone from the zonegroup before
> deleting it, but I don't think I've ever managed to remove a zone cleanly
Hi Florian,
Point 1 is certainly a bug regarding the choice of terms in the response
(confusion between file and directory).
Point 2 is known (cf. https://ewal.dev/cephfs-migrating-files-between-pools)
and described in the documentation: only new files are written to the new pool
after setting
Thanks, but it seems to be another issue.
Is there any way to upgrade the mgr without the orchestrator? Or any other services?
We are online but cannot issue any commands to the cluster.
alertmanager.ceph01  ceph01  *:9093,9094  running (10h)  6h ago  21M  63.1M  -  0.25.0  c8568f914cd2  0bf1d5
I wonder if you are hitting the same issue as this.
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/D3PHDKKPL7LUHLAMSCO4Y5DESMKIA4FP/
If so, you might have to play around with failing the manager so you can stop
the upgrade. AFAIK downgrading versions is not something that is tested a
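In case it helps, the sequence I would try first (a sketch, not a tested
downgrade procedure):

    ceph orch upgrade status     # see whether an upgrade is still in progress
    ceph orch upgrade stop       # ask the orchestrator to abort it
    ceph mgr fail                # fail over to a standby mgr if the active one is stuck
    ceph orch upgrade status     # verify the upgrade is no longer running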
I would even say that updating Ceph without the containers is easier.
>
> Is it possible to upgrade mgr without using ceph orch ?
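If the orchestrator itself is what is broken, one heavily manual approach is to
point the mgr's cephadm unit at the wanted image and restart it via systemd on
the mgr host. Sketch only; the fsid, daemon name and paths are placeholders,
so double-check against your own /var/lib/ceph layout before touching anything:

    cephadm ls        # on the mgr host: find the mgr daemon name and the fsid
    # Edit the container image reference in the daemon's unit.run file, e.g.
    #   /var/lib/ceph/<fsid>/mgr.<host>.<id>/unit.run
    # then restart the daemon's systemd unit:
    systemctl restart ceph-<fsid>@mgr.<host>.<id>.service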
Hi, we have an issue using ceph orch: it doesn't actually send any commands.
We try to restart or redeploy, but nothing works.
This happened after downgrading from 17.2.8 to 17.2.6.
Thanks
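A few things worth checking when the orchestrator stops acting on commands
(sketch):

    ceph orch status          # is the cephadm backend still reported as available?
    ceph health detail
    ceph log last cephadm     # recent cephadm log entries from the mgr
    ceph mgr fail             # failing over the mgr often revives a stuck orchestrator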
This is certainly intended behavior. If you checked the layout on the
particular file, you would see it hasn’t changed. Directory layouts are the
default for new files, not a control mechanism for existing files.
It might be confusing, so we can talk about different presentations if
there’s a bett
Is it possible to upgrade mgr without using ceph orch ?