Hi,
cluster_network and public_network are read when a daemon starts up in
order to decide which interface to bind to, and which IP address /
port to advertise to the rest of the cluster.
So you can normally modify either of those, as long as all the daemons
can still reach each other on the new IPs.
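For illustration, a minimal sketch of what that change could look like on a
cluster using the centralized config (assuming the option is not also set in
per-host ceph.conf files; the OSD id is only an example, and you would restart
one daemon at a time while watching cluster health):
  ceph config rm global cluster_network    # drop the option from the central config
  ceph config get osd cluster_network      # verify the OSDs no longer see a cluster_network
  ceph orch daemon restart osd.0           # restart each OSD so it rebinds, one at a time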
Hello, I have installed a Ceph cluster, version v17.2.8. The
cluster has both a public_network and a cluster_network. For certain reasons, I want
to temporarily remove the cluster_network and have service traffic, heartbeat
checks, and data recovery all run over the public_network. Is this achievable?
Aft
> I'm wondering about the influence of WAL/DBs collocated on HDDs on OSD
> creation time, OSD startup time, peering and osdmap updates, and the role it
> might play regarding flapping, when DB IOs compete with client IOs, even with
> 100% active+clean PGs.
FWIW, having encountered these long-st
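As an aside, one way to confirm whether an OSD's DB/WAL really sit on the data
HDD is the OSD metadata (osd.0 is only an example here):
  ceph osd metadata 0 | grep -E 'bluefs_dedicated_(db|wal)|rotational'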
On 07/01/2025, Adam Emerson wrote:
> On 16/12/2024, Yuri Weinstein wrote:
> > rgw - Eric, Adam E
>
> Approved for RGW. Failures were in tests and we've got fixes for those now.
I apologize, but I am going to have to block for a critical fix. I
will try to have it up and in today or tomorrow.
On Thursday, January 9, 2025 12:45:14 AM EST Tony Liu wrote:
> Hi,
>
> I wonder which team is building Ceph RPM packages for CentOS Stream 9?
> I see Reef RPM packages in [1] and [2].
> For example, ceph-18.2.4-0.el9.x86_64.rpm in [1] while
> ceph-18.2.4-1.el9.x86_64.rpm and -2 in [2].
>
> Are th
Hi Tom,
Great talk there!
Since your cluster must be one of the largest in the world, it would be nice to
share your experience with the community as a case study [1]. The Ceph project
is looking for contributors right now.
If interested, let me know and we'll see how we can organize that.
I c
Hi,
I suggest increasing the debug level for a single OSD and then
inspecting the log. Maybe there's a hint pointing to
osd_map_share_max_epochs as well. I assume that you had noout set
while the OSDs were out for a long time?
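Something along these lines, assuming osd.12 is one of the affected OSDs (the
log ends up in a file or in the journal depending on how logging is configured):
  ceph tell osd.12 config set debug_osd 10/10   # raise the debug level for just this OSD
  # reproduce / wait, then inspect that OSD's log for osdmap-related hints
  ceph tell osd.12 config set debug_osd 1/5     # back to the default afterwards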
Quoting Jorge Garcia:
Hello,
I'm going down the long and w
Hi Wesley,
during spillover (or more precisely, during a "no DB space" condition; DB
spillover tends to occur before that) WAL allocations follow the same logic
as DB ones: they are served from the main device. So the WAL effectively
remains in use, but it allocates space from the "slow" device.
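If it helps, a rough way to see how much DB/WAL data has ended up on the slow
device (osd.0 is just an example; run this on the OSD's host, e.g. from a
cephadm shell):
  ceph daemon osd.0 perf dump bluefs | grep -E '(db|slow)_(total|used)_bytes'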
Than
The current behaviour of not disabling the whole ceph target when putting a
node into maintenance is probably not correct, as the whole node is
affected (and any cluster(s) running on it).
I'll raise this in the next cephadm weekly and see what the team thinks.
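For reference, maintenance is entered and exited like this (the hostname is a
placeholder); entering it stops this cluster's daemons on that host, which is
the behaviour discussed above:
  ceph orch host maintenance enter host01
  ceph orch host maintenance exit host01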
On Thu, Jan 2, 2025 at 5:22 P
Hi Tobias,
have you tried setting your private registry before starting the upgrade
command?
~# ceph cephadm registry-login
e.g. ~# ceph cephadm registry-login harborregistry
~# ceph orch upgrade start --image harborregistry/quay.io/ceph/ceph:v18.2.4
This might also help to debug
~# ceph -
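For what it's worth, the full login form plus a pre-flight check of the image
might also be worth a try (the credentials here are placeholders):
~# ceph cephadm registry-login harborregistry myuser mypassword
~# ceph orch upgrade check --image harborregistry/quay.io/ceph/ceph:v18.2.4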
Dear Adam,
thank you very much for your reply.
In /var/log/ceph/cephadm.log I saw lots of entries like this:
2025-01-08 10:00:22,045 7ff021d8c000 DEBUG
cephadm ['--image', 'harborregistry/quay.io/ceph/ceph', '--t
Hi Patrick,
thanks for your answers. We can't pin the directory above /cephfs/root as it is
the root of the ceph-fs itself, which doesn't accept any pinning. Following
your explanation and the docs, I'm also not sure what the original/intended
use-case for random pinning was/is. To me it makes
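For anyone following along, both explicit and ephemeral pins are set via
extended attributes on a directory (the mount point and values below are just
examples):
  setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/some/dir            # pin this subtree to MDS rank 1
  setfattr -n ceph.dir.pin.random -v 0.01 /mnt/cephfs/some/dir  # randomly (ephemerally) pin ~1% of descendant dirs
  setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/some/dir           # remove the explicit pin again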
crimson-rados approved.
The failures were fixed in main and were not backported to the `squid` branch.
This is acceptable, as Crimson is a tech preview in Squid.
Thank you,
Matan
On Thu, Jan 9, 2025 at 12:54 AM Yuri Weinstein wrote:
> We are still missing some approvals:
>
> crimson-rados - Matan, S