[ceph-users] Re: Can I delete cluster_network?

2025-01-09 Thread Dan van der Ster
Hi, cluster_network and public_network are read when a daemon starts up in order to decide which interface to bind to, and which IP address / port to advertise to the rest of the cluster. So you can normally modify any of those, as long as all the daemons can still reach each other on the new IPs
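
A minimal sketch of what removing it could look like, assuming cluster_network is set in the cluster config database rather than in a local ceph.conf (verify that all daemons can still reach each other over public_network first):

~# ceph config get osd cluster_network     # check what is currently set
~# ceph config rm global cluster_network   # drop the setting cluster-wide
~# ceph orch daemon restart osd.0          # restart each OSD so it rebinds; repeat per OSD id

If the option lives in ceph.conf on the hosts instead, remove it there before restarting the daemons.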

[ceph-users] Can I delete cluster_network?

2025-01-09 Thread ??????????
Hello, I have installed a Ceph cluster, version v17.2.8. The cluster has public_network and cluster_network. For some other reasons, I want to temporarily remove cluster_network and let service data, heartbeat detection, and data recovery run over public_network. Is this achievable? Aft

[ceph-users] Re: Slow initial boot of OSDs in large cluster with unclean state

2025-01-09 Thread Joshua Baergen
> I'm wondering about the influence of WAL/DBs collocated on HDDs on OSD creation time, OSD startup time, peering and osdmap updates, and the role it might play regarding flapping, when DB IOs compete with client IOs, even with 100% active+clean PGs. FWIW, having encountered these long-st
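
One way to check whether a given OSD's DB/WAL actually sits on the main rotational device (a sketch assuming standard BlueStore metadata fields; osd id 0 is just an example):

~# ceph osd metadata 0 | grep -E 'bluefs_dedicated_db|bluefs_dedicated_wal|rotational'

A "bluefs_dedicated_db": "0" together with a rotational main device indicates the DB is collocated on the HDD.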

[ceph-users] Re: squid 19.2.1 RC QE validation status

2025-01-09 Thread Adam Emerson
On 07/01/2025, Adam Emerson wrote: > On 16/12/2024, Yuri Weinstein wrote: > > rgw - Eric, Adam E > > Approved for RGW. Failures were in tests and we've got fixes for those now. I apologize, but I am going to have to block for a critical fix. I will try to have it up and in today or tomorrow.

[ceph-users] Re: who build RPM package

2025-01-09 Thread John Mulligan
On Thursday, January 9, 2025 12:45:14 AM EST Tony Liu wrote: > Hi, I wonder which team is building Ceph RPM packages for CentOS Stream 9? I see Reef RPM packages in [1] and [2]. For example, ceph-18.2.4-0.el9.x86_64.rpm in [1] while ceph-18.2.4-1.el9.x86_64.rpm and -2 in [2]. Are th

[ceph-users] Re: Slow initial boot of OSDs in large cluster with unclean state

2025-01-09 Thread Frédéric Nass
Hi Tom, Great talk there! Since your cluster must be one of the largest in the world, it would be nice to share your experience with the community as a case study [1]. The Ceph project is looking for contributors right now. If interested, let me know and we'll see how we can organize that. I c

[ceph-users] Re: OSDs won't come back after upgrade

2025-01-09 Thread Eugen Block
Hi, I suggest increasing the debug level for a single OSD and then inspecting the log. Maybe there's a hint pointing to osd_map_share_max_epochs as well. I assume you had noout set while the OSDs were out for a long time? Quoting Jorge Garcia: Hello, I'm going down the long and w
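
A sketch of raising the debug level for one OSD and reverting it afterwards (osd.12 is just an example id):

~# ceph config set osd.12 debug_osd 10/10        # persists across restarts
~# ceph tell osd.12 config set debug_osd 10/10   # or apply at runtime only
~# ceph config rm osd.12 debug_osd               # revert once the log has been captured

and, if OSDs will be down for a while:

~# ceph osd set noout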

[ceph-users] Re: Protection of WAL during spillover on implicitly colocated db/wal devices

2025-01-09 Thread Igor Fedotov
Hi Wesley, during spillover (or, more precisely, during a "no DB space" condition; DB spillover tends to occur before that), WAL allocations follow the same logic as DB ones: they are served from the main device. So the WAL effectively remains in use, but it allocates space from the "slow" device. Than
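
To see whether this is happening on a given OSD, one option (a sketch; osd.0 is an example) is to check for spillover warnings and the BlueFS counters:

~# ceph health detail | grep -i spillover   # BLUEFS_SPILLOVER warnings, if the warning is enabled
~# ceph daemon osd.0 perf dump bluefs       # run on the OSD host; slow_used_bytes > 0 means BlueFS is using the main/slow device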

[ceph-users] Re: Experimental upgrade of a Cephadm-managed Squid cluster to Ubuntu Noble (walk-through and RFC)

2025-01-09 Thread Redouane Kachach
Probably this current behaviour (of not disabling the whole ceph-target) when entering maintenance mode is not correct, as the whole node is affected (and any cluster(s) running on the same node). I'll raise this in the next cephadm weekly and see what the team thinks. On Thu, Jan 2, 2025 at 5:22 P
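
For reference, the maintenance mode being discussed is the cephadm host maintenance workflow, roughly (<hostname> is a placeholder):

~# ceph orch host maintenance enter <hostname>   # stops the Ceph daemons on that host
~# ceph orch host maintenance exit <hostname>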

[ceph-users] Re: ceph orch upgrade tries to pull latest?

2025-01-09 Thread Stephan Hohn
Hi Tobias, have you tried setting your private registry before running the upgrade command? ~# ceph cephadm registry-login e.g. ~# ceph cephadm registry-login harborregistry ~# ceph orch upgrade start --image harborregistry/quay.io/ceph/ceph:v18.2.4 This might also help to debug: ~# ceph -
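
For completeness, the registry-login form takes the registry URL plus credentials (the values below are placeholders, not from the thread):

~# ceph cephadm registry-login <registry-url> <username> <password>
~# ceph cephadm registry-login -i registry.json   # alternatively, a JSON file holding the registry URL and credentials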

[ceph-users] Re: ceph orch upgrade tries to pull latest?

2025-01-09 Thread tobias tempel
Dear Adam, thank you very much for your reply. In /var/log/ceph/cephadm.log I saw lots of entries like this: 2025-01-08 10:00:22,045 7ff021d8c000 DEBUG cephadm ['--image', 'harborregistry/quay.io/ceph/ceph', '--t

[ceph-users] Re: Random ephemeral pinning, what happens to sub-tree under pin root dir

2025-01-09 Thread Frank Schilder
Hi Patrick, thanks for your answers. We can't pin the directory above /cephfs/root as it is the root of the ceph-fs itself, which doesn't accept any pinning. Following your explanation and the docs, I'm also not sure what the original/intended use-case for random pinning was/is. To me it makes
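
For reference, the pinning knobs being discussed are the CephFS directory vxattrs; a sketch on a hypothetical mount point /mnt/cephfs (paths are placeholders):

~# setfattr -n ceph.dir.pin.random -v 0.01 /mnt/cephfs/some/dir      # descendant directories get ephemerally pinned to a random rank with ~1% probability
~# setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/some/dir    # hash immediate children across all ranks instead
~# setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/some/dir               # clear an explicit export pin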

[ceph-users] Re: squid 19.2.1 RC QE validation status

2025-01-09 Thread Matan Breizman
crimson-rados approved. The failures were fixed in main and were not backported to the `squid` branch. This is acceptable, as Crimson is a tech preview in Squid. Thank you, Matan On Thu, Jan 9, 2025 at 12:54 AM Yuri Weinstein wrote: > We are still missing some approvals: > > crimson-rados - Matan, S