[ceph-users] Re: Somehow throotle recovery even further than basic options?

2024-09-10 Thread Frédéric Nass
Hello Istvan,

Upon further reflection, I believe the script could also be used to remove nodes/OSDs, not just to add new ones. The key point is that you shouldn't remove the OSDs and purge them immediately after running the script, as you did previously. Instead, onc
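For context on the thread subject (throttling recovery beyond the basic options), a minimal sketch of the usual knobs follows. The values are illustrative only, and on Quincy and later with the mClock scheduler the classic limits are ignored unless the override flag is set:

```shell
# Illustrative recovery/backfill throttling knobs (example values, not advice).
# On mClock-era releases the op-queue limits below are ignored unless overridden:
ceph config set osd osd_mclock_override_recovery_settings true
ceph config set osd osd_max_backfills 1           # concurrent backfills per OSD
ceph config set osd osd_recovery_max_active 1     # concurrent recovery ops per OSD
ceph config set osd osd_recovery_sleep 0.1        # seconds to sleep between recovery ops
```

These commands require a live cluster; revert the override once recovery pressure is acceptable again.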

[ceph-users] Re: RFC: cephfs fallocate

2024-09-10 Thread Milind Changire
On Tue, Sep 10, 2024 at 5:36 PM Ilya Dryomov wrote:
>
> On Tue, Sep 10, 2024 at 1:23 PM Milind Changire wrote:
> >
> > Problem:
> > CephFS fallocate implementation does not actually reserve data blocks
> > when mode is 0.
> > It only truncates the file to the given size by setting the file size

[ceph-users] User + Dev Monthly Meetup coming up on Sept. 25th!

2024-09-10 Thread Laura Flores
Hi all,

The User + Dev Monthly Meetup is coming up on Sept. 25th! This month, we will have a discussion on incorporating the upmap-remapped [1] & pgremapper [2] tools with the balancer mgr module. There has been interest in the community for a while in incorporating the logic from these extern
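For readers unfamiliar with these tools, here is a hedged sketch of how upmap-remapped is typically combined with the balancer today, before any such integration (the script name and piping convention follow the CERN ceph-scripts repository; exact paths and flags may differ in your environment):

```shell
# Typical manual workflow the proposed integration would automate (sketch).
ceph balancer off                  # stop the balancer while we intervene
ceph osd set norebalance           # hold off data movement
# ... make the topology change (add/remove OSDs, adjust CRUSH weights) ...
./upmap-remapped.py | sh           # emit upmaps pinning remapped PGs to their current OSDs
ceph osd unset norebalance
ceph balancer on                   # let the balancer remove the upmaps gradually
```

The effect is that data movement happens at the balancer's measured pace rather than all at once after the topology change.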

[ceph-users] Re: squid 19.2.0 QE validation status

2024-09-10 Thread Laura Flores
Rados approved

On Tue, Sep 10, 2024 at 1:43 PM Laura Flores wrote:
> I have finished reviewing the upgrade and smoke suites. Most failures are
> already known/tracked:
> https://tracker.ceph.com/projects/rados/wiki/SQUID#v1920-build-3-httpstrackercephcomissues67779
>
> *Upgrade: Pending check fr

[ceph-users] Re: squid 19.2.0 QE validation status

2024-09-10 Thread Laura Flores
I have finished reviewing the upgrade and smoke suites. Most failures are already known/tracked: https://tracker.ceph.com/projects/rados/wiki/SQUID#v1920-build-3-httpstrackercephcomissues67779

*Upgrade: Pending check from FS team*
Two fs issues stood out. @Gregory Farnum @Venky Shankar can you c

[ceph-users] CLT meeting notes: Sep 09, 2024

2024-09-10 Thread David Orman
CLT discussion on Sep 09, 2024

19.2.0 release:
* Cherry picked patch: https://github.com/ceph/ceph/pull/59492
* Approvals requested for re-runs

CentOS Stream/distribution discussions ongoing
* Significant implications in infrastructure for building/testing requiring ongoing discussions/work to d

[ceph-users] Multisite replication design

2024-09-10 Thread Nathan MALO
Hi,

I am currently setting up a multi-site replication with rook-ceph. My setup is the following:
- clusterA in regionA containing local buckets (let's call them bucketsA)
- clusterB in regionB containing local buckets (let's call them bucketsB)
- clusterC in regionC containing local buckets (
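For reference, the realm/zonegroup/zone plumbing that underlies an RGW multisite setup can be sketched as below. The realm, zonegroup, zone, and endpoint names are hypothetical, and with rook-ceph these objects are normally declared through the CephObjectRealm/CephObjectZoneGroup/CephObjectZone custom resources rather than run by hand:

```shell
# Sketch of the radosgw-admin side of a multisite setup (all names are examples).
radosgw-admin realm create --rgw-realm=global --default
radosgw-admin zonegroup create --rgw-zonegroup=zg1 \
    --endpoints=http://clusterA-rgw:80 --master --default
radosgw-admin zone create --rgw-zonegroup=zg1 --rgw-zone=zoneA \
    --endpoints=http://clusterA-rgw:80 --master --default
radosgw-admin period update --commit   # publish the new period to the realm
```

Secondary clusters then pull the realm and create their own (non-master) zones before committing a period update of their own.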

[ceph-users] Re: squid 19.2.0 QE validation status

2024-09-10 Thread Laura Flores
Rados reviewed here: https://tracker.ceph.com/projects/rados/wiki/SQUID#v1920-build-3-httpstrackercephcomissues67779

I have asked @Radoslaw Zarzynski to take a look at the summary and confirm there are no blockers. Reviews of the upgrade and smoke suites are in progress; I will provide another update for those so

[ceph-users] Re: RFC: cephfs fallocate

2024-09-10 Thread Ilya Dryomov
On Tue, Sep 10, 2024 at 1:23 PM Milind Changire wrote:
>
> Problem:
> CephFS fallocate implementation does not actually reserve data blocks
> when mode is 0.
> It only truncates the file to the given size by setting the file size
> in the inode.
> So, there is no guarantee that writes to the file

[ceph-users] RFC: cephfs fallocate

2024-09-10 Thread Milind Changire
Problem:
CephFS fallocate implementation does not actually reserve data blocks when mode is 0. It only truncates the file to the given size by setting the file size in the inode. So, there is no guarantee that writes to the file will succeed.

Solution:
Since an immediate remediation of this problem
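The distinction at issue can be demonstrated on a local filesystem: a size-only truncate leaves a sparse file with no blocks reserved, while fallocate (mode 0) asks the filesystem to actually allocate them. A minimal sketch, assuming a Linux filesystem that supports fallocate (e.g. ext4 or xfs):

```shell
# Size-only truncate vs. real block reservation.
truncate -s 1M sparse.img     # sets the file size only; no blocks reserved
fallocate -l 1M alloc.img     # asks the kernel to reserve 1 MiB of blocks
stat -c '%n: %s bytes, %b blocks' sparse.img alloc.img
```

Both files report the same size, but only alloc.img shows allocated blocks; the RFC is about CephFS currently behaving like the first command even when the second is requested.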

[ceph-users] Re: Grafana dashboards is missing data

2024-09-10 Thread Sake Ceph
Thank you!

> On 10-09-2024 09:39 CEST, Redouane Kachach wrote:
>
> Seems like a BUG in cephadm, the ceph-exporter when deployed doesn't specify
> its port that's why it's not being opened automatically. You can see that in
> the cephadm logs (ports list is empty):
>
> 2024-09-09 04:39:48,

[ceph-users] CEPH monitor slow ops

2024-09-10 Thread Jan Marek
Hello,

we have a CEPH cluster with 144 NVMe "disks"; the "background" network is RoCE. The CEPH cluster is version 18.2.2 and was installed via the CEPH orchestrator cephadm; the container daemon is podman. The OS is Debian bookworm. Podman was at version 4.3.1+ds1-8+b1, and we have now installed version 4.3.1+ds1-8+deb12
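When a monitor reports slow ops, the usual first step is to inspect them via the mon's admin socket. A hedged sketch (the mon ID is an example, and dump_historic_ops on the mon socket requires a reasonably recent release):

```shell
# Sketch: inspecting slow ops on a monitor (run on the mon's host).
ceph health detail                                 # which mon reports slow ops, and how many
ceph daemon mon.$(hostname -s) ops                 # currently in-flight ops on this mon
ceph daemon mon.$(hostname -s) dump_historic_ops   # recently completed slow ops, if supported
```

The per-op dump shows how long each op spent in which state, which helps distinguish disk, network, and peer-mon latency.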

[ceph-users] Re: Grafana dashboards is missing data

2024-09-10 Thread Redouane Kachach
Seems like a BUG in cephadm: when deployed, the ceph-exporter doesn't specify its port, which is why it's not being opened automatically. You can see that in the cephadm logs (the ports list is empty):

2024-09-09 04:39:48,986 7fc2993d7740 DEBUG Loaded deploy configuration: {'fsid': '250b9d7c-6e65-11ef-8e0
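As a quick check, one can verify on the affected host whether the exporter is listening and reachable. A sketch, assuming 9926 is your ceph-exporter port (it is the default in recent releases, but adjust if your deployment differs):

```shell
# Sketch: verify ceph-exporter is listening and serving metrics (port is an assumption).
ss -tlnp | grep 9926                           # is anything listening on the exporter port?
curl -s http://localhost:9926/metrics | head   # does it actually serve Prometheus metrics?
# Until a cephadm fix lands, the port can be opened manually, e.g. with firewalld:
# firewall-cmd --add-port=9926/tcp --permanent && firewall-cmd --reload
```

If the curl succeeds locally but Grafana panels stay empty, the problem is the unopened firewall port rather than the exporter itself.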