[ceph-users] Re: Understanding filesystem size

2025-01-02 Thread Eugen Block
I’m not sure where the doubts about old hardware and PG splits come from. We observed the opposite of what you seem to fear (an increase in memory usage) after a PG split on a customer’s cluster last year: according to their Prometheus data, the memory usage dropped after the split had finished.
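
For context, a minimal sketch of how one might watch OSD memory around a PG split on a cephadm-managed cluster (stock commands; osd.0 is just an example daemon):

    # memory use and limit per OSD container, as reported by the orchestrator
    ceph orch ps --daemon-type osd

    # the target the OSDs autotune their caches against
    ceph config get osd osd_memory_target

    # per-OSD breakdown of where the memory actually goes (caches, pglog, ...)
    ceph tell osd.0 dump_mempools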

[ceph-users] Re: Understanding filesystem size

2025-01-02 Thread Anthony D'Atri
> They seem quite even

Indeed. Assuming that your failure domain is host, that shouldn’t be a factor in stranded capacity. We mostly see that happen with, say, a rack failure domain cluster with 3 racks and replicated pools, or with your 6,2 pool and 6 racks. Having failure domains > replication…
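
As a quick way to verify what the failure domain of a pool actually is (a sketch; <pool> and <rule> are placeholders):

    # which CRUSH rule the pool uses
    ceph osd pool get <pool> crush_rule

    # the chooseleaf step in the rule shows the failure domain (host, rack, ...)
    ceph osd crush rule dump <rule>

    # and the CRUSH hierarchy the rule selects from
    ceph osd tree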

[ceph-users] Re: Understanding filesystem size

2025-01-02 Thread Nicola Mori
I have a single user producing lots of small files (currently about 4.7M with a mean size of 3 MB). The total number of files is about 7M. About the occupancy: on 1.8 TiB disks I see the PG count ranging from 27 (-> 38% occupancy) to 20 (-> 27% occupancy) at the same OSD weight (1.819). I guess…
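
Those figures are roughly what you’d expect if every PG of the pool holds about the same amount of data, so per-OSD occupancy scales with the PG count. A back-of-the-envelope check, assuming 1.8 TiB ≈ 1843 GiB per OSD:

    38% of 1843 GiB ≈ 700 GiB over 27 PGs  ->  ~26 GiB per PG
    20 PGs x ~26 GiB ≈ 520 GiB             ->  520 / 1843 ≈ 28%

which is close to the observed 27%, so the imbalance is essentially just the uneven PG distribution.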

[ceph-users] Re: Understanding filesystem size

2025-01-02 Thread Anthony D'Atri
> On Jan 2, 2025, at 11:18 AM, Nicola Mori wrote:
>
> Hi Anthony, thanks for your insights. I actually used df -h from the bash shell of a machine mounting the CephFS with the kernel module, and here's the current result:
>
> wizardfs_rootsquash@b1029256-7bb3-11ec-a8ce-ac1f6b627b45.wizardfs…

[ceph-users] Re: squid 19.2.1 RC QE validation status

2025-01-02 Thread Adam King
orch approved

On Fri, Dec 27, 2024 at 11:31 AM Yuri Weinstein wrote:
> Hello and Happy Holidays all!
>
> We have merged several PRs (mostly in rgw and rbd areas) and I built a new build 2 (rebase)
>
> https://tracker.ceph.com/issues/69234#note-1
>
> Please provide trackers for failures so we…

[ceph-users] Re: Experimental upgrade of a Cephadm-managed Squid cluster to Ubuntu Noble (walk-through and RFC)

2025-01-02 Thread Florian Haas
On 02/01/2025 16:37, Redouane Kachach wrote:
> Just to comment on the ceph.target. Technically in a containerized ceph a node can host daemons from *many ceph clusters* (each with its own ceph_fsid). The ceph.target is a global unit and it's the root for all the clusters running in the node. There…
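
One way to see that hierarchy on a node is through systemd itself (a sketch for a cephadm deployment; <fsid> is cluster-specific):

    # ceph.target pulls in one ceph-<fsid>.target per cluster present on the node
    systemctl list-dependencies ceph.target

    # ...and each cluster target pulls in that cluster's daemon units
    systemctl list-dependencies ceph-<fsid>.target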

[ceph-users] Re: Understanding filesystem size

2025-01-02 Thread Nicola Mori
Hi Anthony, thanks for your insights. I actually used df -h from the bash shell of a machine mounting the CephFS with the kernel module, and here's the current result:

wizardfs_rootsquash@b1029256-7bb3-11ec-a8ce-ac1f6b627b45.wizardfs=/  217T   78T  139T  36%  /wizard/ceph

So it seems the fs size…
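
A sketch of how one might cross-check those numbers from the cluster side (the kernel client derives its statfs output roughly from the data pool's usage and MAX AVAIL, unless a quota is set on the mounted directory; the filesystem name "wizardfs" is taken from the mount string above):

    # per-pool STORED / USED / MAX AVAIL
    ceph df detail

    # data and metadata pool usage for this filesystem
    ceph fs status wizardfs

    # check whether a quota caps what df reports for the mount
    getfattr -n ceph.quota.max_bytes /wizard/ceph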

[ceph-users] Re: Experimental upgrade of a Cephadm-managed Squid cluster to Ubuntu Noble (walk-through and RFC)

2025-01-02 Thread Redouane Kachach
Just to comment on the ceph.target: technically, in a containerized Ceph a node can host daemons from *many ceph clusters* (each with its own ceph_fsid). The ceph.target is a global unit and it's the root for all the clusters running in the node. There's another target which is specific to each cluster…
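
Concretely, on a cephadm node the units are named along these lines (a sketch; <fsid> and osd.3 are placeholders):

    ceph.target                    # global root: all clusters on this node
    ceph-<fsid>.target             # everything belonging to one cluster
    ceph-<fsid>@osd.3.service      # a single containerized daemon of that cluster

so stopping ceph-<fsid>.target affects only that cluster's daemons, while ceph.target is the switch for everything on the node.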

[ceph-users] Re: Understanding filesystem size

2025-01-02 Thread Anthony D'Atri
Remember that `ceph df` takes into account the full ratio reserved space, and the headroom between that threshold and the most-full OSD. Run `ceph osd df` and look at the PGS and VAR columns: https://www.ibm.com/docs/en/storage-ceph/7?topic=monitoring-understanding-osd-usage-stats

If you have…
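
A short sketch of the commands involved (PGS is the number of placement groups on the OSD, VAR its utilization relative to the cluster mean):

    # per-OSD utilization, PG count and variance, grouped by the CRUSH tree
    ceph osd df tree

    # the ratios that ceph df reserves headroom against
    ceph osd dump | grep full_ratio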

[ceph-users] Re: squid 19.2.1 RC QE validation status

2025-01-02 Thread Guillaume ABRIOUX
Hello Yuri,

ceph-volume approved

Regards,

--
Guillaume Abrioux
Software Engineer

From: Yuri Weinstein
Date: Friday, 27 December 2024 at 17:31
To: dev, ceph-users
Subject: [EXTERNAL] [ceph-users] Re: squid 19.2.1 RC QE validation status

Hello and Happy Holidays all! We have merged several PRs…