I’m not sure where the doubts about old hardware and PG splits come
from. We observed the opposite of what you seem to fear (increasing
memory usage) after a PG split on a customer’s cluster last year:
according to their Prometheus data, the memory usage dropped once the
split had finished.
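For reference, one hedged way to watch per-OSD memory around a split, assuming
access to the OSD admin socket on the host (osd.0 is just an example id):

    # memory target the OSDs are currently tuned to
    ceph config get osd osd_memory_target

    # live memory-pool breakdown for one OSD (pglog, bluestore caches, etc.)
    ceph daemon osd.0 dump_mempools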
> They seem quite even
>
Indeed. Assuming that your failure domain is host, that shouldn’t be a factor
in stranded capacity. We mostly see that happen with, say, a cluster using a
rack failure domain with only 3 racks and replicated pools, or with your 6+2
EC pool spread over 6 racks. Having failure domains > replication …
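As an illustration of the rack case above, an EC profile that pins the failure
domain to rack could be created like this (the profile name ec-6-2-rack is made
up for the example):

    ceph osd erasure-code-profile set ec-6-2-rack k=6 m=2 crush-failure-domain=rack
    ceph osd erasure-code-profile get ec-6-2-rack

With only 6 racks, the 8 shards of such a 6+2 profile cannot each land in a
distinct rack, which is exactly the kind of mismatch being described.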
I have a single user producing lots of small files (currently about 4.7M
with a mean size of 3 MB). The total number of files is about 7M.
About the occupancy: on 1.8 TiB disks I see the PG count ranging from 27
(-> 38% occupancy) to 20 (-> 27% occupancy) at the same OSD weight
(1.819). I guess …
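As a rough back-of-the-envelope check (my arithmetic, using only the figures
above), the spread in occupancy tracks the spread in PG count almost exactly:

    1.819 TiB x 38% / 27 PGs ≈ 26 GiB per PG
    1.819 TiB x 27% / 20 PGs ≈ 25 GiB per PG

So the per-OSD usage difference is essentially the PG-count difference.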
> On Jan 2, 2025, at 11:18 AM, Nicola Mori wrote:
>
> Hi Anthony, thanks for your insights. I actually used df -h from the bash
> shell of a machine mounting the CephFS with the kernel module, and here's the
> current result:
>
> wizardfs_rootsquash@b1029256-7bb3-11ec-a8ce-ac1f6b627b45.wizar
orch approved
On Fri, Dec 27, 2024 at 11:31 AM Yuri Weinstein wrote:
> Hello and Happy Holidays all!
>
> We have merged several PRs (mostly in the rgw and rbd areas) and I built a
> new build 2 (rebased)
>
> https://tracker.ceph.com/issues/69234#note-1
>
> Please provide trackers for failures so we a…
On 02/01/2025 16:37, Redouane Kachach wrote:
Just to comment on the ceph.target. Technically, in a containerized Ceph
deployment a node can host daemons from *many ceph clusters* (each with its own
ceph_fsid).
The ceph.target is a global unit and it's the root for all the clusters
running on the node. There …
Hi Anthony, thanks for your insights. I actually used df -h from the
bash shell of a machine mounting the CephFS with the kernel module, and
here's the current result:
wizardfs_rootsquash@b1029256-7bb3-11ec-a8ce-ac1f6b627b45.wizardfs=/
217T 78T 139T 36% /wizard/ceph
So it seems the fs size …
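As a hedged cross-check of what the kernel client reports (the mount point is
the one shown above; getfattr comes from the attr package and works on any
CephFS directory):

    ceph df                                    # raw vs. pool-level usage, cluster-wide
    getfattr -n ceph.dir.rbytes /wizard/ceph   # recursive byte count of the mounted tree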
Just to comment on the ceph.target. Technically, in a containerized Ceph
deployment a node can host daemons from *many ceph clusters* (each with its own
ceph_fsid).
The ceph.target is a global unit and it's the root for all the clusters
running on the node. There's another target which is specific to
each cluster …
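A minimal sketch of how this looks on a cephadm-managed host (the fsid below is
the one appearing elsewhere in this listing, used purely as an example):

    systemctl list-units 'ceph*.target'
    # stop only this cluster's daemons, leaving other clusters on the host alone
    systemctl stop ceph-b1029256-7bb3-11ec-a8ce-ac1f6b627b45.target
    # stop every Ceph daemon on the host, across all clusters
    systemctl stop ceph.target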
Remember that `ceph df` takes into account the full-ratio reserved space, and
the headroom between that threshold and the most-full OSD.
Run `ceph osd df` and look at the PGS and VAR columns:
https://www.ibm.com/docs/en/storage-ceph/7?topic=monitoring-understanding-osd-usage-stats
If you have high …
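Concretely, the thresholds and per-OSD variance being referred to can be pulled
with standard commands (comments note the fields of interest):

    ceph osd dump | grep ratio   # full_ratio, backfillfull_ratio, nearfull_ratio
    ceph osd df                  # per-OSD %USE, VAR (deviation from the mean) and PGS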
Hello Yuri,
ceph-volume approved
Regards,
--
Guillaume Abrioux
Software Engineer
From: Yuri Weinstein
Date: Friday, 27 December 2024 at 17:31
To: dev, ceph-users
Subject: [EXTERNAL] [ceph-users] Re: squid 19.2.1 RC QE validation status
Hello and Happy Holidays all!
We have merged several PRs …