There are both pros and cons to having more PGs. Here are a couple of
considerations:
Pros:
1) Better data distribution prior to balancing (and maybe after)
2) Fewer objects/data per PG
3) Lower per-PG lock contention
Cons:
1) Higher PG log memory usage until you hit the osd target unless you
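A rough sketch of how one might check those trade-offs on a live cluster (the pool name "cephfs_data" below is only a placeholder):

    # Per-OSD PG count and fullness; the PGS column sits toward the right
    ceph osd df tree

    # Objects and bytes per PG for one pool, to gauge how much data each PG carries
    ceph pg ls-by-pool cephfs_data | head -20

    # Current pg_num/pgp_num and replication settings for every pool
    ceph osd pool ls detail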
Hi Anthony,
Thank you very much for your input.
It is a mixture of HDDs and a few NVMe drives. The HDD sizes vary between 8-18 TB,
and `ceph osd df` reports 23-25 PGs for the smaller drives and 50-55 for the bigger
ones.
Considering that the cluster is working fine, what would be the benefit of increasing the number of PGs?
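For readers who want to reproduce those per-OSD figures, a hedged one-liner; it assumes PGS is the second-to-last column of `ceph osd df`, which holds for recent releases but may differ on older ones:

    # Min/max/mean PGs per OSD, taking only rows whose first field is a numeric OSD id
    ceph osd df | awk '$1 ~ /^[0-9]+$/ {p=$(NF-1); s+=p; n++;
                       if (min=="" || p<min) min=p; if (p>max) max=p}
                       END {printf "min %d  max %d  mean %.1f\n", min, max, s/n}'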
If you only have one pool of significant size, then your PG ratio is around 40. IMHO too low.
If you're using HDDs I personally might set it to 8192; if using NVMe SSDs, arguably
16384 -- assuming that your OSD sizes are more or less close to each other.
`ceph osd df` will show toward the right a PGS column with the number of PGs on each OSD.
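To make the arithmetic behind that ratio explicit (a sketch using the figures quoted elsewhere in this thread, not new measurements): for a replicated pool, PG ratio ≈ pg_num × replica size / OSD count, so 2048 × 3 / 153 ≈ 40 PGs per OSD. Bumping pg_num would look roughly like the following; "cephfs_data" is a placeholder pool name:

    # Raise pg_num; since Nautilus, pgp_num follows automatically
    ceph osd pool set cephfs_data pg_num 8192

    # If the PG autoscaler is enabled it may undo a manual change; check its view first
    ceph osd pool autoscale-status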
Hi Anthony,
I should have said, it’s replicated (3)
Best,
Nick
Sent from my phone, apologies for any typos!
From: Anthony D'Atri
Sent: Tuesday, March 5, 2024 7:22:42 PM
To: Nikolaos Dandoulakis
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Number of pgs
Replicated or EC?
> On Mar 5, 2024, at 14:09, Nikolaos Dandoulakis wrote:
>
> Hi all,
>
> Pretty sure not the first time you see a thread like this.
>
> Our cluster consists of 12 nodes/153 OSDs/1.2 PiB used, 708 TiB / 1.9 PiB avail
>
> The data pool is 2048 PGs, exactly the same number as