What are your current OSD sizes and pg_num? Are you using OSDs of different sizes?
On 6/9/2020 1:34 AM, huxia...@horebdata.cn wrote:
Dear Ceph folks,
As the capacity of a single HDD (OSD) keeps growing, e.g. from 6TB up to 18TB or even more, should the number of PGs per OSD increase as well?
I think there are multiple variables there.
My advice for HDDs is to aim for an average of 150-200 PGs per OSD, as I wrote before. The limiting factor is the speed of the device: throw a thousand PGs on there and you won't get any more out of it; you'll just have more peering and more RAM used.
NVMe is a different story.
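To make the 150-200 PGs-per-HDD target above concrete, here is a rough sketch in Python of the usual sizing heuristic. This is not an official Ceph calculator, and the OSD count, replication factor, and target below are illustrative numbers, not Samuel's cluster:

  # Rough sketch of the common pg_num heuristic: size a single pool so
  # the cluster lands near a target number of PGs per OSD, rounded to a
  # power of two. Illustrative only, not an official Ceph tool.
  def suggest_pg_num(num_osds, replica_count, target_pgs_per_osd):
      raw = num_osds * target_pgs_per_osd / replica_count
      power = 1
      while power * 2 <= raw:
          power *= 2
      # round to the nearest power of two
      return power * 2 if (raw - power) > (power * 2 - raw) else power

  # e.g. 100 HDD OSDs, 3x replication, aiming for ~150 PGs per OSD:
  # raw = 5000 -> pg_num = 4096, i.e. about 123 PGs per OSD after replication
  print(suggest_pg_num(100, 3, 150))

This assumes a single pool owning all the OSDs; with several pools, the per-pool pg_num values have to share that budget.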
In our unofficial testing, under heavy random 4 KB write workloads with large PGs, we observed high latencies of 100 ms or above. On the other hand, looking at the source code, it seems the PG lock could hurt performance as PGs grow larger.
That is why I am wondering:
Are there any rules for computing RAM requirements in terms of the number of PGs?
I am just curious about the fundamental limitations on the number of PGs per OSD for bigger-capacity HDDs.
best regards,
Samuel
huxia...@horebdata.cn
From: Anthony D'Atri
Date: 2020-09-05 20:00
To: huxia..
Good question!
Have you already observed any performance impact from very large PGs?
Which PG locks are you speaking of? Is there perhaps some way to
improve this with the op queue shards?
(I'm cc'ing Mark in case this is something that the performance team
has already looked into).
With a 20TB OSD,
One factor is RAM usage; IIRC that was the motivation for lowering the recommended ratio from 200 to 100. Memory needs also increase during recovery and backfill.
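As a very rough way to reason about the RAM side, one can budget per host from osd_memory_target (a real BlueStore setting, default about 4 GiB per OSD) plus some margin for recovery and backfill; the headroom factor in this sketch is my own assumption, not a Ceph-documented number:

  # Very rough per-host RAM budget for OSD daemons (illustrative only).
  # osd_memory_target defaults to ~4 GiB per BlueStore OSD; the
  # recovery_headroom factor is an assumed margin, not an official figure.
  def host_osd_ram_budget_gib(osds_per_host, osd_memory_target_gib=4, recovery_headroom=1.5):
      return osds_per_host * osd_memory_target_gib * recovery_headroom

  # e.g. 12 OSDs per host at the default 4 GiB target:
  print(host_osd_ram_budget_gib(12))  # 72.0 GiB, before the OS and other daemons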
When calculating, be sure to consider replicas.
ratio = (pgp_num x replication) / num_osds
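As a purely illustrative check of that formula (the numbers are mine, not from Samuel's cluster): a pool with pgp_num = 4096 at 3x replication across 100 OSDs gives (4096 x 3) / 100 ≈ 123 PGs per OSD, comfortably inside the 100-200 band discussed above.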
As HDDs grow the inte