There is an exception table in the osdmap in luminous 12.2.x. It is said that by
using it, it is possible to achieve a perfect pg distribution among osds.
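For anyone already on Luminous, a rough sketch of how that exception table (the
pg-upmap feature) is usually driven through the mgr balancer module; the pg id and
osd ids below are made-up examples, and none of this applies to Jewel:

  # clients must speak the luminous protocol before upmap entries can be used
  ceph osd set-require-min-compat-client luminous
  # let the balancer compute and apply pg-upmap-items entries automatically
  ceph mgr module enable balancer
  ceph balancer mode upmap
  ceph balancer on
  # a single mapping can also be set by hand, e.g. move pg 1.7f from osd.12 to osd.34
  ceph osd pg-upmap-items 1.7f 12 34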
2018-03-09
shadow_lin
From: David Turner
Sent: 2018-03-09 06:45
Subject: Re: [ceph-users] Uneven pg distribution cause high fs_apply_latency on osds
with more pgs
To: "shadow_lin"
Cc: "ceph-users"
PGs being unevenly distributed is a common occurrence in Ceph. Luminous
started making some steps towards correcting this, but you're in Jewel.
There are a lot of threads in the ML archives about fixing PG
distribution. Generally every method comes down to increasing the weight
on OSDs with too few PGs or decreasing the weight on OSDs with too many PGs.
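On Jewel, a rough sketch of those weight adjustments with the built-in reweight
commands (the osd ids, weights and the 110% overload threshold below are only
illustrative values):

  # show utilization and pg count per osd
  ceph osd df tree
  # dry-run: preview what reweight-by-pg would change (110 = overload threshold in percent)
  ceph osd test-reweight-by-pg 110
  # apply it, or adjust individual osds by hand
  ceph osd reweight-by-pg 110
  ceph osd reweight 23 0.95              # lower the override weight of an overfull osd
  ceph osd crush reweight osd.7 1.90000  # adjust the crush weight of an underfull osd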
Hi list,
Ceph version is jewel 10.2.10 and all osds are using filestore.
The cluster has 96 osds and 1 pool with size=2 replication and 4096 pgs (based on
the pg calculation method from the ceph docs, targeting roughly 100 pgs per osd).
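For reference, the back-of-the-envelope behind that number (assuming the ~100 pgs
per osd target from the docs):

  total pgs ≈ (96 osds * 100 pgs per osd) / 2 replicas = 4800
  rounded to the nearest power of two -> pg_num = 4096
  average per osd: 4096 pgs * 2 replicas / 96 osds ≈ 85 pg copies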
The osd with the highest pg count has 104 PGs, and there are 6 osds with more than
100 PGs.
M