Hi Jaemin,

The META column shows the space allocated by BlueFS minus the OMAP size. It 
varies over time with the workload each OSD receives and is eventually 
reduced by compactions (ceph tell osd.x compact).
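
You can observe this by comparing the META figures before and after a manual
compaction. A minimal sketch, using osd.27 from your output below:

  # Note the META (and OMAP) columns of the index pool OSDs
  ceph osd df

  # Trigger a manual RocksDB compaction on one OSD
  ceph tell osd.27 compact

  # Once it has finished, check META again
  ceph osd df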

The reason osd.27 and osd.29 used more space than the other OSDs during your 
test most likely lies in your testing environment and protocol. For example, 
if you created a bucket with a small number of shards and your index pool had 
a small number of PGs, there's a fair chance that some OSDs received far more 
index writes than others due to poor metadata load distribution during your 
testing.
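
Both are easy to check. A quick sketch, assuming the index pool is named
default.rgw.buckets.index and the test bucket mybucket (placeholder names,
adjust to your setup):

  # Number of PGs in the index pool
  ceph osd pool get default.rgw.buckets.index pg_num

  # Number of index shards of the bucket (recent releases report
  # "num_shards" in the output)
  radosgw-admin bucket stats --bucket=mybucket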

Here's what you could try (example commands are sketched below the list):

1/ Increase the number of PGs each RGW pool has
2/ Recreate the bucket with more shards (right from the start)
3/ Compact all OSDs
4/ Run your test again
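
A rough sketch of steps 1/ to 3/ as commands, reusing the placeholder names
from above (128 PGs and 101 shards are arbitrary example values, not
recommendations; pick values that fit your cluster):

  # 1/ Raise the PG count of the index pool
  ceph osd pool set default.rgw.buckets.index pg_num 128

  # 2/ Make buckets created from now on start with more index shards
  #    (adjust the config section to your RGW deployment); alternatively,
  #    reshard the existing bucket instead of recreating it
  ceph config set client.rgw rgw_override_bucket_index_max_shards 101
  radosgw-admin bucket reshard --bucket=mybucket --num-shards=101

  # 3/ Compact all OSDs
  ceph tell 'osd.*' compact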

This may produce different figures.

Regards,
Frédéric.

----- On 14 Apr 25, at 12:57, Jaemin Joo jm7....@gmail.com wrote:

> Hi all,
> 
> I am testing an RGW cluster in which the index pool OSDs are separated from
> the data pool OSDs.
> After uploading a lot of objects, I found that the index pool OSD usage is
> unbalanced. I know that the index pool uses RocksDB, which holds the object
> metadata, bucket metadata, multipart and versioning data for the index pool.
> I assumed that most of the usage was object metadata, so I checked how the
> object metadata is balanced across the OSDs of the index pool, and it is
> balanced well (I verified this through the OMAP stats).
> Which part can cause an imbalance between the index pool disks?
> 
> (see osd.27 and osd.29, which are bigger than the other OSDs)
> ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META      AVAIL    %USE   VAR   PGS  STATUS
> 24    ssd  3.49309   1.00000  3.5 TiB  1.8 TiB  399 MiB  1.2 TiB   581 GiB  1.7 TiB  51.72  0.85   71      up
> 25    ssd  3.49309   1.00000  3.5 TiB  2.2 TiB  179 MiB  1.4 TiB   836 GiB  1.3 TiB  62.22  1.02   71      up
> 26    ssd  3.49309   1.00000  3.5 TiB  2.0 TiB  180 MiB  1.2 TiB   752 GiB  1.5 TiB  56.55  0.93   72      up
> *27   ssd  3.49309   1.00000  3.5 TiB  2.7 TiB  399 MiB  1.3 TiB   1.4 TiB  815 GiB  77.22  1.27   70      up*
> 28    ssd  3.49309   1.00000  3.5 TiB  1.5 TiB  179 MiB  1.2 TiB   321 GiB  2.0 TiB  43.57  0.71   72      up
> *29   ssd  3.49309   1.00000  3.5 TiB  2.8 TiB  179 MiB  1.4 TiB   1.4 TiB  748 GiB  79.08  1.30   73      up*
> 30    ssd  3.49309   1.00000  3.5 TiB  1.7 TiB  179 MiB  1.4 TiB   342 GiB  1.8 TiB  49.37  0.81   75      up
> 31    ssd  3.49309   1.00000  3.5 TiB  2.5 TiB  179 MiB  1.4 TiB   1.2 TiB  969 GiB  72.90  1.20   69      up
> 32    ssd  3.49309   1.00000  3.5 TiB  2.4 TiB  179 MiB  1.3 TiB   1.1 TiB  1.1 TiB  67.46  1.11   66      up
> 33    ssd  3.49309   1.00000  3.5 TiB  2.1 TiB  179 MiB  1.2 TiB  1015 GiB  1.3 TiB  61.40  1.01   68      up
> .... omission ....
>                               TOTAL    63 TiB   38 TiB   3.8 GiB  23 TiB    16 TiB   25 TiB  60.96
> MIN/MAX VAR: 0.68/1.30  STDDEV: 9.98
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
