On 2/4/20 2:00 PM, German Anders wrote:
> Hello Everyone,
>
> I would like to understand if this output is right:
>
> *# ceph df*
> GLOBAL:
> SIZE     AVAIL    RAW USED  %RAW USED
> 85.1TiB  43.7TiB  41.4TiB   48.68
> POOLS:
> NAME  ID  USED  %U
Manuel, here is the output of the "ceph osd df tree" command:
# ceph osd df tree
ID CLASS WEIGHT    REWEIGHT SIZE     USE      AVAIL    %USE   VAR  PGS TYPE NAME
-7 84.00099- 85.1TiB 41.6TiB 43.6TiB 48.82 1.00 - root root
-5 12.0- 13.1TiB 5.81TiB 7.29TiB 44.38 0.91 - r
With "ceph osd df tree" it will be clearer, but right now I can already see that
some OSDs have a %USE between 44% and 65%.
"ceph osd df tree" also shows the balance at the host level.
Do you have the balancer enabled? A distribution that is not "perfect" is the
reason you can't use the full space.
In our case we gained space by manually rebalancing.
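If the balancer isn't on yet, a sketch of the mgr balancer module commands (available on Luminous and later; verify against the docs for your release before running, since upmap mode requires all clients to be at least Luminous):

```shell
# Check whether the balancer module is active and what mode it is in
ceph balancer status
# upmap mode requires that no pre-Luminous clients connect
ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on
```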
Hi Manuel,
Sure thing:
# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE     USE     AVAIL   %USE   VAR  PGS
0 nvme 1.0 1.0 1.09TiB 496GiB 622GiB 44.35 0.91 143
1 nvme 1.0 1.0 1.09TiB 488GiB 630GiB 43.63 0.89 141
2 nvme 1.0 1.0 1.09TiB 537GiB 581GiB 48.05 0.99 155
Hi German,
Can you post "ceph osd df tree"?
It looks like your usage distribution is uneven, and that's why you get less
usable space than the raw total.
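To make the point above concrete, a simplified sketch (not from the thread) of why an uneven distribution wastes raw capacity: writes effectively stop once the fullest OSD reaches the full ratio (0.95 by default), so the most-used OSD caps the whole cluster. The model below assumes equal-sized OSDs and that new data lands on each OSD in proportion to its current share, which is a rough approximation of steady-state CRUSH placement:

```python
def usable_headroom(osd_used_fractions, full_ratio=0.95):
    """Fraction of total raw capacity still writable before the
    fullest OSD hits full_ratio. Simplifying assumptions: all OSDs
    are the same size and new data grows every OSD by the same
    relative factor (i.e. placement follows the current skew)."""
    worst = max(osd_used_fractions)
    mean = sum(osd_used_fractions) / len(osd_used_fractions)
    # Every OSD can grow by at most full_ratio/worst before the
    # fullest one is full, so headroom scales with that factor.
    return mean * (full_ratio / worst - 1.0)

# Two clusters with the same ~48% mean usage:
balanced = [0.48, 0.48, 0.48]
skewed   = [0.44, 0.48, 0.65]   # a %USE spread like the one in the thread
print(round(usable_headroom(balanced), 3))  # -> 0.47
print(round(usable_headroom(skewed), 3))    # -> 0.242, the 65% OSD caps it
```

Even with identical average usage, the skewed cluster can absorb roughly half as much new data, which matches Manuel's point that an imperfect distribution means less usable space than the raw totals suggest.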
Regards
-----Original Message-----
From: German Anders
Sent: Tuesday, February 4, 2020, 14:00
To: ceph-us...@ceph.com
Subject: [ceph-users] Doubt