Look at `ceph osd df`.  Is the balancer enabled?
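For context: the MAX AVAIL figure in `ceph df` is derived from the fullest OSD under the pool's CRUSH rule, so if the OSDs have filled unevenly the reported available space shrinks even though raw capacity hasn't changed. A rough check, to be adapted to your cluster, could look like this (the last two commands are only a suggestion if the balancer turns out to be off):

$ sudo ceph osd df               # per-OSD utilisation; compare the %USE column and the STDDEV at the bottom
$ sudo ceph balancer status      # shows whether the balancer is active and which mode it uses
$ sudo ceph balancer mode upmap  # upmap mode generally gives the most even distribution
$ sudo ceph balancer on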

> On Mar 27, 2025, at 8:50 AM, Mihai Ciubancan <mihai.ciuban...@eli-np.ro> 
> wrote:
> 
> Hello,
> 
> My name is Mihai, and I started using Ceph this month for an HPC cluster.
> When it was launched into production the available space shown was 80 TB; now it is 16 TB and I haven't changed anything, even though I have 12 OSDs (14 TB SSDs each):
> 
> sudo ceph osd tree
> ID  CLASS  WEIGHT     TYPE NAME                STATUS  REWEIGHT  PRI-AFF
> -1         167.64825  root default
> -3         167.64825      host sto-core-hpc01
>  0    ssd   13.97069          osd.0                up   1.00000  1.00000
>  1    ssd   13.97069          osd.1                up   1.00000  1.00000
>  2    ssd   13.97069          osd.2                up   1.00000  1.00000
>  3    ssd   13.97069          osd.3                up   1.00000  1.00000
>  4    ssd   13.97069          osd.4                up   1.00000  1.00000
>  5    ssd   13.97069          osd.5                up   1.00000  1.00000
>  6    ssd   13.97069          osd.6                up   1.00000  1.00000
>  7    ssd   13.97069          osd.7                up   1.00000  1.00000
>  8    ssd   13.97069          osd.8                up   1.00000  1.00000
>  9    ssd   13.97069          osd.9                up   1.00000  1.00000
> 10    ssd   13.97069          osd.10               up   1.00000  1.00000
> 11    ssd   13.97069          osd.11               up   1.00000  1.00000
> 
> sudo ceph df detail
> --- RAW STORAGE ---
> CLASS     SIZE    AVAIL    USED  RAW USED  %RAW USED
> ssd    168 TiB  156 TiB  12 TiB    12 TiB       7.12
> TOTAL  168 TiB  156 TiB  12 TiB    12 TiB       7.12
> 
> --- POOLS ---
> POOL                ID  PGS   STORED   (DATA)  (OMAP)  OBJECTS     USED   (DATA)  (OMAP)  %USED  MAX AVAIL  QUOTA OBJECTS  QUOTA BYTES  DIRTY  USED COMPR  UNDER COMPR
> .mgr                 1    1  705 KiB  705 KiB     0 B        2  1.4 MiB  1.4 MiB     0 B      0    8.1 TiB            N/A          N/A    N/A         0 B          0 B
> cephfs.cephfs.meta   2   16  270 MiB  270 MiB     0 B   85.96k  270 MiB  270 MiB     0 B      0     16 TiB            N/A          N/A    N/A         0 B          0 B
> cephfs.cephfs.data   3  129   12 TiB   12 TiB     0 B    3.73M   12 TiB   12 TiB     0 B  42.49     16 TiB            N/A          N/A    N/A         0 B          0 B
> 
> On the client side I see this:
> 
> $ df -h
> 10.18.31.1:6789:/                   21T   13T  8.1T  61% /data
> 
> I don't know where all the space that was available at the beginning has gone.
> Does anyone have a hint?
> 
> Best regards,
> Mihai
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io