> On Mar 1, 2026, at 7:58 AM, listy via ceph-users <[email protected]> wrote:
> 
> Hi guys.
> 
> I've extended capacity sizes of my drives

How? Are they virtual drives of some sort? VMDKs?


> - then used 'bluefs-bdev-expand' to tell it to _ceph_, but it seems pools/fs 
> did not pick those up - should they not?

Please share exactly what you did.  Compare your `ceph osd df` output from 
before and after, did the SIZE and/or WEIGHT columns change?
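For reference, the usual sequence for growing a BlueStore OSD's main device looks roughly like this. This is a sketch, not necessarily what you did: it assumes a non-cephadm deployment, systemd-managed OSDs, and mount points under /var/lib/ceph/osd/ (osd.9 here is just an example ID; adjust for your setup):

```shell
# Hypothetical walkthrough for osd.9 -- adjust IDs and paths for your deployment.

# Prevent rebalancing while the OSD is briefly down
ceph osd set noout

# Stop the OSD so the device can be resized safely
systemctl stop ceph-osd@9

# Grow the underlying device first (e.g. lvextend for LVM-backed OSDs,
# or expand the virtual disk), then tell BlueFS/BlueStore about the new size
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-9

systemctl start ceph-osd@9
ceph osd unset noout
```

Note that `bluefs-bdev-expand` does not update the CRUSH weight. If SIZE changed in `ceph osd df` but WEIGHT did not, you may additionally need `ceph osd crush reweight` so that data placement reflects the new capacity.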

> -> $ ceph osd df
> ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS
>  9    ssd  0.04880   1.00000  150 GiB   52 GiB   51 GiB  506 KiB  1.4 GiB   98 GiB  34.86  0.51   45      up
> 10    ssd  0.29300   1.00000  400 GiB  307 GiB  305 GiB  1.5 MiB  2.0 GiB   93 GiB  76.67  1.12  244      up
>  0    ssd  0.09769   1.00000  150 GiB  143 GiB   91 GiB  835 KiB  2.0 GiB  7.5 GiB  95.02  1.39   78      up
>  4    ssd  0.29300   1.00000  400 GiB  267 GiB  265 GiB  1.1 MiB  2.1 GiB  133 GiB  66.77  0.98  211      up
>  1    ssd  0.04880   1.00000  150 GiB   54 GiB   53 GiB  392 KiB  1.1 GiB   96 GiB  36.00  0.53   47      up
>  5    ssd  0.29300   1.00000  400 GiB  305 GiB  303 GiB  1.6 MiB  2.5 GiB   95 GiB  76.26  1.12  242      up
>                        TOTAL  1.6 TiB  1.1 TiB  1.0 TiB  5.9 MiB   11 GiB  522 GiB  68.34
> MIN/MAX VAR: 0.51/1.39  STDDEV: 22.41

These are extremely small OSDs, and I’ve seen reports of OSDs this small filling up before one would expect, perhaps due to rounding or to metadata overhead not being taken into account.

Also, these are very imbalanced, which doesn’t help.

Please share `ceph balancer status` and `ceph osd tree`.

Ensure that you have this set, which may help:

mgr    advanced  mgr/balancer/upmap_max_deviation           1
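If the balancer isn't already running in upmap mode, something like the following would enable it and tighten the deviation target (a sketch; verify the module is available on your release):

```shell
# Use upmap-based balancing and allow at most 1 PG of deviation between OSDs
ceph balancer mode upmap
ceph config set mgr mgr/balancer/upmap_max_deviation 1
ceph balancer on

# Confirm the mode took effect and watch progress
ceph balancer status
```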


> -> $ rados df
> POOL_NAME              USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED      RD_OPS       RD      WR_OPS       WR  USED COMPR  UNDER COMPR
> .mgr                1.3 MiB        2       0       6                   0        0         0       13951   35 MiB        5271   54 MiB         0 B          0 B
> cephfs.APKI.data    329 GiB    31352       0   94056                   0        0         0   153508700  219 TiB   136896977  2.4 TiB         0 B          0 B
> cephfs.APKI.meta    723 MiB     8513       0   25539                   0        0         0      267014   11 GiB    17205610   69 GiB         0 B          0 B
> cephfs.MONERO.data  736 GiB    62784       0  188352                   0        0         0  6553814758   29 TiB  1776390595  8.7 TiB         0 B          0 B
> cephfs.MONERO.meta  776 MiB       92       0     276                   0        0         0       21866   22 GiB     4219726   17 GiB         0 B          0 B
> 
> total_objects    102743
> total_used       1.1 TiB
> total_avail      522 GiB
> total_space      1.6 TiB
> 

And what do you mean exactly by "it seems pools/fs did not pick those up"?  I 
suspect you mean in terms of MAX AVAIL.  "Ceph" picks up capacity if it shows in 
the RAW STORAGE numbers from `ceph df`.  We don’t have the before output, so we 
can’t tell.

Assuming this is what you mean, this is a commonly misunderstood nuance.  MAX 
AVAIL is a function of the delta between how full the MOST full OSD in the 
pool’s device class is and the full ratio.

# ceph osd dump | grep ratio
full_ratio 0.95
backfillfull_ratio 0.90
nearfull_ratio 0.85

Your numbers may look different from these, which I think are the defaults.
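To illustrate with your numbers: a simplified model of why MAX AVAIL collapses to 0 B when one OSD approaches full_ratio. This is not Ceph's exact computation (the real one also folds in CRUSH weights and replication), just the core idea:

```python
# Simplified illustration: the fullest OSD bounds a pool's MAX AVAIL.
full_ratio = 0.95

# (size_gib, used_gib) per OSD, taken from the `ceph osd df` output above
osds = {
    0: (150, 143),   # 95.02% used -- the limiting OSD
    1: (150, 54),
    4: (400, 267),
    5: (400, 305),
    9: (150, 52),
    10: (400, 307),
}

# Headroom each OSD has left before hitting full_ratio
headroom = {osd: size * full_ratio - used for osd, (size, used) in osds.items()}

# The OSD with the least headroom caps how much the pool can still accept
limiting_osd = min(headroom, key=headroom.get)

# osd.0: 150 * 0.95 - 143 = -0.5 GiB -- already past full_ratio,
# so the pools mapped to this device class report MAX AVAIL of 0 B
print(limiting_osd, round(headroom[limiting_osd], 1))
```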

Notice your osd.0 is 95% full.  I should think that `ceph status`, which you 
didn’t include in your message, has clues.

So for your immediate question, I think if you fix your balancing issue you’ll 
see a bit of space show under MAX AVAIL.


> -> $ ceph df
> --- RAW STORAGE ---
> CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
> ssd    1.6 TiB  522 GiB  1.1 TiB   1.1 TiB      68.34
> TOTAL  1.6 TiB  522 GiB  1.1 TiB   1.1 TiB      68.34
> 
> --- POOLS ---
> POOL                ID  PGS   STORED  OBJECTS     USED   %USED  MAX AVAIL
> .mgr                 1    1  449 KiB        2  1.3 MiB  100.00   0 B
> cephfs.APKI.meta     2   16  241 MiB    8.51k  723 MiB  100.00   0 B
> cephfs.APKI.data     3  128  110 GiB   31.35k  329 GiB  100.00   0 B
> cephfs.MONERO.meta   4   16  259 MiB       92  776 MiB  100.00   0 B
> cephfs.MONERO.data   5  128  245 GiB   62.78k  736 GiB  100.00   0 B



> 
> I do not use quotas, no subvols.
> I must be missing something obvious - right? - how to tell ceph to use that 
> newly added capacity?
> many thanks, L.
> _______________________________________________
> ceph-users mailing list -- [email protected]
> To unsubscribe send an email to [email protected]
