> Can we please have 1 command, that can dump all config

Sounds like an opportunity to enter a tracker issue or a PR.

Done: https://tracker.ceph.com/issues/71783

> What model are they that they're that small? Are they enterprise-quality?
> OSDs that small can present difficulties.
They are ~1 TiB enterprise SSDs:
Samsung PM983, 960GB (MZQLB960HAJR-00007)
Our cluster currently has no need for SSD storage for data (only metadata,
which is only 7 %USED).
Most of the SSDs' space is used for the DB/WAL of the HDD OSDs.

For a machine with 10x 16TiB HDD + 2x 960GiB SSD:

* Each SSD carries the DB/WAL for 5 HDDs, as 5 * 60 GiB slices (300 GiB per SSD).
* A 170 GB partition on each SSD for an SSD OSD.
* The rest of each SSD for the OS.
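
The layout above can be sanity-checked with a little arithmetic (a sketch; the 960 GB marketing capacity is ~894 GiB, and the 170 GB/GiB figure is treated loosely here):

```python
# Sanity-check the per-SSD layout described above.
# Assumptions: 960 GB (decimal) drive, 5 x 60 GiB DB/WAL slices,
# a ~170 GiB SSD-OSD partition; the remainder goes to the OS.

GIB = 1024**3

ssd_gib = 960 * 1000**3 / GIB      # marketing GB -> GiB, ~894 GiB
db_wal = 5 * 60                    # five 60 GiB DB/WAL slices -> 300 GiB
ssd_osd = 170                      # SSD OSD partition, GiB (as stated)
remainder = ssd_gib - db_wal - ssd_osd

print(f"SSD capacity: {ssd_gib:.0f} GiB")
print(f"DB/WAL total: {db_wal} GiB, SSD OSD: {ssd_osd} GiB")
print(f"left for OS:  {remainder:.0f} GiB")
```

So roughly 424 GiB per SSD remains for the OS.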
> How large are those DB+WAL slices? Please share BlueFS stats:

`ceph health detail` example (osd.2 is one such 16 TiB HDD):

    osd.2 spilled over 5.2 GiB metadata from 'db' device (47 GiB used of 60 GiB) to slow device

`ceph daemon osd.2 bluefs stats`

    1 : device size 0xee5afe000 : using 0xbdc800000(47 GiB)
    2 : device size 0xe8d7ed00000 : using 0xc81f80fb000(13 TiB)
    RocksDBBlueFSVolumeSelector Usage Matrix:
    DEV/LEV     WAL         DB          SLOW        *           *           REAL        FILES
    LOG         0 B         14 MiB      0 B         0 B         0 B         12 MiB      1
    WAL         0 B         915 MiB     0 B         0 B         0 B         502 MiB     35
    DB          0 B         9.7 GiB     0 B         0 B         0 B         6.4 GiB     104
    SLOW        0 B         37 GiB      5.2 GiB     0 B         0 B         37 GiB      607
    TOTAL       0 B         47 GiB      5.2 GiB     0 B         0 B         0 B         747
    MAXIMUMS:
    LOG         0 B         22 MiB      0 B         0 B         0 B         18 MiB
    WAL         0 B         1.8 GiB     0 B         0 B         0 B         1.0 GiB
    DB          0 B         11 GiB      0 B         0 B         0 B         6.7 GiB
    SLOW        0 B         39 GiB      8.4 GiB     0 B         0 B         42 GiB
    TOTAL       0 B         50 GiB      8.4 GiB     0 B         0 B         0 B
    >> SIZE <<  0 B         57 GiB      14 TiB
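
Spillover despite free space on the DB device is consistent with RocksDB's leveled sizing: a whole level that cannot fit on the fast device may land on the slow one. The numbers below (256 MiB base, x10 multiplier) are illustrative assumptions, not this cluster's actual settings, and newer volume selectors account for this differently:

```python
# Sketch: why a 60 GiB DB slice can spill while space remains.
# Assumes classic RocksDB level sizing: base 256 MiB, multiplier 10.
# These are illustrative defaults, not necessarily Ceph's tuned values.

base_gib = 256 / 1024   # L1 target size in GiB
slice_gib = 60          # the DB slice from the layout above

cum = 0.0
for level in range(1, 5):
    size = base_gib * 10 ** (level - 1)
    cum += size
    verdict = "fits" if cum <= slice_gib else "does not fit"
    print(f"L{level}: {size:7.2f} GiB  cumulative {cum:8.2f} GiB  -> {verdict} in {slice_gib} GiB")
```

Levels L1..L3 sum to under 28 GiB, but adding L4 blows past 60 GiB, so data in the deeper level spills to the slow device.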

> Oddly, not listed. I look forward to your PR.
Done: https://github.com/ceph/ceph/pull/64074

> I suspect that once backfill completes you'll see a ratio > 33
Shouldn't that have been the case already before I added the new machines, 
given that the PG count didn't change?

> Apologies, I meant `mon_target_pg_per_osd = 250`
That makes sense, thanks!
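
For reference, the per-OSD PG ratio that gets compared against such a target is just total PG replicas over OSD count. A toy calculation with made-up pool numbers (not from this cluster):

```python
# PG-replicas-per-OSD ratio (hypothetical pools and OSD count).
# ratio = sum over pools of pg_num * replication size / number of OSDs

pools = {  # pool name: (pg_num, replication size) -- illustrative only
    "data":     (4096, 3),
    "metadata": (256, 3),
}
num_osds = 120

ratio = sum(pg * size for pg, size in pools.values()) / num_osds
print(f"PG replicas per OSD: {ratio:.1f}")
```

Adding OSDs without changing pg_num lowers this ratio, which is why the autoscaler may then want to split PGs.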

> `upmap_max_deviation`
Done: https://github.com/ceph/ceph/pull/64075
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
