Dear Cephalopodians,

As part of our stress test with 100,000,000 objects (all small files), we ended up
with the following usage on the OSDs on which the metadata pool lives:
# ceph osd df | head
ID  CLASS WEIGHT  REWEIGHT SIZE  USE    AVAIL %USE  VAR  PGS 
[...]
  2   ssd 0.21819  1.00000  223G 79649M  145G 34.81 6.62 128 
  3   ssd 0.21819  1.00000  223G 79697M  145G 34.83 6.63 128
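(We identified these OSDs via the ACTING sets of the metadata pool's PGs, e.g. with
# ceph pg ls-by-pool cephfs_metadata
assuming the metadata pool is named cephfs_metadata here.)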

The cephfs-data pool is mostly empty (5 % usage), but contains the 100,000,000
small objects.
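(Pool usage and object counts can be cross-checked with
# ceph df
which reports USED, %USED and OBJECTS per pool.)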

Looking at the perf counters with
# ceph daemon osd.2 perf dump
I get:
    "bluefs": {
        "gift_bytes": 0,
        "reclaim_bytes": 0,
        "db_total_bytes": 84760592384,
        "db_used_bytes": 78920024064,
        "wal_total_bytes": 0,
        "wal_used_bytes": 0,
        "slow_total_bytes": 0,
        "slow_used_bytes": 0,
So it seems this is almost exclusively RocksDB usage.
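That works out to roughly 790 bytes of RocksDB per object, assuming the DB usage
is dominated by per-object metadata:
# python3 -c 'print(78920024064 / 100000000)'
789.20024064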

Is this expected? 
Is there a recommendation on how much MDS (metadata pool) storage is needed for a 
CephFS with 450 TB of data? 
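
For a naive extrapolation from our own numbers (assuming metadata grows linearly
with the file count and the average file size stays the same): at 5 % usage we
already store 100,000,000 objects, so a full 450 TB would mean roughly 20x as many
files, i.e. on the order of 1.6 TB of RocksDB per metadata OSD:
# python3 -c 'print(round(20 * 78920024064 / 1e12, 2), "TB")'
1.58 TB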

Cheers,
        Oliver
