On 16.01.25 at 16:53, Andre Tann wrote:

    --- POOLS ---
    POOL             ID   PGS   STORED  OBJECTS     USED   %USED  MAX AVAIL
    .mgr              1     1  7.6 MiB        3   15 MiB  100.00        0 B
    ReplicationPool   2  1024  8.0 TiB    2.11M   24 TiB  100.00        0 B
    cephfs_data       7  1024  3.9 TiB    1.43M   12 TiB  100.00        0 B
    cephfs_metadata   8    32  268 MiB      143  804 MiB  100.00        0 B

For the record:

When moving OSDs to a new CRUSH location, I forgot to include root=default. The CRUSH rule takes the default bucket in its first step, so it could no longer find any OSDs.
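
For anyone hitting the same symptom, a rough sketch of the mistake; the exact commands I used are not in this mail, and the OSD id and host name below are placeholders:

    # Assumed: moving an OSD while naming only the host puts its
    # branch outside the "default" root in my setup:
    ceph osd crush move osd.0 host=node1

    # Including root=default keeps the OSD under the root that the
    # rule's "step take default" starts from:
    ceph osd crush move osd.0 root=default host=node1

    # Verify the placement afterwards:
    ceph osd tree

With the OSDs back under root=default, the rule can map PGs again and MAX AVAIL stops showing 0 B.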


--
Andre Tann