This was after a while (I did notice that the number of objects went higher before it went lower). It is actually reporting more objects now, so I'm not sure whether some co-worker or program is writing to the filesystem... It got to these numbers and hasn't changed for the past couple of hours.
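
One way to tell whether something is still actively writing (as opposed to the MDS still purging) might be to watch the per-pool client I/O and the object counts over time. A rough sketch, using the pool names from the output below:

# ceph osd pool stats cephfs-data    # per-pool client I/O rates (rd/wr)
# ceph status                        # cluster-wide client I/O summary
# watch -n 60 'ceph df | grep -E "cephfs-data|new-ec-pool"'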

# ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    392 TiB     391 TiB      1.3 TiB          0.33
POOLS:
    NAME            ID     USED        %USED     MAX AVAIL     OBJECTS
    cephfs-meta     6       27 MiB         0       124 TiB          29
    cephfs-data     7      100 GiB      0.08       124 TiB       25600
    new-ec-pool     8      641 GiB      0.25       247 TiB      163991
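
For reference, a rough way to see what is actually left in the old pool: CephFS data objects are named <inode-hex>.<block-number>, so an object-name prefix can in principle be mapped back to a file by its inode number. The mount point and the HEXPREFIX placeholder below are just examples, not taken from the thread:

# rados -p cephfs-data ls | head
# rados -p cephfs-data ls | awk -F. '{print $1}' | sort -u | wc -l    # distinct inodes remaining
# find /mnt/cephfs -inum $((16#HEXPREFIX))    # HEXPREFIX = one of the object-name prefixes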

On 6/28/19 4:04 PM, Patrick Hein wrote:
AFAIK the MDS doesn't delete the objects immediately but defers them for later. If you check that again now, how many objects does it report?
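
If it is the deferred deletion (purge queue) that's still draining, the perf counters on the active MDS should show it. Roughly, run on the MDS host, substituting the real MDS name; exact counter names and availability may vary by release:

# ceph daemon mds.<name> perf dump purge_queue    # purge-queue counters, if present
# ceph daemon mds.<name> objecter_requests        # in-flight RADOS operations from the MDS, if available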

Jorge Garcia <jgar...@soe.ucsc.edu> wrote on Fri, 28 Jun 2019 at 23:16:


    On 6/28/19 9:02 AM, Marc Roos wrote:
    > 3. When everything is copied-removed, you should end up with an
    > empty datapool with zero objects.

    I copied the data to a new directory and then removed the data from
    the old directory, but ceph df still reports some objects in the old
    pool (not zero). Is there a way to track down what's still in the old
    pool, and how to delete it?

    Before delete:

    # ceph df
    GLOBAL:
         SIZE        AVAIL       RAW USED     %RAW USED
         392 TiB     389 TiB      3.3 TiB          0.83
    POOLS:
         NAME            ID     USED        %USED     MAX AVAIL     OBJECTS
         cephfs-meta     6       17 MiB         0       123 TiB          27
         cephfs-data     7      763 GiB      0.60       123 TiB      195233
         new-ec-pool     8      641 GiB      0.25       245 TiB      163991

    After delete:

    # ceph df
    GLOBAL:
         SIZE        AVAIL       RAW USED     %RAW USED
         392 TiB     391 TiB      1.2 TiB          0.32
    POOLS:
         NAME            ID     USED        %USED     MAX AVAIL     OBJECTS
         cephfs-meta     6       26 MiB         0       124 TiB          29
         cephfs-data     7       83 GiB      0.07       124 TiB       21175
         new-ec-pool     8      641 GiB      0.25       247 TiB      163991

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
