Hello.

I've created an rgw installation and uploaded about 60M files into a single bucket. Removing them looked like a long adventure, so I "ceph osd pool rm'ed" both default.rgw.data and default.rgw.index.
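
For reference, the removal was done roughly like this (from memory, so take the exact invocation with a grain of salt):

# ceph osd pool rm default.rgw.data default.rgw.data --yes-i-really-really-mean-it
# ceph osd pool rm default.rgw.index default.rgw.index --yes-i-really-really-mean-it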

Now I have this:

# rados lspools
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log

(same as the output of ceph osd pool ls)

but ceph -s shows:

    pools:   6 pools, 256 pgs

Moreover, ceph osd df shows in its TOTAL line:

    raw 5.5 TiB, use 3.6 TiB, data 3.4 TiB, omap 35 GiB, meta 86 GiB, avail 1.9 TiB, %use 65.36

I tried to force a deep scrub on all OSDs, but this didn't help.
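
Something along these lines, for every OSD:

# for i in $(ceph osd ls); do ceph osd deep-scrub osd.$i; done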

Currently I have only a few tiny bits of data in the remaining pools, and I don't understand where the space has gone.
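
(I'm looking at the per-pool numbers with something like the following; exact commands may differ:)

# ceph df detail
# rados df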

The installation is a fresh Nautilus, BlueStore over HDDs.


A few questions:

1. What is this space called? Lost? Non-GC'ed? Cached?

2. Is it normal for lspools and the total pool count in ceph -s to differ?

3. Where can I continue debugging this?

4. (of course) How do I fix this?

Thanks!

