Hi Andrei,

The most obvious explanation is space overhead caused by BlueStore allocation granularity: e.g. if bluestore_min_alloc_size is 64K and the average object size is 16K, one wastes 48K per object on average (see the rough sketch right after the list below). This is speculation so far, though, as we lack key information about your cluster:

- Ceph version

- Main device type for the OSDs: HDD or SSD.

- BlueStore or FileStore.

- Average RGW object size.
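
To make that concrete, here is a minimal sketch of the arithmetic (the figures are the example values from above, not measurements from your cluster):

    import math

    # Example values from above, not your cluster's actual settings.
    min_alloc_size  = 64 * 1024   # bluestore_min_alloc_size, bytes
    avg_object_size = 16 * 1024   # average RGW object size, bytes

    # BlueStore rounds each object's allocation up to a multiple of min_alloc_size.
    allocated = math.ceil(avg_object_size / min_alloc_size) * min_alloc_size
    wasted = allocated - avg_object_size
    print(f"allocated per object: {allocated // 1024} KiB")  # 64 KiB
    print(f"wasted per object:    {wasted // 1024} KiB")     # 48 KiB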

You might also want to collect and share performance counter dumps ("ceph daemon osd.N perf dump") from a couple of your OSDs, along with a "ceph osd df tree" report.
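
If you are on BlueStore, a quick way to see the overhead directly is to compare allocated vs. stored space from that perf dump. A minimal sketch (assuming your version exposes the bluestore_allocated / bluestore_stored counters; it must run on the host where the OSD lives, since it goes through the admin socket):

    import json, subprocess

    osd_id = 0  # pick one of your OSD ids
    dump = json.loads(subprocess.check_output(
        ["ceph", "daemon", f"osd.{osd_id}", "perf", "dump"]))

    bs = dump.get("bluestore", {})
    allocated = bs.get("bluestore_allocated", 0)  # bytes allocated on disk
    stored    = bs.get("bluestore_stored", 0)     # bytes of logical data stored
    if stored:
        overhead = allocated - stored
        print(f"osd.{osd_id}: allocated {allocated / 2**30:.1f} GiB, "
              f"stored {stored / 2**30:.1f} GiB, "
              f"overhead {overhead / 2**30:.1f} GiB ({100 * overhead / stored:.1f}%)")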


Thanks,

Igor


On 7/2/2019 11:43 AM, Andrei Mikhailovsky wrote:
Bump!


------------------------------------------------------------------------

    *From: *"Andrei Mikhailovsky" <and...@arhont.com>
    *To: *"ceph-users" <ceph-users@lists.ceph.com>
    *Sent: *Friday, 28 June, 2019 14:54:53
    *Subject: *[ceph-users] troubleshooting space usage

    Hi

    Could someone please explain / show how to troubleshoot the space
    usage in Ceph and how to reclaim the unused space?

    I have a small cluster with 40 OSDs and a replica count of 2, used
    mainly as a backend for CloudStack as well as the S3 gateway. The
    used space doesn't make sense to me, especially for the rgw pool,
    so I am seeking help.

    Here is what I found from the client:

    "ceph -s" shows:

     usage:   89 TiB used, 24 TiB / 113 TiB avail

    "ceph df" shows:

    NAME                   ID   USED      %USED   MAX AVAIL   OBJECTS
    Primary-ubuntu-1       5     27 TiB   90.11     3.0 TiB   7201098
    Primary-ubuntu-1-ssd   57   1.2 TiB   89.62     143 GiB    359260
    .rgw.buckets           19    15 TiB   83.73     3.0 TiB   8742222

    The usage of Primary-ubuntu-1 and Primary-ubuntu-1-ssd is in line
    with my expectations. However, the .rgw.buckets pool seems to be
    using far too much. Summing the size_kb values from "radosgw-admin
    bucket stats" across all buckets gives about 6.5 TB, so I am trying
    to figure out why .rgw.buckets is using 15 TB instead of 6.5 TB.
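
    For reference, the per-bucket totals can be tallied along these
    lines (a rough sketch; it assumes "radosgw-admin bucket stats
    --format=json" returns a JSON array whose per-bucket "usage"
    sections carry size_kb values, which may vary between versions):

        import json, subprocess

        # Sum size_kb across every bucket and usage category.
        stats = json.loads(subprocess.check_output(
            ["radosgw-admin", "bucket", "stats", "--format=json"]))

        total_kb = 0
        for bucket in stats:
            for section in bucket.get("usage", {}).values():
                total_kb += section.get("size_kb", 0)
        print(f"total across buckets: {total_kb / 2**30:.2f} TiB")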

    Thanks

    Andrei

    _______________________________________________
    ceph-users mailing list
    ceph-users@lists.ceph.com
    http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

