Hi Simon and Janne,
Thanks for the replies.
It does indeed seem to be related to bluestore_min_alloc_size.
In an old thread I've also found the following:
*S3 object saving pipeline:*
*- S3 object is divided into multipart shards by client.*
*- Rgw shards each multipart shard into rados objects of siz
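To get a feel for where the padding comes in, I tried to put that pipeline into a small sketch. The 4 MiB stripe per rados object, the 64 KiB allocation unit and the simple divide-by-k chunking are assumptions on my side (it ignores the EC stripe_unit and per-object metadata), so treat it as an illustration rather than a description of RGW internals:

import math

STRIPE    = 4 * 1024**2   # assumed size of each rados object RGW writes per multipart part
K, M      = 6, 3          # erasure-coding profile of this pool (6 data + 3 coding chunks)
MIN_ALLOC = 64 * 1024     # assumed bluestore_min_alloc_size_hdd (the old HDD default)

def round_up(n, unit):
    return math.ceil(n / unit) * unit

def raw_usage(part_bytes):
    """Bytes actually allocated on disk for one S3 multipart part."""
    total, remaining = 0, part_bytes
    while remaining > 0:
        rados_obj = min(remaining, STRIPE)    # the part is cut into rados objects
        shard = math.ceil(rados_obj / K)      # each rados object is EC-split into k data shards
        # all k+m shards get padded up to the allocation unit on their OSDs
        total += (K + M) * round_up(shard, MIN_ALLOC)
        remaining -= rados_obj
    return total

print(raw_usage(15 * 1024**2 + 123) / (15 * 1024**2 + 123))  # ~1.6 for a 15 MiB part
print(raw_usage(10 * 1024) / (10 * 1024))                    # ~58 for a 10 KiB tail object

The point being: every rados object produces k+m shards, each rounded up to the allocation unit, so the smaller the tail objects, the worse the overhead gets compared to the ideal (k+m)/k = 1.5.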
Kristof Coucke writes:
> I have an issue on my Ceph cluster.
> For one of my pools I have 107TiB STORED and 298TiB USED.
> This is strange, since I've configured erasure coding (6 data chunks, 3
> coding chunks).
> So, in an ideal world this should result in approx. 160.5TiB USED.
> The question n
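Just to spell out the arithmetic on the numbers quoted above (nothing here beyond what you already wrote):

stored_tib = 107.0                 # STORED reported by ceph df for the pool
used_tib   = 298.0                 # USED reported for the same pool
k, m       = 6, 3                  # erasure-coding profile

expected_used = stored_tib * (k + m) / k
print(expected_used)               # 160.5 TiB -- the "ideal world" figure
print(used_tib - expected_used)    # ~137.5 TiB not explained by EC overhead alone

So roughly 137 TiB of the raw usage has to come from something other than the erasure-coding overhead itself.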
On Wed, 12 Feb 2020 at 12:58, Kristof Coucke wrote:
> For one of my pools I have 107TiB STORED and 298TiB USED.
> This is strange, since I've configured erasure coding (6 data chunks, 3
> coding chunks).
> So, in an ideal world this should result in approx. 160.5TiB USED.
>
> There are 473+M objects
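If the 473+M figure is the rados object count for that pool, the gap translates into a per-object overhead that is easy to estimate (a back-of-the-envelope on my side, assuming the whole gap is allocation padding):

gap_bytes  = (298 - 160.5) * 2**40          # the unexplained raw usage
objects    = 473e6                          # rados objects in the pool (quoted above)
shards     = 9                              # k+m shards per object

per_object = gap_bytes / objects            # roughly 312 KiB per rados object
per_shard  = per_object / shards            # roughly 35 KiB per shard
print(per_object / 1024, per_shard / 1024)

An average of ~35 KiB wasted per shard is about what you would expect if every shard is rounded up to a 64 KiB bluestore_min_alloc_size_hdd (on average about half an allocation unit), so the numbers are at least consistent with that explanation.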
I just found an interesting thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/024589.html
I assume this is the case I’m dealing with.
The question is: can I safely adapt the parameter
bluestore_min_alloc_size_hdd, and how will the system react? Is this
backwards compatible?
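From what I've read so far, the value only seems to be applied when an OSD is created, so I assume existing OSDs would keep the allocation size they were built with and I would have to re-provision them; please correct me if that's wrong. To get a feeling for what a smaller allocation unit would buy, I did a quick projection (the 64 KiB and 4 KiB values and the half-a-unit average waste per shard are assumptions on my side):

objects, shards = 473e6, 9                  # rados objects in the pool, k+m shards each

def padding_tib(avg_waste_per_shard):
    """Total padding in TiB if every shard wastes avg_waste_per_shard bytes."""
    return objects * shards * avg_waste_per_shard / 2**40

print(padding_tib(32 * 1024))   # ~64 KiB allocation unit: about 127 TiB of padding
print(padding_tib(2 * 1024))    # ~ 4 KiB allocation unit: about   8 TiB of padding

That difference is in the same ballpark as the gap between the 298 TiB USED and the 160.5 TiB I would expect from erasure coding alone.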