[ceph-users] Re: Ceph Erasure Coding - Stored vs used

2020-02-12 Thread Kristof Coucke
Hi Simon and Janne,

Thanks for the reply. It seems indeed related to the bluestore_min_alloc_size. In an old thread I've also found the following:

*S3 object saving pipeline:*
*- S3 object is divided into multipart shards by client.*
*- Rgw shards each multipart shard into rados objects of siz[...]*
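
A minimal sketch of the padding effect this pipeline implies, assuming EC 6+3 and the 64 KiB bluestore_min_alloc_size_hdd default discussed in the thread (the 40 KiB example object is made up for illustration):

    # Rough estimate only -- not RGW/BlueStore code. Each rados object is
    # split into k data chunks plus m coding chunks, and BlueStore rounds
    # every chunk up to the allocation unit.
    import math

    def bytes_on_disk(rados_object_size, k=6, m=3, min_alloc=64 * 1024):
        chunk = math.ceil(rados_object_size / k)           # size of one data chunk
        padded = math.ceil(chunk / min_alloc) * min_alloc  # rounded up to min_alloc
        return padded * (k + m)                            # k data + m coding chunks

    # A hypothetical 40 KiB tail object stores 40 KiB of data but occupies
    # 9 * 64 KiB = 576 KiB of raw space.
    print(bytes_on_disk(40 * 1024) // 1024)   # -> 576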

[ceph-users] Re: Ceph Erasure Coding - Stored vs used

2020-02-12 Thread Simon Leinen
Kristof Coucke writes:
> I have an issue on my Ceph cluster.
> For one of my pools I have 107TiB STORED and 298TiB USED.
> This is strange, since I've configured erasure coding (6 data chunks, 3 coding chunks).
> So, in an ideal world this should result in approx. 160.5TiB USED.
> The question n[...]
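
For reference, the 160.5 TiB figure is just STORED times (k+m)/k; a quick check of the numbers quoted above, with padding ignored:

    # Numbers as reported in the message: 107 TiB stored, 298 TiB used, EC 6+3.
    stored, used = 107.0, 298.0
    k, m = 6, 3
    print(stored * (k + m) / k)     # -> 160.5 TiB expected USED without padding
    print(round(used / stored, 2))  # -> 2.79, the observed amplification factor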

[ceph-users] Re: Ceph Erasure Coding - Stored vs used

2020-02-12 Thread Janne Johansson
On Wed 12 Feb 2020 at 12:58, Kristof Coucke wrote:
> For one of my pools I have 107TiB STORED and 298TiB USED.
> This is strange, since I've configured erasure coding (6 data chunks, 3 coding chunks).
> So, in an ideal world this should result in approx. 160.5TiB USED.
>
> There are 473+M obje[...]
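
A back-of-envelope estimate from the object count quoted above, under the assumption that all 473M rados objects are equally sized and that the OSDs use the 64 KiB bluestore_min_alloc_size_hdd default (the real distribution will differ, since RGW multipart splits S3 objects unevenly):

    import math

    TiB = 2 ** 40
    stored = 107 * TiB
    objects = 473e6
    k, m, min_alloc = 6, 3, 64 * 1024

    avg_obj = stored / objects                          # ~243 KiB per object
    chunk = avg_obj / k                                 # ~40 KiB per EC chunk
    padded = math.ceil(chunk / min_alloc) * min_alloc   # rounded up to 64 KiB
    print(padded * (k + m) * objects / TiB)             # ~254 TiB raw

    # Roughly the same ballpark as the reported 298 TiB USED, which is
    # consistent with allocation-size padding being the main overhead.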

[ceph-users] Re: Ceph Erasure Coding - Stored vs used

2020-02-12 Thread Kristof Coucke
I just found an interesting thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/024589.html

I assume this is the case I'm dealing with. The question is: can I safely adapt the parameter bluestore_min_alloc_size_hdd, and how will the system react? Is this backwards compat[...]
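
Not an answer to the compatibility question, but a sketch of what a smaller allocation unit would mean for the padding overhead on this workload, using the same rough assumptions as the estimate earlier in the thread. As far as I understand it, the allocation size is fixed when an OSD is created, so a changed bluestore_min_alloc_size_hdd would only apply to OSDs (re)deployed afterwards:

    import math

    def raw_used_tib(min_alloc, stored_tib=107, objects=473e6, k=6, m=3):
        TiB = 2 ** 40
        chunk = (stored_tib * TiB / objects) / k            # avg EC chunk size
        padded = math.ceil(chunk / min_alloc) * min_alloc   # rounded up
        return padded * (k + m) * objects / TiB

    print(round(raw_used_tib(64 * 1024)))   # ~254 TiB with the 64 KiB HDD default
    print(round(raw_used_tib(4 * 1024)))    # ~174 TiB with a 4 KiB allocation unit
                                            # (vs. 160.5 TiB with no padding at all)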