Hi Andras,
I think you're missing (at least!) one more important aspect in your
calculations, which is the write block size. BlueStore compresses each
write block independently. As the Ceph object size in your case is
presumably 4 MiB, which (as far as I understand EC functioning) is
split into 9 parts [...]
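
To make that concrete, here is a rough back-of-the-envelope sketch of
the effect. All numbers in it are assumptions for illustration (a
64 KiB allocation unit and 2:1 compressible data), not values taken
from this cluster:

    import math

    KiB = 1024
    min_alloc = 64 * KiB   # assumed BlueStore allocation unit (min_alloc_size)
    raw_ratio = 2.0        # assumed compressibility of the payload

    def allocated(write_block):
        # Each write block is compressed on its own, and the resulting
        # blob is stored rounded up to whole allocation units.
        compressed = write_block / raw_ratio
        return math.ceil(compressed / min_alloc) * min_alloc

    for wb_kib in (64, 128, 455, 4096):
        wb = wb_kib * KiB
        alloc = allocated(wb)
        print(f"{wb_kib:>5} KiB write block -> {alloc // KiB:>5} KiB allocated "
              f"({wb / alloc:.2f}:1 effective)")

The smaller the independently compressed block, the more of the
savings disappears into the roundup to whole allocation units.
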
Hi Igor,
Thanks for the insightful details on how to interpret the compression
data. I'm still a bit confused about why compression doesn't work
better in my case, so I've decided to try a test. I created a 16 GiB
cephfs file which is just a repeat of the 4 characters 'abcd',
essentially 4 billion times [...]
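
For anyone who wants to repeat the experiment, a minimal sketch of it
is below; the mount point, file name and OSD id are hypothetical, and
the counter names assume the usual "bluestore" section of the OSD
perf dump:

    import json
    import subprocess

    MOUNT = "/mnt/cephfs"                     # assumed CephFS mount point
    PATTERN = b"abcd" * (1024 * 1024 // 4)    # 1 MiB of the repeating pattern
    CHUNKS = 16 * 1024                        # 16 GiB written in 1 MiB chunks

    # Write the highly compressible test file into CephFS.
    with open(f"{MOUNT}/compress-test.dat", "wb") as f:
        for _ in range(CHUNKS):
            f.write(PATTERN)

    # On an OSD host, read the BlueStore compression counters for one
    # OSD (osd.0 here is just an example id).
    dump = json.loads(subprocess.check_output(
        ["ceph", "daemon", "osd.0", "perf", "dump"]))
    bs = dump["bluestore"]
    for key in ("bluestore_compressed_original",
                "bluestore_compressed",
                "bluestore_compressed_allocated"):
        print(key, bs[key])
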
Hi Andras,
please find my answers inline.
On 2/15/2020 12:27 AM, Andras Pataki wrote:
We're considering using bluestore compression for some of our data,
and I'm not entirely sure how to interpret compression results. As an
example, one of the osd perf dump results shows:
"bluestore