The v19.2.0 Squid release notes
https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/#highlights say:

> BlueStore RocksDB LZ4 compression is now enabled by default to improve 
> average performance and "fast device" space usage.

On one EC pool with many files I am seeing warnings like this:

    osd.6 spilled over 8.0 GiB metadata from 'db' device (36 GiB used of 60 GiB) to slow device

and I hope that LZ4 compression might help with that.
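
I assume that warning is the BLUEFS_SPILLOVER health check, so something like this should show the current per-OSD spillover:

    ceph health detail | grep -i spill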

What do I need to do to "migrate" my OSDs to that?
I have already upgraded the cluster to Ceph 19.

Running

    grep 'Options.compression: ' /var/log/ceph/ceph-osd.6.log

already shows the switch from

    rocksdb:          Options.compression: NoCompression

to

    rocksdb:          Options.compression: LZ4

in the logs.

But does restarting the OSDs already automatically compress everything with
LZ4, or do I need to run some operation such as

    ceph daemon osd.6 compact

or similar to make sure that existing data is actually compressed?
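
My current plan (just a sketch, assuming the BlueFS perf counters db_used_bytes / slow_used_bytes reflect the spillover) would be to compact a single OSD and compare the BlueFS usage before and after:

    # BlueFS usage before compaction
    ceph daemon osd.6 perf dump | grep -E 'db_used_bytes|slow_used_bytes'

    # trigger a manual RocksDB compaction on this OSD
    ceph daemon osd.6 compact

    # BlueFS usage afterwards
    ceph daemon osd.6 perf dump | grep -E 'db_used_bytes|slow_used_bytes'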

Also, could anybody clarify whether this setting is the `compression_algorithm` from
https://docs.ceph.com/en/squid/rados/operations/pools/#setting-pool-values
and
https://docs.ceph.com/en/squid/rados/configuration/bluestore-config-ref/#confval-bluestore_compression_algorithm
or whether it is something different (e.g. whether that one applies to the "actual data" instead of the "metadata")?

I suspect it's different, because `ceph config show-with-defaults osd.6 | grep compression` reveals:

    bluestore_compression_algorithm                             snappy
    bluestore_compression_mode                                  none
    bluestore_rocksdb_options                                   compression=kLZ4Compression,...

From this it looks like `bluestore_compression*` is the "Inline compression" for data
(https://docs.ceph.com/en/reef/rados/configuration/bluestore-config-ref/#inline-compression),
and `bluestore_rocksdb_options` is what the "RocksDB compression" is about.
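
If that reading is right, the per-pool data compression from the first link would be configured separately, roughly like this (only to illustrate the distinction; `mypool` is a placeholder and I have not actually enabled this):

    # per-pool "inline" data compression -- unrelated to the RocksDB metadata compression
    ceph osd pool set mypool compression_algorithm lz4
    ceph osd pool set mypool compression_mode aggressive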

Still, the question remains how to bring all the existing data over so that it
actually gets compressed.
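
If a manual compaction is indeed needed, I guess something along these lines would roll it out to all OSDs (untested sketch; I am assuming `ceph tell osd.<id> compact` does the same as the admin-socket `compact` command):

    # compact the RocksDB of every OSD, one after another
    for id in $(ceph osd ls); do
        ceph tell osd.$id compact
    done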

Thanks!