Hm, according to https://tracker.ceph.com/issues/24025 snappy compression
should be available out of the box at least since luminous. What ceph
version are you running?
On Wed, 26 Jun 2019 at 21:51, Rafał Wądołowski wrote:
We changed these settings. Our config now is:
bluestore_rocksdb_options =
"compression=kSnappyCompression,max_write_buffer_number=16,min_write_buffer_number_to_merge=3,recycle_log_file_num=16,compaction_style=kCompactionStyleLevel,write_buffer_size=50331648,target_file_size_base=50331648,max_backg
The sizes are determined by rocksdb settings - some details can be found
here: https://tracker.ceph.com/issues/24361
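
For anyone skimming the option string above, here is a rough
back-of-the-envelope sketch of what those byte values work out to. The
interpretation of each knob is standard rocksdb behaviour as I understand
it, not something stated in this thread:

    # Back-of-the-envelope for the sizes in bluestore_rocksdb_options above.
    MiB = 1024 * 1024

    write_buffer_size = 50331648             # one memtable = 48 MiB
    max_write_buffer_number = 16             # at most 16 memtables in memory
    min_write_buffer_number_to_merge = 3     # a flush merges 3 memtables at a time
    target_file_size_base = 50331648         # ~48 MiB SST files at level 1

    print(write_buffer_size // MiB, "MiB per memtable")                       # 48
    print(write_buffer_size * max_write_buffer_number // MiB,
          "MiB of memtables, worst case, per OSD")                            # 768
    print(write_buffer_size * min_write_buffer_number_to_merge // MiB,
          "MiB written to L0 per flush (before compression)")                 # 144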
One thing to note: in this thread
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-October/030775.html
it's noted that rocksdb can use up to 100% extra space during compaction.
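
To put a number on that, a minimal sketch; the 30 GB steady-state DB size
is purely an illustrative assumption:

    # Sketch: headroom for the bluestore DB device, assuming the worst case
    # from the thread above (rocksdb may temporarily need up to 100% extra
    # space during compaction). The 30 GB steady-state figure is hypothetical.
    steady_state_db_gb = 30
    compaction_worst_case_gb = steady_state_db_gb * 2
    print("plan for at least", compaction_worst_case_gb, "GB on the DB device")  # 60 GB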
Why did you select these specific sizes? Are there any tests/research
on them?
Best Regards,
Rafał Wądołowski
On 24.06.2019 13:05, Konstantin Shalygin wrote:
>> Hi
>>
>> Have been thinking a bit about rocksdb and EC pools:
>>
>> Since a RADOS object written to an EC(k+m) pool is split into several
>> smaller pieces, the OSD will receive many more, smaller objects
>> compared to what it would receive in a replicated setup.
>>
>> This must mean that the rocksdb will also
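
To make the quoted concern concrete, a small sketch comparing the pieces an
OSD cluster ends up storing under replication vs. EC; the replication size,
the 4+2 profile, the object size and the object count are all illustrative
assumptions, not numbers from the thread:

    # Sketch: how the number and size of on-disk pieces per object changes
    # between a replicated pool and an EC(k+m) pool. All concrete numbers
    # (object size, object count, profiles) are illustrative assumptions.
    object_size_mib = 4          # e.g. a default-sized RADOS/RBD object
    num_objects = 1_000_000      # objects written to the pool

    # Replicated, size=3: every copy is the full object.
    replica_size = 3
    replicated_pieces = num_objects * replica_size     # 3,000,000 pieces
    replicated_piece_mib = object_size_mib             # 4 MiB each

    # EC 4+2: each object becomes k data chunks + m coding chunks of size/k.
    k, m = 4, 2
    ec_pieces = num_objects * (k + m)                   # 6,000,000 pieces
    ec_piece_mib = object_size_mib / k                  # 1 MiB each

    print(replicated_pieces, "pieces of", replicated_piece_mib, "MiB (replicated)")
    print(ec_pieces, "pieces of", ec_piece_mib, "MiB (EC 4+2)")
    # Twice as many on-disk objects in total, each with its own onode/extent
    # metadata in rocksdb (as I understand bluestore), which is the effect the
    # quoted question is getting at.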