Igor,

You're a bloomin' genius, as they say.

Disabling auto compaction allowed OSDs 11 and 12 to spin up/out. The 7 down
PGs recovered; there were a few previously unfound objects, which I went
ahead and deleted, since this is EC and revert isn't an option.
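
(For anyone following along later: I used the standard mark_unfound_lost
form, roughly as below. The PG ID is a placeholder, and "revert" isn't
supported for EC pools, hence "delete".)

```shell
# <pgid> is a placeholder; revert is not available on EC pools
ceph pg <pgid> mark_unfound_lost delete
```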

HEALTH OK :)

I'm now intending to re-enable auto compaction. Should I also fsck the rest
of the OSDs, or is the typical scrub/deep scrub cycle sufficient? (No PGs
are behind on scrubbing, whereas they were during the degraded period.)
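
My plan for re-enabling is simply to drop disable_auto_compactions=true from
the option string you gave, keeping the rest of the BlueStore defaults
intact, i.e.:

```ini
# ceph.conf -- same option string as before, minus disable_auto_compactions=true
[osd]
bluestore_rocksdb_options = "compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152,max_background_compactions=2,max_total_wal_size=1073741824"
```

And if fsck is worthwhile, I assume it would be offline, one OSD at a time
(again, $ID is a placeholder):

```shell
# OSD must be stopped before an offline fsck
systemctl stop ceph-osd@$ID
ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-$ID
systemctl start ceph-osd@$ID
```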

Time will tell if I actually lost data, I guess.

On Mon, Dec 21, 2020 at 8:37 AM Igor Fedotov <ifedo...@suse.de> wrote:

> Hi Jeremy,
>
> you might want to try RocksDB's disable_auto_compactions option for that.
>
> To adjust RocksDB options, one should edit/insert
> bluestore_rocksdb_options in ceph.conf.
>
> E.g.
>
> bluestore_rocksdb_options =
> "disable_auto_compactions=true,compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152,max_background_compactions=2,max_total_wal_size=1073741824"
>
>
> Please note that the BlueStore-specific defaults for the RocksDB settings
> are re-provided to make sure they aren't reset to RocksDB's own defaults.
>
> Hope this helps.
>
> Thanks,
>
> Igor
>
> On 12/21/2020 2:56 AM, Jeremy Austin wrote:
>
> On Sun, Dec 20, 2020 at 2:25 PM Jeremy Austin <jhaus...@gmail.com> wrote:
>
>> Will attempt to disable compaction and report.
>>
>
> Not sure I'm doing this right. In [osd] section of ceph.conf, I added
> periodic_compaction_seconds=0
>
> and attempted to start the OSDs in question. Same error as before. Am I
> setting compaction options correctly?
>
>
> --
> Jeremy Austin
> jhaus...@gmail.com
>
>

-- 
Jeremy Austin
jhaus...@gmail.com
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
