If resharding is not an option at all, then you can increase the
osd_deep_scrub_large_omap_object_key_threshold setting, which is not the best
idea. I would still go with resharding, even though it might require taking at
least the slave sites offline. In the future you can set a higher number of
shards during the initial bucket creation.
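Roughly, the two options would look like this (the threshold value, bucket name
and shard count below are only placeholders, adjust them to your setup):

    # not recommended: raise the large-omap warning threshold
    ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 500000

    # preferred: manually reshard the bucket to a higher shard count
    radosgw-admin bucket reshard --bucket=<bucket-name> --num-shards=101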
Hello,
I agree with that point. When ceph creates lvm volumes it adds lvm tags to
them. That's how ceph detects that they are occupied by ceph. So you should
remove the lvm volumes and, even better, clean all data on those lvm volumes.
Usually it's enough to clean just the head of the lvm partition, where the LVM
labels and metadata are stored.
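A sketch of that cleanup (the VG/LV names and the device path are placeholders,
double check you are wiping the right disk):

    # let ceph-volume remove the LV, its tags and wipe the device
    ceph-volume lvm zap --destroy /dev/sdX

    # or do it by hand
    lvremove <vg_name>/<lv_name>
    vgremove <vg_name>
    pvremove /dev/sdX
    wipefs -a /dev/sdX
    dd if=/dev/zero of=/dev/sdX bs=1M count=100 oflag=direct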
There should not be any issues using rgw for other buckets while
re-sharding.
As for the doubling of the number of objects after the reshard, that is an
interesting situation. After a manual reshard is done, there might be leftovers
from the old bucket index, as during the reshard new .dir.new_bucket_index
objects are created while the old ones can stay behind until they are cleaned
up.
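If that is the case, the leftover index objects can usually be found and
removed like this (availability of the stale-instances commands depends on your
Ceph release, and the pool name is a placeholder):

    # list bucket index instances that are no longer referenced
    radosgw-admin reshard stale-instances list

    # remove them (on a multisite setup check the docs before running this)
    radosgw-admin reshard stale-instances rm

    # or inspect the index pool directly for old .dir.* objects
    rados -p <zone>.rgw.buckets.index ls | grep '^\.dir\.'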
No, you can not do that, because the RocksDB holding the omap key/values and
the WAL would be gone, meaning all xattrs and omap data would be gone too.
Hence the osd would become non-operational. But if you notice that the ssd
starts throwing errors, you can start migrating the bluefs device to a new
partition with:
ceph-bluestore-tool bluefs-bdev-migrate
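A rough example of such a migration (the OSD id, the source link and the target
partition are placeholders; the OSD must be stopped first):

    systemctl stop ceph-osd@<ID>
    ceph-bluestore-tool bluefs-bdev-migrate \
        --path /var/lib/ceph/osd/ceph-<ID> \
        --devs-source /var/lib/ceph/osd/ceph-<ID>/block.db \
        --dev-target /dev/<new-partition>
    systemctl start ceph-osd@<ID>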
Can the omap key/values be regenerated?
> I always thought these data would be stored in the rgw pools. Or am I
> mixing things up and the bluestore metadata has its own omap k/v? And then
> there is the omap k/v from the rgw objects?
>
>
> On 10.11.2021 at 22:37, Сергей Процун wrote:
>
>
No, you can not do online compaction.
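For reference, offline compaction (with the OSD stopped) would be something
along these lines (the OSD id is a placeholder):

    systemctl stop ceph-osd@<ID>
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-<ID> compact
    systemctl start ceph-osd@<ID>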
On Fri, 5 Nov 2021 at 17:22, Szabo, Istvan (Agoda) <istvan.sz...@agoda.com>
wrote:
> Seems like it can help, but after 1-2 days it comes back on a different and
> in some cases on the same osd as well.
> Is there any other way to compact online as it comp
onths).
>
>
> On Wed, 10 Nov 2021 at 23:33, Сергей Процун <proserge...@gmail.com> wrote:
>
>> rgw.meta contains the user, bucket and bucket instance metadata.
>>
>> rgw.bucket.index contains the bucket indexes, aka shards. For example, if
>> you have 32 shards, there are 32 .dir.<bucket_instance_id>.<shard> objects
>> for that bucket in the index pool.
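To see those shard objects for a given bucket (the bucket name, pool name and
instance id are placeholders):

    # the bucket instance id is the "id" field in the output
    radosgw-admin bucket stats --bucket=<bucket-name> | grep '"id"'

    # list the per-shard index objects for that bucket
    rados -p <zone>.rgw.buckets.index ls | grep <bucket_instance_id>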
The OSD will probably not start if the wal device is lost. You can give it a
try by removing the corresponding symlink to the block device,
/var/lib/ceph/osd/ceph-ID/block.wal. It will then use block.db for the wal in
that case.
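To check which devices the OSD actually uses before touching anything (the OSD
id is a placeholder):

    # block/db/wal symlinks of the OSD
    ls -l /var/lib/ceph/osd/ceph-<ID>/block*

    # what bluestore itself records about the main device
    ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-<ID>/block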
IOPS should be counted as well. I would go the 1:3 way if we are considering
IOPS. But its