Thanks Casey. This helped me understand the purpose of this pool. I
trimmed the usage logs, which significantly reduced the number of keys
stored in that index, and I may even disable the usage log entirely as I
don't believe we use it for anything.
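For anyone who finds this thread later, the trim looks roughly like this (dates are placeholders, not the window we actually used):

```
# Trim usage log entries for all users within a date window
radosgw-admin usage trim --start-date=2015-01-01 --end-date=2019-01-01
```

Disabling the log should just be `rgw enable usage log = false` in the RGW client section of ceph.conf, followed by a radosgw restart.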
On Fri, May 24, 2019 at 3:51 PM Casey Bodley wrote:
On 5/24/19 1:15 PM, shubjero wrote:
Thanks for chiming in Konstantin!
Wouldn't setting this value to 0 disable the sharding?
Reference: http://docs.ceph.com/docs/mimic/radosgw/config-ref/
rgw override bucket index max shards
Description: Represents the number of shards for the bucket index
object, a value of zero indicates there is no sharding.
I see this in the config: `"rgw_override_bucket_index_max_shards": "8"`. Should this be increased?
Should be decreased to the default `0`, I think.
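That is, remove the override (or set it back to the default) in ceph.conf on the RGW hosts, something like this (the section name is a placeholder):

```
[client.rgw.gateway1]
rgw override bucket index max shards = 0
```

and restart the radosgw instances.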
Modern Ceph releases resolve large omaps automatically via bucket
dynamic resharding:
```
{
    "option": {
        "name": "rgw_dynamic_resharding",
        ...
```
Hi there,
We have an old cluster, originally built on Giant, that we have maintained and
upgraded over time and is now running Mimic 13.2.5. The other day we
received a HEALTH_WARN about 1 large omap object in the pool '.usage', which
is our usage_log_pool as defined in our radosgw zone.
I am trying to
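For anyone hitting the same warning, the offending object can usually be pinned down like this (`.usage` is our pool from above, everything else is generic):

```
# Show which pool triggered the LARGE_OMAP_OBJECTS warning
ceph health detail

# Count omap keys per object in the usage pool
for obj in $(rados -p .usage ls); do
    echo "$obj: $(rados -p .usage listomapkeys "$obj" | wc -l)"
done
```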