-Brent
-Original Message-
From: Matt Benjamin [mailto:mbenj...@redhat.com]
Sent: Saturday, September 1, 2018 2:56 PM
To: Brent Kennedy
Cc: Will Marley ; ceph-users
Subject: Re: [ceph-users] OMAP warning ( again )
Apparently it is the case presently that when dynamic resharding completes
>
> We need defined remediation steps for this. I was thinking of hitting up
> the IRC room since RGW folks don’t seem to be around :(
>
> -Brent
>
-Original Message-
From: Will Marley [mailto:will.mar...@ukfast.co.uk]
Sent: Friday, August 31, 2018 6:08 AM
To: Brent Kennedy
Subject: RE: [ceph-users] OMAP warning ( again )
Hi Brent,
We're currently facing a similar issue. Did a manual reshard repair this for
you? Or do you have any more information to hand regarding a solution?
> "swift_versioning": "false",
> "swift_ver_location": "",
> "index_type": 0,
> "mdsearch_config": [],
> "reshard_status": 0,
> "new_b
"mdsearch_config": [],
"reshard_status": 0,
"new_bucket_instance_id": ""
When I run this command to change the number of shards:
"radosgw-admin reshard add --bucket=BKTEST --num-shards=2"
Then run the status command:
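Since the status command got mangled in the archive above, here is a hedged sketch of the reshard workflow as I understand it on Luminous (these are standard radosgw-admin subcommands, not necessarily the exact lines from the original mail; check `radosgw-admin help` on your version):

```shell
# Queue the bucket for resharding to 2 shards, as in the message above
radosgw-admin reshard add --bucket=BKTEST --num-shards=2

# List the pending reshard operations
radosgw-admin reshard list

# Check the reshard status of this bucket
radosgw-admin reshard status --bucket=BKTEST

# Process pending reshard entries now instead of waiting for the
# background processing interval
radosgw-admin reshard process
```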
Search the cluster log for 'Large omap object found' for more details.
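To find which object actually triggered the warning, one can do what the health message suggests and grep the cluster log. A sketch, assuming the default log location under /var/log/ceph/ on the monitor hosts (adjust the path for your deployment):

```shell
# Show the full health detail, including which pool holds the large omap object
ceph health detail

# Find the specific "Large omap object found" entries in the cluster log
# (default location on the mon hosts; path may differ in your deployment)
grep 'Large omap object found' /var/log/ceph/ceph.log
```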
On Wed, Aug 1, 2018 at 3:50 AM, Brent Kennedy wrote:
> Upgraded from 12.2.5 to 12.2.6, got a "1 large omap objects" warning
> message, then upgraded to 12.2.7 and the message went away. I just added
> four OSDs to balance out the cluster ( we had some servers with fewer drives
> in them; jbod config ) and now the "1 large omap objects" warning message is
> back.