Re: [ceph-users] Large OMAP Object

2019-11-20 Thread Paul Emmerich
> …dhils...@performair.com
> Sent: Friday, November 15, 2019 9:13 AM
> To: ceph-users@lists.ceph.com
> Cc: Stephen Self
> Subject: Re: [ceph-users] Large OMAP Object
>
> Wido;
>
> Ok, yes, I have tracked it down to the index for one of our buckets. I missed the ID in the c…

Re: [ceph-users] Large OMAP Object

2019-11-20 Thread Nathan Fish
> …dhils...@performair.com
> Sent: Friday, November 15, 2019 9:13 AM
> To: ceph-users@lists.ceph.com
> Cc: Stephen Self
> Subject: Re: [ceph-users] Large OMAP Object
>
> Wido;
>
> Ok, yes, I have tracked it down to the index for one of our buckets. I missed the ID in the c…

Re: [ceph-users] Large OMAP Object

2019-11-20 Thread DHilsbos
…dhils...@performair.com
Sent: Friday, November 15, 2019 9:13 AM
To: ceph-users@lists.ceph.com
Cc: Stephen Self
Subject: Re: [ceph-users] Large OMAP Object

Wido;

Ok, yes, I have tracked it down to the index for one of our buckets. I missed the ID in the ceph df output previously. Next time I'll wait to re…
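For readers trying to do the same mapping, a minimal sketch (the index pool name and object-name pattern are assumptions based on a default RGW deployment; adjust to your zone):

```
# Bucket index objects are typically named .dir.<bucket-instance-id>[.<shard>]
rados -p default.rgw.buckets.index ls | head

# List all buckets with their IDs and match against the ID from the warning
radosgw-admin bucket stats | grep -E '"bucket"|"id"'
```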

Re: [ceph-users] Large OMAP Object

2019-11-15 Thread DHilsbos
…@performair.com
www.PerformAir.com

-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wido den Hollander
Sent: Friday, November 15, 2019 8:40 AM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Large OMAP Object

On 11/15/19 4:35 PM, dhils…

Re: [ceph-users] Large OMAP Object

2019-11-15 Thread DHilsbos
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Paul Emmerich
Sent: Friday, November 15, 2019 8:48 AM
To: Wido den Hollander
Cc: Ceph Users
Subject: Re: [ceph-users] Large OMAP Object

Note that the size limit changed from 2M keys to 200k keys recently (14.2.3 or 14.2.2 or…
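The limit being referred to is, as far as I know, the OSD option osd_deep_scrub_large_omap_object_key_threshold, whose default dropped from 2,000,000 to 200,000 keys in that 14.2.x timeframe. A hedged sketch of inspecting or adjusting it on a Nautilus cluster (the value shown is illustrative):

```
ceph config get osd osd_deep_scrub_large_omap_object_key_threshold
# Raising the threshold only hides the warning; reshard or clean up where possible
ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 500000
```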

Re: [ceph-users] Large OMAP Object

2019-11-15 Thread Paul Emmerich
> > Dominic L. Hilsbos, MBA
> > Director – Information Technology
> > Perform Air International Inc.
> > dhils...@performair.com
> > www.PerformAir.com
> >
> > -----Original Message-----
> > From: Wido den Hollander [mailto:w...@42on.com]
> > …

Re: [ceph-users] Large OMAP Object

2019-11-15 Thread Wido den Hollander
> Director – Information Technology
> Perform Air International Inc.
> dhils...@performair.com
> www.PerformAir.com
>
> -----Original Message-----
> From: Wido den Hollander [mailto:w...@42on.com]
> Sent: Friday, November 15, 2019 1:56 AM
> To: Dominic Hilsbos…

Re: [ceph-users] Large OMAP Object

2019-11-15 Thread DHilsbos
…r.com

-----Original Message-----
From: Wido den Hollander [mailto:w...@42on.com]
Sent: Friday, November 15, 2019 1:56 AM
To: Dominic Hilsbos; ceph-users@lists.ceph.com
Cc: Stephen Self
Subject: Re: [ceph-users] Large OMAP Object

Did you check /var/log/ceph/ceph.log on one of the Monitors to see…

Re: [ceph-users] Large OMAP Object

2019-11-15 Thread Wido den Hollander
Did you check /var/log/ceph/ceph.log on one of the Monitors to see which pool and Object the large Object is in?

Wido

On 11/15/19 12:23 AM, dhils...@performair.com wrote:
> All;
>
> We had a warning about a large OMAP object pop up in one of our clusters overnight. The cluster is configured…
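A minimal sketch of the check Wido describes (log path from his message; the exact wording of the warning may differ between releases, so the grep pattern is an assumption):

```
# On a monitor host: find which pool/object triggered the warning
grep -i 'large omap object' /var/log/ceph/ceph.log

# The health detail output usually names the pool as well
ceph health detail | grep -iA3 'large omap'
```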

Re: [ceph-users] Large OMAP Object

2019-11-14 Thread JC Lopez
Hi,

This probably comes from your RGW, which is a big consumer/producer of OMAP for bucket indexes. Have a look at this previous post and just adapt the pool name to match the one where it's detected: https://www.spinics.net/lists/ceph-users/msg51681.html

Regards,
JC

> On Nov 14, 2019, at 15:23, …
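A hedged sketch of that approach, with an assumed default index pool name (substitute the pool named in your warning):

```
# Count omap keys per object in the index pool; the largest counts are the suspects
pool=default.rgw.buckets.index
for obj in $(rados -p "$pool" ls); do
  echo "$(rados -p "$pool" listomapkeys "$obj" | wc -l) $obj"
done | sort -n | tail
```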

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-06-12 Thread Wido den Hollander
On 6/11/19 9:48 PM, J. Eric Ivancich wrote:
> Hi Wido,
>
> Interleaving below
>
> On 6/11/19 3:10 AM, Wido den Hollander wrote:
>>
>> I thought it was resolved, but it isn't.
>>
>> I counted all the OMAP values for the GC objects and I got back:
>>
>> gc.0: 0
>> gc.11: 0
>> gc.14: 0
>> gc.…

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-06-11 Thread J. Eric Ivancich
Hi Wido,

Interleaving below

On 6/11/19 3:10 AM, Wido den Hollander wrote:
>
> I thought it was resolved, but it isn't.
>
> I counted all the OMAP values for the GC objects and I got back:
>
> gc.0: 0
> gc.11: 0
> gc.14: 0
> gc.15: 0
> gc.16: 0
> gc.18: 0
> gc.19: 0
> gc.1: 0
> gc.20: 0
> g…
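One way to produce per-object counts like Wido's, sketched under the assumption of a default Mimic-style layout (the log pool name, the "gc" namespace, and the default of 32 gc shards are all assumptions; adjust to your cluster):

```
for i in $(seq 0 31); do
  printf 'gc.%s: ' "$i"
  rados -p default.rgw.log --namespace gc listomapkeys "gc.$i" | wc -l
done
```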

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-06-11 Thread Wido den Hollander
On 6/4/19 8:00 PM, J. Eric Ivancich wrote:
> On 6/4/19 7:37 AM, Wido den Hollander wrote:
>> I've set up a temporary machine next to the 13.2.5 cluster with the 13.2.6 packages from Shaman.
>>
>> On that machine I'm running:
>>
>> $ radosgw-admin gc process
>>
>> That seems to work as intende…

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-06-04 Thread J. Eric Ivancich
On 6/4/19 7:37 AM, Wido den Hollander wrote:
> I've set up a temporary machine next to the 13.2.5 cluster with the 13.2.6 packages from Shaman.
>
> On that machine I'm running:
>
> $ radosgw-admin gc process
>
> That seems to work as intended! So the PR seems to have fixed it.
>
> Should be f…

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-06-04 Thread Wido den Hollander
On 5/30/19 2:45 PM, Wido den Hollander wrote:
>
> On 5/29/19 11:22 PM, J. Eric Ivancich wrote:
>> Hi Wido,
>>
>> When you run `radosgw-admin gc list`, I assume you are *not* using the "--include-all" flag, right? If you're not using that flag, then everything listed should be expired a…

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-05-30 Thread Wido den Hollander
On 5/29/19 11:22 PM, J. Eric Ivancich wrote:
> Hi Wido,
>
> When you run `radosgw-admin gc list`, I assume you are *not* using the "--include-all" flag, right? If you're not using that flag, then everything listed should be expired and be ready for clean-up. If after running `radosgw-admi…

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-05-29 Thread J. Eric Ivancich
Hi Wido,

When you run `radosgw-admin gc list`, I assume you are *not* using the "--include-all" flag, right? If you're not using that flag, then everything listed should be expired and be ready for clean-up. If after running `radosgw-admin gc process` the same entries appear in `radosgw-admin gc l…
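For reference, the commands under discussion (flags as described in this thread):

```
radosgw-admin gc list                 # expired entries, ready for clean-up
radosgw-admin gc list --include-all   # also shows entries whose expiry hasn't passed yet
radosgw-admin gc process              # run a garbage-collection pass immediately
```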

Re: [ceph-users] large omap object in usage_log_pool

2019-05-27 Thread shubjero
Thanks Casey. This helped me understand the purpose of this pool. I trimmed the usage logs, which reduced the number of keys stored in that index significantly, and I may even disable the usage log entirely as I don't believe we use it for anything.

On Fri, May 24, 2019 at 3:51 PM Casey Bodley wrot…
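A hedged sketch of the trim/disable steps mentioned (the date is illustrative and exact flags can vary between releases):

```
# Trim usage-log entries up to a cut-off date
radosgw-admin usage trim --end-date=2019-04-01

# To stop recording usage altogether, in ceph.conf for the RGW instances:
# [client.rgw.<name>]
# rgw_enable_usage_log = false
```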

Re: [ceph-users] large omap object in usage_log_pool

2019-05-24 Thread Casey Bodley
On 5/24/19 1:15 PM, shubjero wrote:
> Thanks for chiming in Konstantin! Wouldn't setting this value to 0 disable the sharding?
>
> Reference: http://docs.ceph.com/docs/mimic/radosgw/config-ref/
>
> rgw override bucket index max shards
> Description: Represents the number of shards for the bucket index ob…

Re: [ceph-users] large omap object in usage_log_pool

2019-05-24 Thread shubjero
Thanks for chiming in Konstantin! Wouldn't setting this value to 0 disable the sharding?

Reference: http://docs.ceph.com/docs/mimic/radosgw/config-ref/

rgw override bucket index max shards
Description: Represents the number of shards for the bucket index object, a value of zero indicates there is…

Re: [ceph-users] large omap object in usage_log_pool

2019-05-23 Thread Konstantin Shalygin
…in the config: `"rgw_override_bucket_index_max_shards": "8"`. Should this be increased?

Should be decreased to default `0`, I think. Modern Ceph releases resolve large omaps automatically via bucket dynamic resharding:

```
{
    "option": {
        "name": "rgw_dynamic_resharding",
…
```
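A sketch of the ceph.conf settings under discussion (the section name and comments are illustrative; defaults as I understand them for Mimic and later):

```
[client.rgw.<name>]
rgw_override_bucket_index_max_shards = 0   # 0 = do not force a fixed shard count
rgw_dynamic_resharding = true              # default: reshard busy bucket indexes automatically
```

To see whether any buckets are currently queued for resharding: `radosgw-admin reshard list`.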

Re: [ceph-users] large omap object

2018-06-14 Thread Gregory Farnum
There may be a mismatch between the auto-resharding and the omap warning code. Looks like you already have 349 shards, with 13 of them warning on size! You can increase a config value to shut that error up, but you may want to get somebody from RGW to look at how you've managed to exceed those defau…
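If the cluster is recent enough to have it, a hedged way to see how full each bucket's index shards are:

```
# Reports objects-per-shard and a fill_status per bucket
radosgw-admin bucket limit check
```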