From: dhils...@performair.com
Sent: Friday, November 15, 2019 9:13 AM
To: ceph-users@lists.ceph.com
Cc: Stephen Self
Subject: Re: [ceph-users] Large OMAP Object

Wido;

Ok, yes, I have tracked it down to the index for one of our buckets. I missed the ID in the ceph df output previously. Next time I'll wait to re...

Dominic L. Hilsbos, MBA
Director – Information Technology
Perform Air International Inc.
dhils...@performair.com
www.PerformAir.com
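A sketch of one way to map a large bucket index object back to its bucket and reshard it; the bucket name and shard count below are placeholders, not values from this thread:

```
# Index objects are named .dir.<bucket marker>[.<shard>]; list bucket
# instances to find the owner, then reshard that bucket's index.
radosgw-admin metadata list bucket.instance
radosgw-admin bucket stats --bucket=<name>      # shows the bucket's id/marker
radosgw-admin bucket reshard --bucket=<name> --num-shards=16
```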
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wido den Hollander
Sent: Friday, November 15, 2019 8:40 AM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Large OMAP Object

On 11/15/19 4:35 PM, dhils...@performair.com wrote: ...
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Paul Emmerich
Sent: Friday, November 15, 2019 8:48 AM
To: Wido den Hollander
Cc: Ceph Users
Subject: Re: [ceph-users] Large OMAP Object

Note that the size limit changed from 2M keys to 200k keys recently (14.2.3 or 14.2.2 or ...).
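A hedged sketch of how to inspect or adjust the threshold Paul mentions (the option name exists in Nautilus-era releases; raising it silences the warning without shrinking the object):

```
# current per-object key count that triggers the "large omap objects" warning
ceph config get osd osd_deep_scrub_large_omap_object_key_threshold
# e.g. restore the old 2M-key limit (hides the symptom rather than fixing it)
ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 2000000
```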
-----Original Message-----
From: Wido den Hollander [mailto:w...@42on.com]
Sent: Friday, November 15, 2019 1:56 AM
To: Dominic Hilsbos; ceph-users@lists.ceph.com
Cc: Stephen Self
Subject: Re: [ceph-users] Large OMAP Object

Did you check /var/log/ceph/ceph.log on one of the Monitors to see which pool and Object the large Object is in?

Wido
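A minimal sketch of that check, assuming the default log location on a Monitor host:

```
# the cluster log names the pool and object that tripped the warning
grep -i 'large omap object' /var/log/ceph/ceph.log
```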
Hi,

This probably comes from your RGW, which is a big consumer/producer of OMAP for bucket indexes.

Have a look at this previous post and just adapt the pool name to match the one where it's detected: https://www.spinics.net/lists/ceph-users/msg51681.html
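The gist of that approach, as a sketch (the pool name is an example; substitute the pool named in your warning):

```
# rank objects in a pool by OMAP key count; the largest appear last
for obj in $(rados -p default.rgw.buckets.index ls); do
  printf '%s %s\n' "$(rados -p default.rgw.buckets.index listomapkeys "$obj" | wc -l)" "$obj"
done | sort -n | tail
```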
Regards
JC
All;

We had a warning about a large OMAP object pop up in one of our clusters overnight. The cluster is configured for CephFS, but nothing mounts a CephFS at this time. The cluster mostly uses RGW. I've checked the cluster log, the MON log, and the MGR log on one of the mons, with no useful information.
Hi Wido,
Interleaving below
On 6/11/19 3:10 AM, Wido den Hollander wrote:
>
> I thought it was resolved, but it isn't.
>
> I counted all the OMAP values for the GC objects and I got back:
>
> gc.0: 0
> gc.11: 0
> gc.14: 0
> gc.15: 0
> gc.16: 0
> gc.18: 0
> gc.19: 0
> gc.1: 0
> gc.20: 0
> ...
On 6/4/19 7:37 AM, Wido den Hollander wrote:
> I've set up a temporary machine next to the 13.2.5 cluster with the
> 13.2.6 packages from Shaman.
>
> On that machine I'm running:
>
> $ radosgw-admin gc process
>
> That seems to work as intended! So the PR seems to have fixed it.
>
> Should be f...
Hi Wido,
When you run `radosgw-admin gc list`, I assume you are *not* using the
"--include-all" flag, right? If you're not using that flag, then
everything listed should be expired and be ready for clean-up. If after
running `radosgw-admin gc process` the same entries appear in
`radosgw-admin gc list`, ...
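Illustrative commands for the checks Eric describes (the flag and subcommands are as named in his message; output varies by cluster):

```
# default: only expired entries, i.e. what is ready for clean-up
radosgw-admin gc list | head
# everything, including entries whose grace period has not yet elapsed
radosgw-admin gc list --include-all | head
# run garbage collection manually
radosgw-admin gc process
```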
Hi,

I've got a Ceph cluster with this status:

    health: HEALTH_WARN
            3 large omap objects

After looking into it I see that the issue comes from objects in the '.rgw.gc' pool.

Investigating it I found that the gc.* objects have a lot of OMAP keys:

for OBJ in $(rados -p .rgw.gc ls); do
    # listomapkeys prints one key per line, so wc -l gives "gc.N: <count>"
    echo -n "$OBJ: "
    rados -p .rgw.gc listomapkeys "$OBJ" | wc -l
done
Thanks Casey. This helped me understand the purpose of this pool. I trimmed the usage logs, which reduced the number of keys stored in that index significantly, and I may even disable the usage log entirely, as I don't believe we use it for anything.
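A hedged sketch of that trimming step (the date is a placeholder; check `radosgw-admin usage trim --help` for the exact flags your release expects):

```
# drop usage log entries up to a cutoff date; disabling the log entirely
# is done with rgw_enable_usage_log = false in the RGW config
radosgw-admin usage trim --end-date=2019-04-30 --yes-i-really-mean-it
```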
On Fri, May 24, 2019 at 3:51 PM Casey Bodley wrote: ...
Thanks for chiming in Konstantin!

Wouldn't setting this value to 0 disable the sharding?

Reference: http://docs.ceph.com/docs/mimic/radosgw/config-ref/

rgw override bucket index max shards
Description: Represents the number of shards for the bucket index object, a value of zero indicates there is no sharding ...
> ... in the config: ```"rgw_override_bucket_index_max_shards": "8",```. Should this be increased?

Should be decreased to the default `0`, I think.

Modern Ceph releases resolve large omaps automatically via bucket dynamic resharding:

```
{
    "option": {
        "name": "rgw_dynamic_resharding",
        ...
    }
}
```
Hi there,
We have an old cluster that was built on Giant that we have maintained and
upgraded over time and are now running Mimic 13.2.5. The other day we
received a HEALTH_WARN about 1 large omap object in the pool '.usage' which
is our usage_log_pool defined in our radosgw zone.
I am trying to ...
There may be a mismatch between the auto-resharding and the omap warning code. Looks like you already have 349 shards, with 13 of them warning on size! You can increase a config value to shut that error up, but you may want to get somebody from RGW to look at how you've managed to exceed those defaults.
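One way to see which buckets and shards are over the limit (the subcommand exists in Luminous and later):

```
# reports per-bucket shard counts and fill status ("OK"/"OVER" relative
# to the configured objects-per-shard limit)
radosgw-admin bucket limit check
```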
Hello,

I am running a Ceph 13.2.0 cluster exclusively for radosgw / S3. I only have one big bucket, and the cluster is currently in a warning state:

  cluster:
    id:     d605c463-9f1c-4d91-a390-a28eedb21650
    health: HEALTH_WARN
            13 large omap objects

I tried to google it, but I wa...