Paul;

Yes, we are running a multi-site setup.

Re-sync would be acceptable at this point, as we only have 4 TiB in use right 
now.

Tearing down and reconfiguring the second site would also be acceptable, except 
that I've never been able to cleanly remove a zone from a zone group.  The only 
way I've found to remove a zone completely is to tear down the entire RADOSGW 
configuration (delete .rgw.root pool from both clusters).
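
For reference, the removal sequence I've been trying (as I understand it from
the docs; the zone / zonegroup names below are placeholders) is roughly:

    radosgw-admin zonegroup remove --rgw-zonegroup=<zonegroup> --rgw-zone=<secondary-zone>
    radosgw-admin zone delete --rgw-zone=<secondary-zone>
    radosgw-admin period update --commit

The period commit is what's supposed to push the change out to the remaining
gateways, but it has never left me with a clean configuration.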

Thank you,

Dominic L. Hilsbos, MBA 
Director – Information Technology 
Perform Air International Inc.
dhils...@performair.com 
www.PerformAir.com



-----Original Message-----
From: Paul Emmerich [mailto:paul.emmer...@croit.io] 
Sent: Tuesday, February 04, 2020 9:52 AM
To: Dominic Hilsbos
Cc: ceph-users
Subject: Re: [ceph-users] More OMAP Issues

Are you running a multi-site setup?
In that case it's best to set the default number of shards to a large enough
value *before* enabling multi-site.
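
E.g. something like this on the RGW hosts (the value is only an example, and
the section name depends on how your gateways are configured):

    [client.rgw]
    rgw_override_bucket_index_max_shards = 64

The zonegroup config also has a bucket_index_max_shards field that serves the
same purpose on a per-zonegroup basis.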

If you didn't do this: well... I think the only way is still to
completely re-sync the second site...
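
Rough sketch of that (from memory, so please check it against the Nautilus
docs before running anything; run on the secondary zone, with your primary
zone name filled in):

    radosgw-admin metadata sync init
    radosgw-admin data sync init --source-zone=<primary-zone>
    # then restart the radosgw daemons on the secondary zone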


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Tue, Feb 4, 2020 at 5:23 PM <dhils...@performair.com> wrote:
>
> All;
>
> We're back to having large OMAP object warnings regarding our RGW index
> pool.
>
> This cluster is now in production, so I can't simply dump the buckets /
> pools and hope everything works out.
>
> I did some additional research on this issue, and it looks like I need to
> (re)shard the bucket (index?).  I found information that suggests that, for
> older versions of Ceph, buckets couldn't be sharded after creation[1].  Other
> information suggests that Nautilus (which we are running) can re-shard
> dynamically, but not when multi-site replication is configured[2].
>
> This suggests that a "manual" resharding of a Nautilus cluster should be
> possible, but I can't find the commands to do it.  Has anyone done this?
> Does anyone have the commands to do it?  I can schedule downtime for the
> cluster, and take the RADOSGW instance(s) and dependent user services
> offline.
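>
> The closest I've found so far is the plain reshard command (bucket name and
> shard count below are placeholders), but nothing that says how to handle the
> multi-site side of it:
>
>     radosgw-admin bucket reshard --bucket=<bucket> --num-shards=<new-count>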
>
> [1]: https://ceph.io/geen-categorie/radosgw-big-index/
> [2]: https://docs.ceph.com/docs/master/radosgw/dynamicresharding/
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director - Information Technology
> Perform Air International Inc.
> dhils...@performair.com
> www.PerformAir.com
>
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
