So, I ended up checking all datalog shards with:
radosgw-admin data sync status --shard-id=XY --source-zone=us-east-1
and found one with a few hundred references to a bucket that had been
deleted.
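In case it's useful to anyone else, here is a minimal sketch of how I walked every shard; it assumes the default of 128 datalog shards, which may differ on your cluster:

# loop over all datalog shards and dump each one's sync status
for shard in $(seq 0 127); do
    echo "=== shard $shard ==="
    radosgw-admin data sync status --shard-id=$shard --source-zone=us-east-1
done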
I then shut down HAProxy on both ends and ran
radosgw-admin data sync init
This seemed to resolve the issue.
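For reference, the rough sequence was something like the sketch below; the systemd unit names (haproxy, ceph-radosgw.target) are assumptions that will vary by deployment:

systemctl stop haproxy                  # on both ends, to quiesce client traffic
radosgw-admin data sync init --source-zone=us-east-1
systemctl restart ceph-radosgw.target   # restart the gateways so the reinitialized sync state is picked up
systemctl start haproxy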
On 10/29/19 10:56 PM, Frank R wrote:
oldest incremental change not applied: 2019-10-22 00:24:09.0.720448s
Maybe the zone period is not the same on both sides?
k
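A quick way to check that (just a sketch; the URL and keys below are placeholders): compare the period on each side and, if they differ, pull the current period from the master zone:

# run on each side and compare the period id and epoch
radosgw-admin period get

# if they differ, pull the current period from the master zone
radosgw-admin period pull --url=http://master-rgw:8080 --access-key=<access-key> --secret=<secret>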
Hi Konstantin,
Thanks very much for your help. Things seem to be running smoothly now.
One remaining issue on the secondary side is that I see:
-
oldest incremental change not applied: 2019-10-22 00:24:09.0.720448s
-
Replication appears to be working fine when I upload files or create
buckets.
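In case anyone wants to dig further, one way to see whether that stale marker hides real failures is the sync error log; a minimal sketch:

# list logged sync errors (empty output suggests the marker is just stale)
radosgw-admin sync error list

# once inspected, old entries can be cleared
radosgw-admin sync error trim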
On 10/27/19 6:01 AM, Frank R wrote:
I hate to be a pain but I have one more question.
After I run
radosgw-admin reshard stale-instances rm
if I run
radosgw-admin reshard stale-instances list
some new entries appear for a bucket that no longer exists. Is there a
way to cancel the operation?
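For the archives, the stale instances can also be inspected and removed one at a time via the metadata commands; <bucket> and <instance-id> below are placeholders:

# list all bucket instance metadata keys
radosgw-admin metadata list bucket.instance

# inspect, then remove, a specific stale instance
radosgw-admin metadata get bucket.instance:<bucket>:<instance-id>
radosgw-admin metadata rm bucket.instance:<bucket>:<instance-id>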
On 10/24/19 11:00 PM, Frank R wrote:
After upgrading RGW from 12.2.7 to 12.2.12 in a multisite setup a few
days ago, "sync status" has constantly shown a few "recovering
shards", i.e.:
-
# radosgw-admin sync status
realm 8f7fd3fd-f72d-411d-b06b-7b4b579f5f2f (prod)
zonegroup ...