Even after upgrading to Reef and enabling resharding on the multi-site
cluster, the large omap objects did not go away.
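For context, this is roughly how I have been checking whether the warning has
cleared (the cluster log path below is the conventional default and may differ,
e.g. in containerized deployments):

    ceph health detail | grep -i omap
    grep 'Large omap object' /var/log/ceph/ceph.log

The second command is just to find which objects the OSDs flagged as large.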
Today I noticed in the Squid release notes two new commands:
- radosgw-admin bucket check olh [--fix]
- radosgw-admin bucket check unlinked [--fix]
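I have not run them yet; my assumption is the invocation would look roughly
like this (the bucket name is a placeholder, and the flags should be checked
against the radosgw-admin help for your version):

    radosgw-admin bucket check olh --bucket=<bucket-name>
    radosgw-admin bucket check olh --bucket=<bucket-name> --fix
    radosgw-admin bucket check unlinked --bucket=<bucket-name>
    radosgw-admin bucket check unlinked --bucket=<bucket-name> --fix

As I read it, without --fix they should only report problems rather than
change anything.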
Two questions:
1. Is there a c
> Thank you for the information, Christian. When you reshard, the bucket id is
> updated (with most recent versions of Ceph, a generation number is
> incremented). The first bucket id matches the bucket marker, but after the
> first reshard they diverge.
This makes a lot of sense and explains wh
Thank you for the information, Christian. When you reshard, the bucket id is
updated (with most recent versions of Ceph, a generation number is
incremented). The first bucket id matches the bucket marker, but after the
first reshard they diverge.
The bucket id is in the names of the currently used bucket index shard objects.
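One way to see this on a live cluster (the bucket name is a placeholder, and
the index pool below is only the default name, so adjust for your zone):

    radosgw-admin bucket stats --bucket=<bucket-name> | grep -E '"id"|"marker"'
    rados -p default.rgw.buckets.index ls | grep '^\.dir\.'

The index shard objects are named `.dir.<bucket-id>.<shard>`, so objects whose
id matches the bucket's current id back the live index, while objects carrying
an older id (or the original marker) are left over from earlier generations.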
Hi Eric,
> 1. I recommend that you *not* issue another bucket reshard until you figure
> out what’s going on.
Thanks, noted!
> 2. Which version of Ceph are you using?
17.2.5
I wanted to get the cluster to HEALTH_OK before upgrading. I didn't
see anything that led me to believe that an upgrade c
1. I recommend that you *not* issue another bucket reshard until you figure out
what’s going on.
2. Which version of Ceph are you using?
3. Can you issue a `radosgw-admin metadata get bucket:<bucket-name>` so we can
verify what the current marker is?
4. After you resharded previously, did you get command-line output indicating
success?
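For item 3, a sketch of what I mean (bucket name and id are placeholders):

    radosgw-admin metadata get bucket:<bucket-name>
    radosgw-admin metadata get bucket.instance:<bucket-name>:<bucket-id>

The first should show the bucket's current marker and bucket_id; the second
pulls the metadata for a specific instance id, which can help confirm which
instance the large index objects belong to.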