On Tue, Sep 27, 2016 at 10:19 PM, John Rowe wrote:
> Hi Orit,
>
> It appears it must have been one of the known bugs in 10.2.2. I just
> upgraded to 10.2.3 and bi-directional syncing now works.
>
Good
> I am still seeing some errors when I run synch-related commands but they
> don't seem to be affecting operations as of yet.
Hi Orit,
It appears it must have been one of the known bugs in 10.2.2. I just
upgraded to 10.2.3 and bi-directional syncing now works.
I am still seeing some errors when I run synch-related commands but they
don't seem to be affecting operations as of yet:
radosgw-admin sync status
2016-09-27 1
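For reference, the same check can be pointed at a specific zone on either side; a minimal sketch, assuming the us-dfw/us-phx zone names mentioned elsewhere in this thread and the standard --rgw-zone selector:

# on a gateway in the DFW (primary) cluster
radosgw-admin sync status --rgw-zone=us-dfw

# on a gateway in the PHX (secondary) cluster
radosgw-admin sync status --rgw-zone=us-phx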
Hi Orit,
That was my failed attempt at sanitizing :)
They are actually all identical:
Periods:
MD5 (cephrgw1-1-dfw-period.json) = 12ed481381c1f2937a27b57db0473d6d
MD5 (cephrgw1-1-phx-period.json) = 12ed481381c1f2937a27b57db0473d6d
MD5 (cephrgw1-2-dfw-period.json) = 12ed481381c1f2937a27b57db0473d6d
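For reference, dumps like these can be reproduced on each gateway and compared with a checksum; a rough sketch, reusing the filenames above (md5 is the BSD/macOS tool; on Linux it would be md5sum):

# run on each gateway, naming the dump after the host
radosgw-admin period get > cephrgw1-1-dfw-period.json

# compare checksums once the dumps are gathered in one place
md5 cephrgw1-*-period.json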
see comment below
On Mon, Sep 26, 2016 at 10:00 PM, John Rowe wrote:
> Hi Orit,
>
> Sure thing, please see below.
>
> Thanks!
>
>
> DFW (Primary)
> radosgw-admin zonegroupmap get
> {
> "zonegroups": [
> {
> "key": "235b010c-22e2-4b43-8fcc-8ae01939273e",
> "val"
Hi John,
Can you provide:
radosgw-admin zonegroupmap get on both us-dfw and us-phx?
radosgw-admin realm get and radosgw-admin period get on all the gateways?
Orit
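If it helps to gather those in one pass, something along these lines, run on every gateway, should produce the three dumps being asked for (the $(hostname)-based filenames are just a suggestion):

for cmd in zonegroupmap realm period; do
    radosgw-admin "$cmd" get > "$(hostname)-${cmd}.json"
done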
On Thu, Sep 22, 2016 at 4:37 PM, John Rowe wrote:
Hello Orit, thanks.
I will do all 6 just in case. Also, as an FYI, I originally had all 6 as
endpoints (3 in each zone) but have it down to just the two "1" servers
talking to each other until I can get it working. Eventually I would like
to have all 6 cross connecting again.
*rgw-primary-1:*
radosg
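For what it's worth, widening things back out to all six gateways later is normally a matter of listing them all as zone endpoints and committing a new period; a rough sketch, where only us-dfw/us-phx and rgw-primary-1 appear in this thread and the other hostnames/ports are placeholders:

# primary zone: list every DFW gateway as an endpoint
radosgw-admin zone modify --rgw-zone=us-dfw \
    --endpoints=http://rgw-primary-1:80,http://rgw-primary-2:80,http://rgw-primary-3:80

# secondary zone: list every PHX gateway as an endpoint
radosgw-admin zone modify --rgw-zone=us-phx \
    --endpoints=http://rgw-secondary-1:80,http://rgw-secondary-2:80,http://rgw-secondary-3:80

# push the change out to the realm
radosgw-admin period update --commit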
Hi John,
Can you provide your zonegroup and zones configurations on all 3 rgw?
(run the commands on each rgw)
Thanks,
Orit
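For reference, the dumps being asked for here would presumably just be the plain gets, run on each gateway in both sites:

# prints the zonegroup/zone that the local gateway is configured for
radosgw-admin zonegroup get
radosgw-admin zone get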
On Wed, Sep 21, 2016 at 11:14 PM, John Rowe wrote:
Hello,
We have 2 Ceph clusters running in two separate data centers, each one with
3 mons, 3 rgws, and 5 osds. I am attempting to get bi-directional
multi-site replication set up as described in the Ceph documentation here:
http://docs.ceph.com/docs/jewel/radosgw/multisite/
We are running Jewel v10.2.2.
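For anyone finding this thread later, the overall shape of the configuration that page walks through is roughly the following; us-dfw/us-phx and rgw-primary-1 appear in this thread, while the realm/zonegroup names, the secondary hostname, and the keys are placeholders (system-user creation, ceph.conf edits, and gateway restarts omitted):

# primary (DFW) site: realm, master zonegroup, master zone
radosgw-admin realm create --rgw-realm=myrealm --default
radosgw-admin zonegroup create --rgw-zonegroup=us \
    --endpoints=http://rgw-primary-1:80 --master --default
radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-dfw \
    --endpoints=http://rgw-primary-1:80 --master --default \
    --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET_KEY
radosgw-admin period update --commit

# secondary (PHX) site: pull the realm and period, then add the second zone
radosgw-admin realm pull --url=http://rgw-primary-1:80 \
    --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET_KEY
radosgw-admin period pull --url=http://rgw-primary-1:80 \
    --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET_KEY
radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-phx \
    --endpoints=http://rgw-secondary-1:80 \
    --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET_KEY
radosgw-admin period update --commit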