Hi,

Update to 15.2.5. We hit the same issue; the release notes don't mention anything 
about multisite, but once we updated to 15.2.5 everything started working.
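
In case it helps, you can confirm that every daemon really picked up the new version 
with the standard commands:

ceph versions       # all mons/mgrs/osds/rgws should report 15.2.5
radosgw-admin -v    # on both sites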

Best regards

From: Michael Breen <michael.br...@vikingenterprise.com>
Sent: Friday, November 6, 2020 10:40 PM
To: ceph-users@ceph.io
Subject: [Suspicious newsletter] [ceph-users] Re: Multisite sync not working - 
permission denied

________________________________
Continuing my fascinating conversation with myself:
The output of  radosgw-admin sync status  indicates that only the metadata is a 
problem, i.e., the data itself is syncing, and I have confirmed that. There is 
no S3 access to the secondary, zone-b, so I could not check replication that 
way, but having created a bucket on the primary, on the secondary I did
    rados -p zone-b.rgw.buckets.data ls
and saw the bucket had been replicated.
My current suspicion is that the user problem is an effect rather than a cause 
of the metadata sync problem.
I have also discovered a setting, debug_rgw_sync, which increases the debug level 
only for the sync code, but it turned up nothing interesting: the additional output 
all seemed to relate to data rather than metadata.
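
For anyone who wants to reproduce that, something along these lines should raise it 
(the client.rgw name below is just an example; use your RGW's actual auth name):

ceph config set client.rgw debug_rgw_sync 20
# or per command, e.g.
radosgw-admin sync status --debug-rgw-sync=20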

On Fri, 6 Nov 2020 at 11:47, Michael Breen <michael.br...@vikingenterprise.com> wrote:
I forgot to mention some earlier debugging: I believe this fails not because the 
keys are wrong, but because it is looking up a user that does not exist on the 
secondary:

debug 2020-11-03T16:37:47.330+0000 7f32e9859700  5 req 60 0.003999986s :post_period error reading user info, uid=ACCESS can't authenticate
debug 2020-11-03T16:37:47.330+0000 7f32e9859700 20 req 60 0.003999986s :post_period rgw::auth::s3::LocalEngine denied with reason=-2028
debug 2020-11-03T16:37:47.330+0000 7f32e9859700 20 req 60 0.003999986s :post_period rgw::auth::s3::AWSAuthStrategy denied with reason=-2028
debug 2020-11-03T16:37:47.330+0000 7f32e9859700  5 req 60 0.003999986s :post_period Failed the auth strategy, reason=-2028
debug 2020-11-03T16:37:47.330+0000 7f32e9859700 10 failed to authorize request

src/rgw/rgw_common.h:#define ERR_INVALID_ACCESS_KEY   2028

./src/rgw/rgw_rest_s3.cc
  if (rgw_get_user_info_by_access_key(ctl->user, access_key_id, user_info) < 0) {
      ldpp_dout(dpp, 5) << "error reading user info, uid=" << access_key_id
              << " can't authenticate" << dendl;

On Fri, 6 Nov 2020 at 11:38, Michael Breen <michael.br...@vikingenterprise.com> wrote:
Hi,

radosgw-admin -v
ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)

I had multisite sync working with a previous cluster on an earlier Ceph version, but 
it isn't working now, and I can't understand why.
If anyone with an idea of a possible cause could give me a clue I would be 
grateful.
I have clusters set up using Rook, but as far as I can tell, that's not a 
factor.

On the primary cluster, I have this:

radosgw-admin zonegroup get --rgw-zonegroup zonegroup-a
{
    "id": "b115d74a-2d5f-4127-b621-0223f1e96c71",
    "name": "zonegroup-a",
    "api_name": "zonegroup-a",
    "is_master": "true",
    "endpoints": [
        "http://192.168.30.8:80";
    ],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "024687e0-1461-4f45-9149-9e571791c2b3",
    "zones": [
        {
            "id": "024687e0-1461-4f45-9149-9e571791c2b3",
            "name": "zone-a",
            "endpoints": [
                "http://192.168.30.8:80";
            ],
            "log_meta": "false",
            "log_data": "true",
            "bucket_index_max_shards": 11,
            "read_only": "false",
            "tier_type": "",
            "sync_from_all": "true",
            "sync_from": [],
            "redirect_zone": ""
        },
        {
            "id": "6ba0ee26-0155-48f9-b057-2803336f0d66",
            "name": "zone-b",
            "endpoints": [
                "http://192.168.30.108:80";
            ],
            "log_meta": "false",
            "log_data": "true",
            "bucket_index_max_shards": 11,
            "read_only": "false",
            "tier_type": "",
            "sync_from_all": "true",
            "sync_from": [],
            "redirect_zone": ""
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": [],
            "storage_classes": [
                "STANDARD"
            ]
        }
    ],
    "default_placement": "default-placement",
    "realm_id": "8c38fa05-c19d-4e30-bc98-e2bc84eccb68",
    "sync_policy": {
        "groups": []
    }
}

It's identical on the secondary (that's after a realm pull, an update of the 
zone-b endpoints, and a period commit), which I double-checked by piping the 
output to md5sum on both sides.
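
For the record, that check was just something like this, run on both clusters and 
compared:

radosgw-admin zonegroup get --rgw-zonegroup zonegroup-a | md5sum
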
The system user created on the primary is

radosgw-admin user info --uid realm-a-system-user
{
    ...
    "keys": [
        {
            "user": "realm-a-system-user",
            "access_key": "IUs+USI5IjA8WkZPRjU=",
            "secret_key": "PGRDSzRERD4lbF9AYThuLzkvW1QvL148Q147PA=="
        }
    ...
}

The zones on both sides have these keys

radosgw-admin zone get --rgw-zone zone-a
{
    ...
    "system_key": {
        "access_key": "IUs+USI5IjA8WkZPRjU=",
        "secret_key": "PGRDSzRERD4lbF9AYThuLzkvW1QvL148Q147PA=="
    },
    ...
}

radosgw-admin zone get --rgw-zonegroup zonegroup-a --rgw-zone zone-b
{
    ...
    "system_key": {
        "access_key": "IUs+USI5IjA8WkZPRjU=",
        "secret_key": "PGRDSzRERD4lbF9AYThuLzkvW1QvL148Q147PA=="
    },
    ...
}
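
(For reference, the usual way those system keys get onto a zone, and presumably what 
Rook does the equivalent of under the hood, is something like this, with the keys 
quoted just to be safe:)

radosgw-admin zone modify --rgw-zone zone-b --access-key 'IUs+USI5IjA8WkZPRjU=' --secret 'PGRDSzRERD4lbF9AYThuLzkvW1QvL148Q147PA=='
radosgw-admin period update --commit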


Yet, on the secondary:

radosgw-admin sync status
          realm 8c38fa05-c19d-4e30-bc98-e2bc84eccb68 (realm-a)
      zonegroup b115d74a-2d5f-4127-b621-0223f1e96c71 (zonegroup-a)
           zone 6ba0ee26-0155-48f9-b057-2803336f0d66 (zone-b)
  metadata sync preparing for full sync
                full sync: 64/64 shards
                full sync: 0 entries to sync
                incremental sync: 0/64 shards
                metadata is behind on 64 shards
                behind shards: 
[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63]
      data sync source: 024687e0-1461-4f45-9149-9e571791c2b3 (zone-a)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source
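
(For reference, the metadata sync can also be inspected, or re-run in the foreground, 
directly on the secondary:)

radosgw-admin metadata sync status
radosgw-admin metadata sync run      # runs in the foreground; Ctrl-C to stop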

And on the primary:

radosgw-admin sync status
          realm 8c38fa05-c19d-4e30-bc98-e2bc84eccb68 (realm-a)
      zonegroup b115d74a-2d5f-4127-b621-0223f1e96c71 (zonegroup-a)
           zone 024687e0-1461-4f45-9149-9e571791c2b3 (zone-a)
  metadata sync no sync (zone is master)
2020-11-06T10:58:46.345+0000 7fa805c201c0  0 data sync zone:6ba0ee26 ERROR: failed to fetch datalog info
      data sync source: 6ba0ee26-0155-48f9-b057-2803336f0d66 (zone-b)
                        failed to retrieve sync info: (13) Permission denied

Given that all the keys above match, that "permission denied" is a mystery to 
me, but it does accord with:

export AWS_ACCESS_KEY_ID="IUs+USI5IjA8WkZPRjU="
export AWS_SECRET_ACCESS_KEY="PGRDSzRERD4lbF9AYThuLzkvW1QvL148Q147PA=="
s3cmd ls --no-ssl --host-bucket= --host=192.168.30.8     # OK, but:
s3cmd ls --no-ssl --host-bucket= --host=192.168.30.108
# ERROR: S3 error: 403 (InvalidAccessKeyId)
# Although
curl -L http://192.168.30.108  # works: <?xml version="1.0" encoding="UTF-8 ...

192.168.30.108 is the external IP, but just to be certain I was hitting zone-b, I 
also tried this from within the cluster using its internal IP:

s3cmd ls --no-ssl --host-bucket= --host=10.41.157.115
# ERROR: S3 error: 403 (InvalidAccessKeyId)
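
(To rule out s3cmd itself, the same credentials can be tested with any other S3 
client, e.g. awscli, which picks up the same environment variables:)

aws --endpoint-url http://192.168.30.108 s3 ls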

This seems to be the reason it's not syncing, but why?
The user with those keys existed on the primary before the realm pull, consistent 
with every multisite setup procedure I have seen.

Any suggestions?
Regards,
Michael


