Hi everybody:
  I followed the Federated Configuration instructions 
(http://ceph.com/docs/master/radosgw/federated-config/) to build a single-region, 
two-zone environment, and completed all of the configuration steps successfully. 
In the final step, while the synchronization agent ('radosgw-agent -c 
region-data-sync.conf') was running, I saw it synchronizing shards of data. 
After some testing with 's3cmd', however, I found that only metadata was 
replicated to the secondary zone (verified with 'rados -p 
.us-east.rgw.buckets.index ls'); the data itself was missing.

In the log file of 'radosgw-agent' (version 1.1) I found the error message 
"unhashable type: 'list'". So I modified 
/usr/share/pyshared/radosgw_agent/worker.py to add a 'tuple' check, following 
the code in version 1.2.2. After that, a different error appeared in the log: 
'HttpError: Http error code 409 content {"Code":"BucketAlreadyExists"}'.
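For reference, the change I made to worker.py follows the pattern sketched below. This is an illustrative example of the fix, not the actual radosgw-agent code; the function and variable names are my own:

```python
# Sketch of the kind of fix: a list cannot be used as a dict key or set
# member because it is unhashable, so it is converted to a tuple first.
# Names here are illustrative, not the actual radosgw_agent/worker.py code.

def dedupe_shard_keys(keys):
    """Return unique shard keys in order, converting any list-typed key
    to a tuple so it can be stored in a set."""
    seen = set()
    unique = []
    for key in keys:
        # Without this conversion, seen.add(key) raises
        # "TypeError: unhashable type: 'list'" for list-typed keys.
        hashable = tuple(key) if isinstance(key, list) else key
        if hashable not in seen:
            seen.add(hashable)
            unique.append(key)
    return unique

print(dedupe_shard_keys([["bucket", 1], ["bucket", 1], "other"]))
```

With that conversion in place the TypeError goes away, which matches what I observed before the 409 error started appearing.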

Please give me some advice or suggestions. Any help would be much appreciated.

Package information is as follows:
OS:            Ubuntu 14.04 AMD64
Ceph:          0.80.9-0ubuntu0.14.04.2
Ceph-deploy:   1.4.0-0ubuntu1
Radosgw:       0.80.9-0ubuntu0.14.04.2
Radosgw-agent: 1.1-0ubuntu1

WD

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com