[ceph-users] about rgw region and zone

2015-04-27 Thread TERRY
Hi all: when I was configuring Federated Gateways, I got the error below: sudo radosgw-agent -c /etc/ceph/ceph-data-sync.conf ERROR:root:Could not retrieve region map from destination Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/radosgw_agent/cli.py", lin
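[This error usually means the agent cannot reach the destination gateway or the sync user lacks system rights. For reference, a minimal radosgw-agent configuration of the shape the command above expects looks roughly like the sketch below; the hostnames and keys are placeholders, not values from the thread.]

    # /etc/ceph/ceph-data-sync.conf -- sketch; endpoints and keys are placeholders
    src_zone: us-east
    source: http://rgw-east.example.com:80
    src_access_key: SRC_ACCESS_KEY
    src_secret_key: SRC_SECRET_KEY
    dest_zone: us-west
    destination: http://rgw-west.example.com:80
    dest_access_key: DEST_ACCESS_KEY
    dest_secret_key: DEST_SECRET_KEY
    log_file: /var/log/radosgw/radosgw-sync.log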

[ceph-users] about rgw region sync

2015-04-29 Thread TERRY
Hi: I am using the following script to set up my cluster. I upgraded my radosgw-agent from version 1.2.0 to 1.2.2-1 (1.2.0 results in an error!) cat repeat.sh #!/bin/bash set -e set -x #1 create pools sudo ./create_pools.sh #2 create a keyring sudo ceph-authtool --create-keyring /etc/
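[The snippet cuts off at the keyring step. The usual sequence from the federated-gateway docs looks roughly like the sketch below; the instance name client.radosgw.us-east-1 is an assumption based on the pool names used later in the thread.]

    # create and open up the gateway keyring
    sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
    sudo chmod +r /etc/ceph/ceph.client.radosgw.keyring
    # generate a key for the gateway instance and grant it caps
    sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.us-east-1 --gen-key
    sudo ceph-authtool -n client.radosgw.us-east-1 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
    # register the key with the cluster
    sudo ceph auth add client.radosgw.us-east-1 -i /etc/ceph/ceph.client.radosgw.keyring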

[ceph-users] Re: about rgw region and zone

2015-04-29 Thread TERRY
er 25 INFO:radosgw_agent.sync:26/64 items processed INFO:radosgw_agent.worker:finished processing shard 25 INFO:radosgw_agent.worker:32078 is processing shard number 26 INFO:radosgw_agent.sync:27/64 items processed INFO:radosgw_agent.worker:finished processing shard 26 INFO:radosgw_agent.worker:320
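[For context, shard-by-shard progress lines like these come from running the sync agent; an invocation of the shape below produces them. The config path is taken from the earlier message; the --metadata-only variant is an option documented for the agent.]

    # full sync of metadata and data
    radosgw-agent -c /etc/ceph/ceph-data-sync.conf
    # metadata only, if you just want users and buckets replicated
    radosgw-agent -c /etc/ceph/ceph-data-sync.conf --metadata-only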

[ceph-users] Re: about rgw region and zone

2015-04-29 Thread TERRY
essing shard number 28 INFO:radosgw_agent.sync:29/64 items processed INFO:radosgw_agent.worker:finished processing shard 28 INFO:radosgw_agent.worker:32078 is processing shard number 29 INFO:radosgw_agent.sync:30/64 items processed INFO:radosgw_agent.worker:finished processing shard 29 INFO:rados

[ceph-users] Re: about rgw region and zone

2015-04-29 Thread TERRY
- From: "Karan Singh"; Sent: 28 Apr 2015 (Tue) 3:02; To: "TERRY"<316828...@qq.com>; Cc: "ceph-users"; Subject: Re: [ceph-users] about rgw region and zone Hi On 28 Apr 2015, at 07:12, TERRY <316828...@

[ceph-users] about rgw region and zone

2015-05-04 Thread TERRY
Hi all: when I was configuring Federated Gateways, I got the error below: sudo radosgw-agent -c /etc/ceph/ceph-data-sync.conf ERROR:root:Could not retrieve region map from destination Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/radosgw_agent/cli.py", lin

[ceph-users] Re: about rgw region sync

2015-05-06 Thread TERRY
) 1:50; To: "TERRY"<316828...@qq.com>; Subject: Re: [ceph-users] about rgw region sync Hello, a small question: in step 5, how did you generate the access_key and secret keys? Did you use a tool or a command to generate them? - Karan - On 30 Apr 201
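[For reference, system-user keys for zone sync are normally produced with radosgw-admin, either letting it generate the pair or passing keys in explicitly. The uid and display name below are illustrative, not taken from the thread.]

    # let radosgw-admin generate the access/secret pair
    radosgw-admin user create --uid="us-east" --display-name="Region-US Zone-East" --system
    # or supply pre-chosen keys explicitly
    radosgw-admin user create --uid="us-east" --display-name="Region-US Zone-East" \
        --access-key=MY_ACCESS_KEY --secret=MY_SECRET_KEY --system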

[ceph-users] Re: about rgw region sync

2015-05-08 Thread TERRY
t "test" ERROR:radosgw_agent.worker:failed to sync object test/self.env: state is error INFO:radosgw_agent.worker:finished processing shard 69 on the second cluster ??i do the follow steps?? 1??source self.env the content of self.env is : cat self.env export ST_AUTH="http://10.18

[ceph-users] Re: about rgw region sync

2015-05-10 Thread TERRY
t "test" ERROR:radosgw_agent.worker:failed to sync object test/self.env: state is error INFO:radosgw_agent.worker:finished processing shard 69 on the second cluster ??i do the follow steps?? 1??source self.env the content of self.env is : cat self.env export ST_AUTH="http://10.18

[ceph-users] about rgw region sync

2015-05-12 Thread TERRY
I built two Ceph clusters. For the first cluster, I did the following steps: 1. create pools sudo ceph osd pool create .us-east.rgw.root 64 64 sudo ceph osd pool create .us-east.rgw.control 64 64 sudo ceph osd pool create .us-east.rgw.gc 64 64 sudo ceph osd pool create .us-east.rgw.buckets 64 64 sudo c
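[The pool list is cut off. The federated-gateway docs enumerate a fuller set of zone pools; a sketch of the remainder follows, keeping the thread's pg count of 64. The exact pool names beyond those shown above are an assumption modeled on the docs.]

    sudo ceph osd pool create .us-east.rgw.buckets.index 64 64
    sudo ceph osd pool create .us-east.log 64 64
    sudo ceph osd pool create .us-east.intent-log 64 64
    sudo ceph osd pool create .us-east.usage 64 64
    sudo ceph osd pool create .us-east.users 64 64
    sudo ceph osd pool create .us-east.users.email 64 64
    sudo ceph osd pool create .us-east.users.swift 64 64
    sudo ceph osd pool create .us-east.users.uid 64 64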

[ceph-users] about rgw region sync

2015-05-12 Thread TERRY
Could I build one region using two clusters, each cluster having one zone, so that I can sync metadata and data from one cluster to the other? I built two Ceph clusters. For the first cluster, I did the following steps: 1. create pools sudo ceph osd pool create .us-east.rgw.root 64 64 sudo ceph
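[That is the standard federated layout: one region, two zones, one zone per cluster. A region map declaring it looks roughly like the sketch below and is loaded with radosgw-admin region set; the endpoints are placeholders.]

    # us.json -- sketch of a one-region / two-zone map; endpoints are placeholders
    { "name": "us",
      "api_name": "us",
      "is_master": "true",
      "endpoints": ["http://rgw-east.example.com:80/"],
      "master_zone": "us-east",
      "zones": [
        { "name": "us-east", "endpoints": ["http://rgw-east.example.com:80/"], "log_meta": "true", "log_data": "true" },
        { "name": "us-west", "endpoints": ["http://rgw-west.example.com:80/"], "log_meta": "true", "log_data": "true" } ],
      "placement_targets": [ { "name": "default-placement", "tags": [] } ],
      "default_placement": "default-placement" }

    # load it on each cluster, set it as default, then update the region map
    radosgw-admin region set --infile us.json --name client.radosgw.us-east-1
    radosgw-admin region default --rgw-region=us --name client.radosgw.us-east-1
    radosgw-admin regionmap update --name client.radosgw.us-east-1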

[ceph-users] Re: Re: about rgw region sync

2015-05-13 Thread TERRY
t work. It's more complicated to set up, though. Anything outside of a test setup would need a lot of planning to make sure the two zones are as fault-isolated as possible. I'm pretty sure you need separate RadosGW nodes for each zone. It could be possible to share, but it will be easier
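[In practice each zone gets its own gateway instance with its own ceph.conf section; a sketch of the shape follows. The hostname, DNS name, and keyring path are assumptions, not values from the thread.]

    [client.radosgw.us-east-1]
    rgw region = us
    rgw zone = us-east
    rgw zone root pool = .us-east.rgw.root
    keyring = /etc/ceph/ceph.client.radosgw.keyring
    rgw dns name = rgw-east.example.com
    rgw socket path = /var/run/ceph/client.radosgw.us-east-1.sock
    host = rgw-east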