[ceph-users] about rgw region and zone
Hi all,

When configuring federated gateways I got the error below:

    sudo radosgw-agent -c /etc/ceph/ceph-data-sync.conf
    ERROR:root:Could not retrieve region map from destination
    Traceback (most recent call last):
      File "/usr/lib/python2.6/site-packages/radosgw_agent/cli.py", line 269, in main
        region_map = client.get_region_map(dest_conn)
      File "/usr/lib/python2.6/site-packages/radosgw_agent/client.py", line 391, in get_region_map
        region_map = request(connection, 'get', 'admin/config')
      File "/usr/lib/python2.6/site-packages/radosgw_agent/client.py", line 155, in request
        check_result_status(result)
      File "/usr/lib/python2.6/site-packages/radosgw_agent/client.py", line 116, in check_result_status
        HttpError)(result.status_code, result.content)
    NotFound: Http error code 404 content {"Code":"NoSuchKey"}

I have some questions.

1. When I execute the command

       radosgw-admin zone set --rgw-zone=us-west --infile us-west.json --name client.radosgw.us-west-1

   I have no idea about the option --name. What is the difference if I run it without --name?

2. "Create a Region": there is a note near the end of the doc: "If you use different Ceph Storage Cluster instances for regions, you should repeat steps 2, 4 and 5 by executing them with --name client.radosgw-us-west-1. You may also export the region map from the initial gateway instance and import it, followed by updating the region map."

   I have one cluster named ceph, one region named us, and two zones, us-east and us-west; us-east is the master zone. I have two gateway instances, client.radosgw.us-east-1 and client.radosgw.us-west-1. Do I need to repeat steps 2, 4 and 5? Do I need to export the region map from the initial gateway instance and import it?

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
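On the --name question: radosgw-admin runs as a cephx client, and --name selects which client identity, and therefore which [client.…] section of ceph.conf (with its own rgw region/zone settings), the command uses; without --name it runs as client.admin and picks up the defaults. A throwaway illustration of that section lookup; the config contents below are hypothetical, modeled on this thread's setup:

```shell
# Illustrative only: show which ceph.conf section a given --name selects.
# The file contents are hypothetical, modeled on the thread's two instances.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[client.radosgw.us-east-1]
rgw region = us
rgw zone = us-east

[client.radosgw.us-west-1]
rgw region = us
rgw zone = us-west
EOF
name=client.radosgw.us-west-1
# Print the section that `radosgw-admin --name $name` would read its
# rgw region/zone settings from.
section=$(awk -v s="[$name]" '$0==s{f=1} /^\[/ && $0!=s{f=0} f' "$conf")
echo "$section"
rm -f "$conf"
```

Running the same command with a different --name (or none at all) would resolve to a different section, which is why zone/region commands in the federated setup are repeated once per instance name.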
[ceph-users] about rgw region sync
hi:

I am using the following script to set up my cluster. I upgraded my radosgw-agent from version 1.2.0 to 1.2.2-1 (1.2.0 results in an error!).

cat repeat.sh

    #!/bin/bash
    set -e
    set -x

    #1 create pools
    sudo ./create_pools.sh

    #2 create a keyring
    sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
    sudo chmod +r /etc/ceph/ceph.client.radosgw.keyring
    sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.us-east-1 --gen-key
    sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.us-west-1 --gen-key
    sudo ceph-authtool -n client.radosgw.us-east-1 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
    sudo ceph-authtool -n client.radosgw.us-west-1 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
    sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth del client.radosgw.us-east-1
    sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth del client.radosgw.us-west-1
    sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.us-east-1 -i /etc/ceph/ceph.client.radosgw.keyring
    sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.us-west-1 -i /etc/ceph/ceph.client.radosgw.keyring

    #3 create a region
    sudo radosgw-admin region set --infile us.json --name client.radosgw.us-east-1
    set +e
    sudo rados -p .us.rgw.root rm region_info.default
    set -e
    sudo radosgw-admin region default --rgw-region=us --name client.radosgw.us-east-1
    sudo radosgw-admin regionmap update --name client.radosgw.us-east-1
    # try: don't do it
    sudo radosgw-admin region set --infile us.json --name client.radosgw.us-west-1
    set +e
    sudo rados -p .us.rgw.root rm region_info.default
    set -e
    sudo radosgw-admin region default --rgw-region=us --name client.radosgw.us-west-1
    sudo radosgw-admin regionmap update --name client.radosgw.us-west-1

    #4 create zones
    # try: change us-east-no-secert.json file contents
    sudo radosgw-admin zone set --rgw-zone=us-east --infile us-east-no-secert.json --name client.radosgw.us-east-1
    sudo radosgw-admin zone set --rgw-zone=us-east --infile us-east-no-secert.json --name client.radosgw.us-west-1
    sudo radosgw-admin zone set --rgw-zone=us-west --infile us-west-no-secert.json --name client.radosgw.us-east-1
    sudo radosgw-admin zone set --rgw-zone=us-west --infile us-west-no-secert.json --name client.radosgw.us-west-1
    set +e
    sudo rados -p .rgw.root rm zone_info.default
    set -e
    sudo radosgw-admin regionmap update --name client.radosgw.us-east-1
    # try: don't do it
    sudo radosgw-admin regionmap update --name client.radosgw.us-west-1

    #5 create zone users (system users)
    sudo radosgw-admin user create --uid="us-east" --display-name="Region-US Zone-East" --name client.radosgw.us-east-1 --access_key="XNK0ST8WXTMWZGN29NF9" --secret="7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5" --system
    sudo radosgw-admin user create --uid="us-west" --display-name="Region-US Zone-West" --name client.radosgw.us-west-1 --access_key="AAK0ST8WXTMWZGN29NF9" --secret="AAJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5" --system
    sudo radosgw-admin user create --uid="us-east" --display-name="Region-US Zone-East" --name client.radosgw.us-west-1 --access_key="XNK0ST8WXTMWZGN29NF9" --secret="7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5" --system
    sudo radosgw-admin user create --uid="us-west" --display-name="Region-US Zone-West" --name client.radosgw.us-east-1 --access_key="AAK0ST8WXTMWZGN29NF9" --secret="AAJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5" --system

    #6 subuser create
    # maybe create a user without --system?
    sudo radosgw-admin subuser create --uid="us-east" --subuser="us-east:swift" --access=full --name client.radosgw.us-east-1 --key-type swift --secret="7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"
    sudo radosgw-admin subuser create --uid="us-west" --subuser="us-west:swift" --access=full --name client.radosgw.us-west-1 --key-type swift --secret="BBJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"
    sudo radosgw-admin subuser create --uid="us-east" --subuser="us-east:swift" --access=full --name client.radosgw.us-west-1 --key-type swift --secret="7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"
    sudo radosgw-admin subuser create --uid="us-west" --subuser="us-west:swift" --access=full --name client.radosgw.us-east-1 --key-type swift --secret="BBJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"

    #5.5 create zone users (non-system users)
    sudo radosgw-admin user create --uid="us-test-east" --display-name="Region-US Zone-East-test" --name client.radosgw.us-east-1 --access_key="DDK0ST8WXTMWZGN29NF9" --secret="DDJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"
    sudo radosgw-admin user create --uid="us-test-west" --display-name="Region-US Zone-West-test" --name client.radosgw.us-west-1 --access_key="CCK0ST8WXTMWZGN29NF9" --secret="CCJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"
    sudo radosgw-admin user create --uid="us-test-east" --display-name="Region-US Zone-East-test" --name client.radosgw.us-west-1 --access_key="DDK0ST8WXTMWZGN29NF9" --secret="
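For reference, the us.json region file fed to `radosgw-admin region set` in step 3 usually looks roughly like the sketch below; the endpoint URLs and log flags here are assumptions, not taken from the thread. The here-doc just round-trips the file through a JSON validator before it would be fed to radosgw-admin:

```shell
# Hypothetical us.json for the thread's one-region/two-zone layout;
# endpoint URLs are placeholders. Validate the JSON before running
# `radosgw-admin region set --infile us.json`.
cat > us.json <<'EOF'
{
  "name": "us",
  "api_name": "us",
  "is_master": "true",
  "endpoints": ["http://us-east.example.com:80/"],
  "master_zone": "us-east",
  "zones": [
    {"name": "us-east", "endpoints": ["http://us-east.example.com:80/"], "log_meta": "true", "log_data": "true"},
    {"name": "us-west", "endpoints": ["http://us-west.example.com:80/"], "log_meta": "true", "log_data": "true"}
  ],
  "placement_targets": [{"name": "default-placement", "tags": []}],
  "default_placement": "default-placement"
}
EOF
python3 -m json.tool < us.json > /dev/null && echo "us.json: valid JSON"
```

The endpoints are what the sync agent ultimately contacts, so a typo here is one common cause of the "Could not retrieve region map from destination" failure discussed earlier in the thread.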
[ceph-users] Re: Re: about rgw region and zone
    INFO:radosgw_agent.worker:32078 is processing shard number 18
    INFO:radosgw_agent.sync:19/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 18
    INFO:radosgw_agent.worker:32078 is processing shard number 19
    INFO:radosgw_agent.sync:20/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 19
    INFO:radosgw_agent.worker:32078 is processing shard number 20
    INFO:radosgw_agent.sync:21/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 20
    INFO:radosgw_agent.worker:32078 is processing shard number 21
    INFO:radosgw_agent.sync:22/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 21
    INFO:radosgw_agent.worker:32078 is processing shard number 22
    INFO:radosgw_agent.sync:23/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 22
    INFO:radosgw_agent.worker:32078 is processing shard number 23
    INFO:radosgw_agent.sync:24/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 23
    INFO:radosgw_agent.worker:32078 is processing shard number 24
    INFO:radosgw_agent.sync:25/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 24
    INFO:radosgw_agent.worker:32078 is processing shard number 25
    INFO:radosgw_agent.sync:26/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 25
    INFO:radosgw_agent.worker:32078 is processing shard number 26
    INFO:radosgw_agent.sync:27/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 26
    INFO:radosgw_agent.worker:32078 is processing shard number 27
    INFO:radosgw_agent.sync:28/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 27
    INFO:radosgw_agent.worker:32078 is processing shard number 28
    INFO:radosgw_agent.sync:29/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 28
    INFO:radosgw_agent.worker:32078 is processing shard number 29
    INFO:radosgw_agent.sync:30/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 29
    INFO:radosgw_agent.worker:32078 is processing shard number 30
    INFO:radosgw_agent.sync:31/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 30
    INFO:radosgw_agent.worker:32078 is processing shard number 31
    ERROR:radosgw_agent.worker:failed to sync object my-container/us.json: state is error

I already created the new users us-test-east and us-test-west, and the new subusers us-test-east:swift and us-test-west:swift. I don't know how to create a bucket; could you give an example? Thanks.

------------------ Original Message ------------------
From: "Karan Singh"
Date: 2015-04-29 2:23
To: "TERRY" <316828...@qq.com>
Cc: "ceph-users"
Subject: Re: [ceph-users] about rgw region and zone

You should try to create a new user without the --system option, so basically create a normal user, then create some buckets and objects, and finally try to resync the cluster.

Karan Singh
Systems Specialist, Storage Platforms
CSC - IT Center for Science
Keilaranta 14, P. O. Box 405, FIN-02101 Espoo, Finland
mobile: +358 503 812758
tel. +358 9 4572001
fax +358 9 4572302
http://www.csc.fi/

On 28 Apr 2015, at 21:22, Karan Singh wrote:

> You should try to create a new user without the --system option, so basically create a normal user, then create some buckets and objects, and finally try to resync the cluster.
> -Karan-

On 28 Apr 2015, at 10:32, TERRY <316828...@qq.com> wrote:

Hi, Karan Singh. First of all, thank you so much for replying and giving your precious time to this problem! I tried to repeat steps 2, 4 and 5 by executing them with --name client.radosgw-us-west-1, and the case has progressed a lot (below are some of the logs). I am now getting:

    sudo radosgw-agent -c ceph-data-sync2.conf
    region map is: {u'us': [u'us-west', u'us-east']}
    INFO:radosgw_agent.sync:Starting sync
    INFO:radosgw_agent.worker:20585 is processing shard number 0
    INFO:radosgw_agent.worker:finished processing shard 0
    INFO:radosgw_agent.worker:20585 is processing shard number 1
    INFO:radosgw_agent.sync:1/64 items processed
    INFO:radosgw_agent.sync:2/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 1
    INFO:radosgw_agent.worker:20585 is processing shard number 2
    INFO:radosgw_agent.sync:3/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 2
    INFO:radosgw_agent.worker:20585 is processing shard number 3
    INFO:radosgw_agent.sync:4/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 3
    INFO:radosgw_agent.worker:20585 is processing shard number 4
    INFO:radosgw_agent.sync:5/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 4
    INFO:radosgw_agent.worker:20585 is processing shard number 5
    INFO:radosgw_agent.sync:6/64 items processed
    INFO:radosgw_agent.worker:finished pro
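On "how to create a bucket": with the swift subuser credentials from this thread, creating a bucket (container) and putting an object into it is a couple of commands. A sketch using the us-test-east:swift credentials and auth endpoint that appear later in the thread; the swift commands themselves are printed rather than executed here, because they need a live gateway at ST_AUTH:

```shell
# Sketch: Swift-API bucket creation with the thread's non-system subuser.
# ST_AUTH is the endpoint used elsewhere in this thread; adjust to yours.
export ST_AUTH="http://10.18.5.51/auth/1.0"
export ST_USER="us-test-east:swift"
export ST_KEY="ffJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"
echo "hello from us-east" > hello.txt
# The commands to run against the live gateway (printed, not executed):
cmds='swift post my-container
swift upload my-container hello.txt
swift list'
echo "$cmds"
```

`swift post <container>` creates the bucket, `swift upload` puts the object, and `swift list` confirms both; the S3 API equivalents (e.g. s3cmd mb/put) would work the same way against the master zone.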
[ceph-users] Re: Re: about rgw region and zone
------------------ Original Message ------------------
From: "316828252" <316828...@qq.com>
Date: 2015-04-28 3:32
To: "Karan Singh"
Subject: Re: [ceph-users] about rgw region and zone

Hi, Karan Singh. First of all, thank you so much for replying and giving your precious time to this problem! I tried to repeat steps 2, 4 and 5 by executing them with --name client.radosgw-us-west-1, and the case has progressed a lot (below are some of the logs). I am now getting:

    sudo radosgw-agent -c ceph-data-sync2.conf
    region map is: {u'us': [u'us-west', u'us-east']}
    INFO:radosgw_agent.sync:Starting sync
    INFO:radosgw_agent.worker:20585 is processing shard number 0
    INFO:radosgw_agent.worker:finished processing shard 0
    INFO:radosgw_agent.worker:20585 is processing shard number 1
    INFO:radosgw_agent.sync:1/64 items processed
    INFO:radosgw_agent.sync:2/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 1
    INFO:radosgw_agent.worker:20585 is processing shard number 2
    INFO:radosgw_agent.sync:3/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 2
    INFO:radosgw_agent.worker:20585 is processing shard number 3
    INFO:radosgw_agent.sync:4/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 3
    INFO:radosgw_agent.worker:20585 is processing shard number 4
    INFO:radosgw_agent.sync:5/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 4
    INFO:radosgw_agent.worker:20585 is processing shard number 5
    INFO:radosgw_agent.sync:6/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 5
    INFO:radosgw_agent.worker:20585 is processing shard number 6
    INFO:radosgw_agent.sync:7/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 6
    INFO:radosgw_agent.worker:20585 is processing shard number 7
    INFO:radosgw_agent.sync:8/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 7
    INFO:radosgw_agent.worker:20585 is processing shard number 8
    INFO:radosgw_agent.sync:9/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 8
    INFO:radosgw_agent.worker:20585 is processing shard number 9
    INFO:radosgw_agent.sync:10/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 9
    INFO:radosgw_agent.worker:20585 is processing shard number 10
    INFO:radosgw_agent.sync:11/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 10
    INFO:radosgw_agent.worker:20585 is processing shard number 11
    INFO:radosgw_agent.sync:12/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 11
    INFO:radosgw_agent.worker:20585 is processing shard number 12
    INFO:radosgw_agent.sync:13/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 12
    INFO:radosgw_agent.worker:20585 is processing shard number 13
    INFO:radosgw_agent.sync:14/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 13
    INFO:radosgw_agent.worker:20585 is processing shard number 14
    INFO:radosgw_agent.sync:15/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 14
    INFO:radosgw_agent.worker:20585 is processing shard number 15
    INFO:radosgw_agent.sync:16/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 15
    INFO:radosgw_agent.worker:20585 is processing shard number 16
    INFO:radosgw_agent.sync:17/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 16
    INFO:radosgw_agent.worker:20585 is processing shard number 17
    ...
    INFO:radosgw_agent.worker:syncing bucket "my_container"
    ERROR:radosgw_agent.worker:failed to sync object my_container/us.json: state is error

FYI: my_container was created by me in the master zone, and us.json is an object in my_container. I want to sync the object us.json to the us-west zone.

------------------ Original Message ------------------
From: "Karan Singh"
Date: 2015-04-28 3:02
To: "TERRY" <316828...@qq.com>
Cc: "ceph-users"
Subject: Re: [ceph-users] about rgw region and zone

Hi

On 28 Apr 2015, at 07:12, TERRY <316828...@qq.com> wrote:

> Hi all, when configuring federated gateways I got the error below:
>
>     sudo radosgw-agent -c /etc/ceph/ceph-data-sync.conf
>     ERROR:root:Could not retrieve region map from destination

You should check that the region map is correct, especially the endpoints. Make sure a firewall is not blocking the way between the RGW instances.

>     Traceback (most recent call last):
>       File "/usr/lib/python2.6/site-packages/radosgw_agent/cli.py", line 269, in main
>         region_map = client.get_region_map(dest_conn)
>       File "/usr/lib/python2.6/site-packages/radosgw_agent/client.py", line 391, in get_region_map
>         region_map = request(connection, 'get', 'admin/config')
>       File "/usr/lib/python2.6/site-packages/radosgw_agent/client.py"
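Per the traceback, the region-map fetch that fails with 404 is just an HTTP GET of /admin/config on the destination gateway, so Karan's endpoint check can be reproduced with curl. A self-contained illustration below, with a local python http.server standing in for the gateway; the host, port, and region-map content are made up:

```shell
# Sketch: reproduce the agent's region-map fetch with curl.
# Against a real setup you would run: curl -s http://<dest-gateway>/admin/config
# Here a throwaway local web server serves a fake region map at that path.
tmp=$(mktemp -d)
mkdir -p "$tmp/admin"
echo '{"regions": [{"key": "us", "val": {"name": "us"}}]}' > "$tmp/admin/config"
(cd "$tmp" && exec python3 -m http.server 8799) >/dev/null 2>&1 &
srv=$!
resp=""
for i in 1 2 3 4 5; do
  resp=$(curl -s http://127.0.0.1:8799/admin/config) || true
  [ -n "$resp" ] && break || sleep 1
done
kill "$srv" 2>/dev/null
echo "$resp"
```

If the real gateway returns a 404 with {"Code":"NoSuchKey"} here, the endpoint is reachable but that instance has no region map to serve (which is what the regionmap update steps are meant to fix); a connection timeout instead points at DNS or a firewall.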
[ceph-users] Re: about rgw region sync
My access_key and secret keys were generated beforehand with the radosgw-admin tool, using the --gen-secret and --gen-access-key options. I wrote the keys down and assigned them in step 5.

------------------ Original Message ------------------
From: "Karan Singh"
Date: 2015-05-04 1:50
To: "TERRY" <316828...@qq.com>
Subject: Re: [ceph-users] about rgw region sync

Hello, a small question: in step 5, how did you generate the access_key and secret keys? Did you use any tool or command to generate them?

- Karan -

On 30 Apr 2015, at 08:27, TERRY <316828...@qq.com> wrote:

> hi: I am using the following script to set up my cluster. ...
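The generated keys have the shape seen throughout this thread: a 20-character uppercase alphanumeric access key (e.g. XNK0ST8WXTMWZGN29NF9) and a 40-character secret. A rough shell stand-in for the --gen-access-key/--gen-secret options; the exact alphabet radosgw-admin draws from is an assumption:

```shell
# Rough stand-in for radosgw-admin's --gen-access-key / --gen-secret:
# a 20-char uppercase-alphanumeric access key and a 40-char base64 secret,
# matching the shape of the keys used in this thread's script.
access_key=$(LC_ALL=C tr -dc 'A-Z0-9' < /dev/urandom | head -c 20)
secret_key=$(head -c 30 /dev/urandom | base64 | tr -d '=\n' | head -c 40)
echo "access_key: $access_key"
echo "secret_key: $secret_key"
```

Hand-assigning fixed keys in the script (as done in step 5) works too; what matters for sync is that the system users in both zones carry the same access/secret pair, which the script does ensure.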
[ceph-users] Re: about rgw region sync
cat ceph-data-sync.conf

    src_access_key: XNK0ST8WXTMWZGN29NF9
    src_secret_key: 7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5
    destination: http://WH-CEPH-TEST02.MATRIX.CTRIPCORP.COM
    dest_access_key: XNK0ST8WXTMWZGN29NF9
    dest_secret_key: 7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5
    log_file: /var/log/radosgw/radosgw-sync-us-east-west.log

There are some errors, as below:

    sudo radosgw-agent -c ./ceph-data-sync.conf
    region map is: {u'us': [u'us-west', u'us-east']}
    INFO:radosgw_agent.sync:Starting sync
    INFO:radosgw_agent.worker:24062 is processing shard number 0
    INFO:radosgw_agent.worker:finished processing shard 0
    INFO:radosgw_agent.worker:24062 is processing shard number 1
    INFO:radosgw_agent.sync:1/64 items processed
    INFO:radosgw_agent.worker:finished processing shard 1
    INFO:radosgw_agent.sync:2/64 items processed
    INFO:radosgw_agent.worker:24062 is processing shard number 2
    INFO:radosgw_agent.worker:finished processing shard 2
    INFO:radosgw_agent.sync:3/64 items processed
    INFO:radosgw_agent.worker:24062 is processing shard number 3
    INFO:radosgw_agent.worker:finished processing shard 3
    INFO:radosgw_agent.sync:4/64 items processed
    INFO:radosgw_agent.worker:24062 is processing shard number 4
    ...
    INFO:radosgw_agent.worker:syncing bucket "test"
    ERROR:radosgw_agent.worker:failed to sync object test/self.env: state is error
    INFO:radosgw_agent.worker:syncing bucket "test"
    ERROR:radosgw_agent.worker:failed to sync object test/self.env: state is error
    INFO:radosgw_agent.worker:finished processing shard 69

On the second cluster, I did the following steps.

1. source self.env; the content of self.env is:

       cat self.env
       export ST_AUTH="http://10.18.5.51/auth/1.0"
       export ST_KEY=ffJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5
       export ST_USER=us-test-east:swift

2. swift list

       Auth GET failed: http://10.18.5.51/auth/1.0 403 Forbidden

3. radosgw-admin --name client.radosgw.us-west-1 user info --uid="us-test-east"

       { "user_id": "us-test-east",
         "display_name": "Region-US Zone-East-test",
         "email": "",
         "suspended": 0,
         "max_buckets": 1000,
         "auid": 0,
         "subusers": [
           { "id": "us-test-east:swift",
             "permissions": "full-control"}],
         "keys": [
           { "user": "us-test-east",
             "access_key": "DDK0ST8WXTMWZGN29NF9",
             "secret_key": "DDJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"}],
         "swift_keys": [
           { "user": "us-test-east:swift",
             "secret_key": "ffJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"}],
         "caps": [],
         "op_mask": "read, write, delete",
         "default_placement": "",
         "placement_tags": [],
         "bucket_quota": { "enabled": false, "max_size_kb": -1, "max_objects": -1},
         "user_quota": { "enabled": false, "max_size_kb": -1, "max_objects": -1},
         "temp_url_keys": []}

4. radosgw-admin --name client.radosgw.us-west-1 bucket list

       [ "test"]

5. radosgw-admin --name client.radosgw.us-west-1 --bucket=test bucket list

       []

It seems that metadata is replicated from the first cluster but data is not; I don't know why.

------------------ Original Message ------------------
From: "Craig Lewis"
Date: 2015-05-07 8:46
To: "TERRY" <316828...@qq.com>
Cc: "ceph-users"
Subject: Re: [ceph-users] about rgw region sync

System users are the only ones that need to be created in both zones. Non-system users (and their sub-users) should be created in the primary zone; radosgw-agent will replicate them to the secondary zone.

I didn't create sub-users for my system users, but I don't think it matters. I can read my objects from the primary and secondary zones using the same non-system user's Access and Secret. Using the S3 API, I only had to change the host name to use the DNS entries that point at the secondary cluster, e.g. http://bucket1.us-east.myceph.com/object and http://bucket1.us-west.myceph.com/object.

It's possible that adding the non-system users to the secondary zone causes replication to fail. I would verify that users, buckets, and objects are being replicated using radosgw-admin: `radosgw-admin --name $name bucket list`, `radosgw-admin --name $name user info --uid=$username`, and `radosgw-admin --name $name --bucket=$bucket bucket list`. That will let you determine whether you have a replication problem or an access problem.

On Wed, Apr 29, 2015 at 10:27 PM, TERRY <316828...@qq.com> wrote:

> hi: I am using the following script to set up my cluster. I upgrad
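Craig's three verification commands can be looped over both gateway instance names from this thread. A sketch that prints each check and only executes it when radosgw-admin is actually installed; the instance names, uid, and bucket are the ones used above:

```shell
# Sketch: run Craig's replication checks against both zones.
# Instance/user/bucket names come from this thread; adjust to your setup.
checks=""
for name in client.radosgw.us-east-1 client.radosgw.us-west-1; do
  for args in "bucket list" "user info --uid=us-test-east" "--bucket=test bucket list"; do
    checks="$checks
radosgw-admin --name $name $args"
    echo "check: radosgw-admin --name $name $args"
    if command -v radosgw-admin >/dev/null 2>&1; then
      # Each check may legitimately fail if this host does not serve $name.
      radosgw-admin --name "$name" $args || echo "  (command failed; run it on the host serving $name)"
    fi
  done
done
```

Comparing the output pairwise between the two --name values shows whether users/buckets (metadata) and bucket contents (data) made it to the secondary zone, which is exactly the metadata-yes/data-no split observed above.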
[ceph-users] ?????? about rgw region sync
ata-sync.conf src_access_key: XNK0ST8WXTMWZGN29NF9 src_secret_key: 7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5 destination: http://WH-CEPH-TEST02.MATRIX.CTRIPCORP.COM dest_access_key: XNK0ST8WXTMWZGN29NF9 dest_secret_key: 7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5 log_file: /var/log/radosgw/radosgw-sync-us-east-west.log there is some error as bellow?? sudo radosgw-agent -c ./ceph-data-sync.conf region map is: {u'us': [u'us-west', u'us-east']} INFO:radosgw_agent.sync:Starting sync INFO:radosgw_agent.worker:24062 is processing shard number 0 INFO:radosgw_agent.worker:finished processing shard 0 INFO:radosgw_agent.worker:24062 is processing shard number 1 INFO:radosgw_agent.sync:1/64 items processed INFO:radosgw_agent.worker:finished processing shard 1 INFO:radosgw_agent.sync:2/64 items processed INFO:radosgw_agent.worker:24062 is processing shard number 2 INFO:radosgw_agent.worker:finished processing shard 2 INFO:radosgw_agent.sync:3/64 items processed INFO:radosgw_agent.worker:24062 is processing shard number 3 INFO:radosgw_agent.worker:finished processing shard 3 INFO:radosgw_agent.sync:4/64 items processed INFO:radosgw_agent.worker:24062 is processing shard number 4 ... ... ... INFO:radosgw_agent.worker:syncing bucket "test" ERROR:radosgw_agent.worker:failed to sync object test/self.env: state is error INFO:radosgw_agent.worker:syncing bucket "test" ERROR:radosgw_agent.worker:failed to sync object test/self.env: state is error INFO:radosgw_agent.worker:finished processing shard 69 on the second cluster ??i do the follow steps?? 
1. source self.env; the content of self.env is:
cat self.env
export ST_AUTH="http://10.18.5.51/auth/1.0"
export ST_KEY=ffJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5
export ST_USER=us-test-east:swift

2. swift list
Auth GET failed: http://10.18.5.51/auth/1.0 403 Forbidden

3. radosgw-admin --name client.radosgw.us-west-1 user info --uid="us-test-east"
{ "user_id": "us-test-east",
  "display_name": "Region-US Zone-East-test",
  "email": "",
  "suspended": 0,
  "max_buckets": 1000,
  "auid": 0,
  "subusers": [ { "id": "us-test-east:swift", "permissions": "full-control"}],
  "keys": [ { "user": "us-test-east",
      "access_key": "DDK0ST8WXTMWZGN29NF9",
      "secret_key": "DDJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"}],
  "swift_keys": [ { "user": "us-test-east:swift",
      "secret_key": "ffJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"}],
  "caps": [],
  "op_mask": "read, write, delete",
  "default_placement": "",
  "placement_tags": [],
  "bucket_quota": { "enabled": false, "max_size_kb": -1, "max_objects": -1},
  "user_quota": { "enabled": false, "max_size_kb": -1, "max_objects": -1},
  "temp_url_keys": []}

4. radosgw-admin --name client.radosgw.us-west-1 bucket list
[ "test"]

5. radosgw-admin --name client.radosgw.us-west-1 --bucket=test bucket list
[]

It seems that metadata is replicated from the first cluster, but data is not. I don't know why.

------------------
From: "Craig Lewis"
Date: 2015-05-07 (Thu) 8:46
To: "TERRY" <316828...@qq.com>
Cc: "ceph-users"
Subject: Re: [ceph-users] about rgw region sync

System users are the only ones that need to be created in both zones. Non-system users (and their sub-users) should be created in the primary zone. radosgw-agent will replicate them to the secondary zone. I didn't create sub-users for my system users, but I don't think it matters. I can read my objects from the primary and secondary zones using the same non-system user's Access and Secret. Using the S3 API, I only had to change the host name to use the DNS entries that point at the secondary cluster,
eg http://bucket1.us-east.myceph.com/object and http://bucket1.us-west.myceph.com/object. It's possible that adding the non-system users to the secondary zone causes replication to fail. I would verify that users, buckets, and objects are being replicated using radosgw-admin. `radosgw-admin --name $name bucket list`, `radosgw-admin --name $name user info --uid=$username`, and `radosgw-admin --name $name --bucket=$bucket bucket list`. That will let you determine if you have a replication or an access problem. On Wed, Apr 29, 2015 at 10:27 PM, TERRY <316828...@qq.com> wrote: hi: I am using the following script to setup my cluster. I upgrad
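One quick check for a 403 like the one above: confirm the secret the Swift client exports (ST_KEY) matches the swift secret the gateway has stored. A minimal sketch, with the user-info JSON from this thread embedded as a sample; against a live cluster you would feed it the output of `radosgw-admin --name client.radosgw.us-west-1 user info --uid=us-test-east` instead.

```shell
# Sample of the swift_keys section as printed by radosgw-admin earlier in the thread.
cat > /tmp/user-info.json <<'EOF'
{"user_id": "us-test-east",
 "swift_keys": [{"user": "us-test-east:swift",
                 "secret_key": "ffJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"}]}
EOF

# What the Swift client exports in self.env.
ST_KEY=ffJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5

# Pull the stored secret out of the JSON and compare it to the exported one.
stored=$(python3 -c 'import json; print(json.load(open("/tmp/user-info.json"))["swift_keys"][0]["secret_key"])')
if [ "$stored" = "$ST_KEY" ]; then
  echo "swift secret matches"
else
  echo "swift secret MISMATCH: stored=$stored"
fi
```

Here the two values match, so a 403 would point at something other than a wrong key, for example the ST_AUTH URL reaching a gateway for the wrong zone.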
[ceph-users] about rgw region sync
I built two Ceph clusters. For the first cluster, I do the following steps:

1. create pools
sudo ceph osd pool create .us-east.rgw.root 64 64
sudo ceph osd pool create .us-east.rgw.control 64 64
sudo ceph osd pool create .us-east.rgw.gc 64 64
sudo ceph osd pool create .us-east.rgw.buckets 64 64
sudo ceph osd pool create .us-east.rgw.buckets.index 64 64
sudo ceph osd pool create .us-east.rgw.buckets.extra 64 64
sudo ceph osd pool create .us-east.log 64 64
sudo ceph osd pool create .us-east.intent-log 64 64
sudo ceph osd pool create .us-east.usage 64 64
sudo ceph osd pool create .us-east.users 64 64
sudo ceph osd pool create .us-east.users.email 64 64
sudo ceph osd pool create .us-east.users.swift 64 64
sudo ceph osd pool create .us-east.users.uid 64 64

2. create a keyring
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
sudo chmod +r /etc/ceph/ceph.client.radosgw.keyring
sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.us-east-1 --gen-key
sudo ceph-authtool -n client.radosgw.us-east-1 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.us-east-1 -i /etc/ceph/ceph.client.radosgw.keyring

3. create a region
sudo radosgw-admin region set --infile us.json --name client.radosgw.us-east-1
sudo radosgw-admin region default --rgw-region=us --name client.radosgw.us-east-1
sudo radosgw-admin regionmap update --name client.radosgw.us-east-1

The content of us.json:
cat us.json
{ "name": "us",
  "api_name": "us",
  "is_master": "true",
  "endpoints": [ "http:\/\/WH-CEPH-TEST01.MATRIX.CTRIPCORP.COM:80\/", "http:\/\/WH-CEPH-TEST02.MATRIX.CTRIPCORP.COM:80\/"],
  "master_zone": "us-east",
  "zones": [
    { "name": "us-east", "endpoints": [ "http:\/\/WH-CEPH-TEST01.MATRIX.CTRIPCORP.COM:80\/"], "log_meta": "true", "log_data": "true"},
    { "name": "us-west", "endpoints": [ "http:\/\/WH-CEPH-TEST02.MATRIX.CTRIPCORP.COM:80\/"], "log_meta": "true", "log_data": "true"}],
  "placement_targets": [ { "name": "default-placement", "tags": [] } ],
  "default_placement": "default-placement"}

4. create zones
sudo radosgw-admin zone set --rgw-zone=us-east --infile us-east-secert.json --name client.radosgw.us-east-1
sudo radosgw-admin regionmap update --name client.radosgw.us-east-1
cat us-east-secert.json
{ "domain_root": ".us-east.domain.rgw",
  "control_pool": ".us-east.rgw.control",
  "gc_pool": ".us-east.rgw.gc",
  "log_pool": ".us-east.log",
  "intent_log_pool": ".us-east.intent-log",
  "usage_log_pool": ".us-east.usage",
  "user_keys_pool": ".us-east.users",
  "user_email_pool": ".us-east.users.email",
  "user_swift_pool": ".us-east.users.swift",
  "user_uid_pool": ".us-east.users.uid",
  "system_key": { "access_key": "XNK0ST8WXTMWZGN29NF9", "secret_key": "7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"},
  "placement_pools": [ { "key": "default-placement",
    "val": { "index_pool": ".us-east.rgw.buckets.index", "data_pool": ".us-east.rgw.buckets"} } ] }

5. create zone users (system users)
sudo radosgw-admin user create --uid="us-east" --display-name="Region-US Zone-East" --name client.radosgw.us-east-1 --access_key="XNK0ST8WXTMWZGN29NF9" --secret="7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5" --system
sudo radosgw-admin user create --uid="us-west" --display-name="Region-US Zone-West" --name client.radosgw.us-east-1 --access_key="AAK0ST8WXTMWZGN29NF9" --secret="AAJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5" --system

6. create zone users (non-system user)
sudo radosgw-admin user create --uid="us-test-east" --display-name="Region-US Zone-East-test" --name client.radosgw.us-east-1 --access_key="DDK0ST8WXTMWZGN29NF9" --secret="DDJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"

7. create a subuser
sudo radosgw-admin subuser create --uid="us-test-east" --subuser="us-test-east:swift" --access=full --name client.radosgw.us-east-1 --key-type swift --secret="ffJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"

sudo /etc/init.d/ceph -a restart
sudo /etc/init.d/httpd restart
sudo /etc/init.d/ceph-radosgw restart

For the second cluster, I do the following steps:

1. create pools
sudo ceph osd pool create .us-west.rgw.root 64 64
sudo ceph osd pool create .us-west.rgw.control 64 64
sudo ceph osd pool create .us-west.rgw.gc 64 64
sudo ceph osd pool create .us-west.rgw.buckets 64 64
sudo ceph osd pool create .us-west.rgw.buckets.index 64 64
sudo ceph osd pool create .us-west.rgw.buckets.extra 64 64
sudo ceph osd pool create .us-west.log 64 64
sudo ceph osd pool create .us-west.intent-log 64 64
sudo ceph osd pool create .us-west.usage 64 64
sudo ceph osd pool create .us-west.users 64 64
sudo ceph osd pool create .us-west.users.email 64 64
sudo ceph osd pool create .us-west.users.swift 64 64
sudo ceph osd pool create .us-west.users.uid 64 64

2. create a keyring
sudo ceph-authtool --create-key
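For federated sync, the access and secret in each zone's system_key must match a system user that exists in both zones; a mismatch is a common cause of sync failures. A small sketch that checks the zone file against the keys passed to the `user create --system` step, with both values embedded from this thread (a real run would read the actual us-east-secert.json):

```shell
# system_key block excerpted from us-east-secert.json as posted in this thread.
cat > /tmp/zone.json <<'EOF'
{"system_key": {"access_key": "XNK0ST8WXTMWZGN29NF9",
                "secret_key": "7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"}}
EOF

# Keys given to `radosgw-admin user create --uid=us-east --system` above.
USER_ACCESS=XNK0ST8WXTMWZGN29NF9
USER_SECRET=7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5

# Extract the zone's system_key and compare field by field.
zone_access=$(python3 -c 'import json; print(json.load(open("/tmp/zone.json"))["system_key"]["access_key"])')
zone_secret=$(python3 -c 'import json; print(json.load(open("/tmp/zone.json"))["system_key"]["secret_key"])')

if [ "$zone_access" = "$USER_ACCESS" ] && [ "$zone_secret" = "$USER_SECRET" ]; then
  echo "system_key matches the us-east system user"
else
  echo "system_key MISMATCH"
fi
```

The same comparison should hold on the us-west side between us-west's zone file and the us-west system user.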
[ceph-users] about rgw region sync
Could I build one region using two clusters, each cluster having one zone, so that I can sync metadata and data from one cluster to the other? I built two Ceph clusters and, for each, performed the same steps listed in my message above.
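Before loading a region file with `radosgw-admin region set`, it is cheap to sanity-check it: confirm it parses as JSON and that master_zone is actually listed under zones. A sketch over a trimmed copy of the us.json from this thread (endpoints omitted for brevity):

```shell
# Trimmed excerpt of the us.json posted in this thread.
cat > /tmp/us.json <<'EOF'
{"name": "us", "api_name": "us", "is_master": "true",
 "master_zone": "us-east",
 "zones": [{"name": "us-east"}, {"name": "us-west"}],
 "default_placement": "default-placement"}
EOF

python3 - <<'EOF'
import json

region = json.load(open("/tmp/us.json"))      # raises on malformed JSON
zones = [z["name"] for z in region["zones"]]
assert region["master_zone"] in zones, "master_zone not listed in zones"
print("us.json OK: master_zone=%s zones=%s" % (region["master_zone"], zones))
EOF
```

This catches the easy mistakes (a typo in master_zone, a zone missing from the list) before they turn into confusing 404s from the admin API.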
[ceph-users] Re: about rgw region sync
8a0 20 rados->read obj-ofs=0 read_ofs=0 read_len=524288
2015-05-14 10:10:22.900572 7f4772f228a0 20 rados->read r=0 bl.length=451
2015-05-14 10:10:22.900593 7f4772f228a0 10 cache put: name=.us-west.rgw.root+region_map
2015-05-14 10:10:22.900596 7f4772f228a0 10 moving .us-west.rgw.root+region_map to cache LRU end
2015-05-14 10:10:22.978346 7f4772f228a0 20 generating connection object for zone us-east
2015-05-14 10:10:22.978462 7f4772f228a0 20 get_obj_state: rctx=0xe00310 obj=.us-west.users.uid:us-test-east state=0xe003e8 s->prefetch_data=0
2015-05-14 10:10:22.978478 7f4772f228a0 10 cache get: name=.us-west.users.uid+us-test-east : miss
2015-05-14 10:10:22.983234 7f4772f228a0 10 cache put: name=.us-west.users.uid+us-test-east
2015-05-14 10:10:22.983254 7f4772f228a0 10 adding .us-west.users.uid+us-test-east to cache LRU end
2015-05-14 10:10:22.983262 7f4772f228a0 20 get_obj_state: s->obj_tag was set empty
2015-05-14 10:10:22.983271 7f4772f228a0 10 cache get: name=.us-west.users.uid+us-test-east : type miss (requested=17, cached=22)
2015-05-14 10:10:22.983315 7f4772f228a0 20 get_obj_state: rctx=0xe00310 obj=.us-west.users.uid:us-test-east state=0xe02408 s->prefetch_data=0
2015-05-14 10:10:22.983322 7f4772f228a0 10 cache get: name=.us-west.users.uid+us-test-east : hit
2015-05-14 10:10:22.983328 7f4772f228a0 20 get_obj_state: s->obj_tag was set empty
2015-05-14 10:10:22.983338 7f4772f228a0 20 get_obj_state: rctx=0xe00310 obj=.us-west.users.uid:us-test-east state=0xe02408 s->prefetch_data=0
2015-05-14 10:10:22.983341 7f4772f228a0 20 state for obj=.us-west.users.uid:us-test-east is not atomic, not appending atomic test
2015-05-14 10:10:22.983377 7f4772f228a0 20 rados->read obj-ofs=0 read_ofs=0 read_len=524288
2015-05-14 10:10:22.984442 7f4772f228a0 20 rados->read r=0 bl.length=522
2015-05-14 10:10:22.984469 7f4772f228a0 10 cache put: name=.us-west.users.uid+us-test-east
2015-05-14 10:10:22.984472 7f4772f228a0 10 moving .us-west.users.uid+us-test-east to cache LRU end
ERROR: failed to sync user stats: (2) No such file or directory
2015-05-14 10:10:22.986952 7f4772f228a0 20 cls_bucket_header() returned -2
2015-05-14 10:10:22.986965 7f4772f228a0 0 ERROR: could not sync bucket stats: ret=-2

------------------
From: "316828252" <316828...@qq.com>
Date: 2015-05-13 (Wed) 6:31
To: "clewis"
Cc: "ceph-users"
Subject: Re: about rgw region sync

please give me some advice, thanks

<316828...@qq.com> wrote on 2015-05-13 12:29:
no, I set up replication between two clusters, each cluster has one zone, and both clusters are in the same region. But I got some errors.

Craig Lewis wrote on 2015-05-13 12:02:
Are you trying to set up replication on one cluster right now? Generally replication is set up between two different clusters, each having one zone. Both clusters are in the same region. I can't think of a reason why two zones in one cluster wouldn't work. It's more complicated to set up, though. Anything outside of a test setup would need a lot of planning to make sure the two zones are as fault-isolated as possible. I'm pretty sure you need separate RadosGW nodes for each zone. It could be possible to share, but it will be easier if you don't. I still haven't gone through your previous logs carefully.

On Tue, May 12, 2015 at 6:46 AM, TERRY <316828...@qq.com> wrote:
could I build one region using two clusters, each cluster has one zone, so that I sync metadata and data from one cluster to another cluster? I build two ceph clusters.
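A note on the `cls_bucket_header() returned -2` and `ret=-2` lines in the log above: Ceph internals report failures as negated errno values, so -2 is errno 2, which can be decoded from a shell:

```shell
# Decode errno 2 (the "(2)" in "failed to sync user stats: (2) No such file or directory").
python3 -c 'import os; print(os.strerror(2))'
# prints: No such file or directory
```

That matches the "(2) No such file or directory" printed alongside it, suggesting the user/bucket stats object the agent asked for does not exist yet on the destination zone.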