Now I can run the configuration, but it does not seem to work.
The following is the error output.


 [root@ceph69 ceph]# radosgw-agent -c /etc/ceph/cluster-data-sync.conf
INFO:urllib3.connectionpool:Starting new HTTPS connection (1): s3.ceph71.com
ERROR:root:Could not retrieve region map from destination
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/radosgw_agent/cli.py", line 269, in main
    region_map = client.get_region_map(dest_conn)
  File "/usr/lib/python2.6/site-packages/radosgw_agent/client.py", line 391, in get_region_map
    region_map = request(connection, 'get', 'admin/config')
  File "/usr/lib/python2.6/site-packages/radosgw_agent/client.py", line 153, in request
    result = handler(url, params=params, headers=request.headers, data=data)
  File "/usr/lib/python2.6/site-packages/requests/api.py", line 55, in get
    return request('get', url, **kwargs)
  File "/usr/lib/python2.6/site-packages/requests/api.py", line 44, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python2.6/site-packages/requests/sessions.py", line 279, in request
    resp = self.send(prep, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies)
  File "/usr/lib/python2.6/site-packages/requests/sessions.py", line 374, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python2.6/site-packages/requests/adapters.py", line 213, in send
    raise SSLError(e)
SSLError: hostname 's3.ceph71.com' doesn't match u'ceph71'


What is the probable reason?
Thanks!
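The error suggests the certificate presented by the gateway carries a CN of 'ceph71' rather than 's3.ceph71.com'. One way to confirm what the destination actually presents, as a diagnostic sketch (assuming openssl is installed and the gateway listens on port 443):

# Show the subject (CN) of the certificate the destination gateway serves;
# based on the error above it would report CN=ceph71, not s3.ceph71.com.
openssl s_client -connect s3.ceph71.com:443 </dev/null 2>/dev/null | openssl x509 -noout -subject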




At 2014-04-09 16:24:48,wsnote <wsn...@163.com> wrote:

Thank you very much!
I did as you said, but there are some errors.


 [root@ceph69 ceph]# radosgw-agent -c region-data-sync.conf
Traceback (most recent call last):
  File "/usr/bin/radosgw-agent", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 2659, in 
<module>
    parse_requirements(__requires__), Environment()
  File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 546, in resolve
    raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: requests>=1.2.1 
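This means the requests library that radosgw-agent needs (version 1.2.1 or newer) is missing or too old. A quick check, as a sketch (assuming pip is available and the system Python is the one radosgw-agent uses):

# Show which version of requests is installed for this Python.
python -c 'import requests; print(requests.__version__)'
# If it is missing or older than 1.2.1, upgrading is one option
# (the package name and tool may differ per distro).
pip install 'requests>=1.2.1'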





At 2014-04-09 12:11:09,"Craig Lewis" <cle...@centraldesktop.com> wrote:
I posted inline.


1. Create Pools
There are many us-east and us-west pools.
Do I have to create both the us-east and us-west pools in each Ceph instance, or do I just create the us-east pools in the us-east zone and the us-west pools in the us-west zone?

No, just create the us-east pools in the us-east cluster, and the us-west pools 
in the us-west cluster.
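For example, on the us-east cluster this might look like the following sketch (the pool names follow the federated configuration guide's defaults; the pg counts are placeholders to tune for your cluster):

# Run on the us-east cluster only; repeat with .us-west.* names on the us-west cluster.
ceph osd pool create .us-east.rgw.root 64
ceph osd pool create .us-east.rgw.control 64
ceph osd pool create .us-east.rgw.gc 64
ceph osd pool create .us-east.rgw.buckets 64
ceph osd pool create .us-east.rgw.buckets.index 64
ceph osd pool create .us-east.log 64
ceph osd pool create .us-east.intent-log 64
ceph osd pool create .us-east.usage 64
ceph osd pool create .us-east.users 64
ceph osd pool create .us-east.users.email 64
ceph osd pool create .us-east.users.swift 64
ceph osd pool create .us-east.users.uid 64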




2. Create a keyring

Generate a Ceph Object Gateway user name and key for each instance.

sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.us-east-1 --gen-key
sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.us-west-1 --gen-key
Do I run all of the above commands in every Ceph instance, or the first in the us-east zone and the second in the us-west zone?

For the keyrings, you should only need to create the key in the respective zone.
I'm not 100% sure, though, as I'm not using CephX.
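A sketch of the full keyring flow on the us-east cluster, assuming CephX is enabled (the capability values shown are the commonly documented ones; verify them against your own policy):

# Create the keyring once, generate the gateway key, and grant caps.
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.us-east-1 --gen-key
sudo ceph-authtool -n client.radosgw.us-east-1 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
# Register the key with the cluster so the gateway can authenticate.
sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.us-east-1 -i /etc/ceph/ceph.client.radosgw.keyring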





3. Add instances to ceph config file
[client.radosgw.us-east-1]
rgw region = us
rgw region root pool = .us.rgw.root
rgw zone = us-east
rgw zone root pool = .us-east.rgw.root
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw dns name = {hostname}
rgw socket path = /var/run/ceph/$name.sock
host = {host-name}

[client.radosgw.us-west-1]
rgw region = us
rgw region root pool = .us.rgw.root
rgw zone = us-west
rgw zone root pool = .us-west.rgw.root
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw dns name = {hostname}
rgw socket path = /var/run/ceph/$name.sock
host = {host-name}


Do both of the above sections go into one ceph.conf, or does the us-east section go in the us-east zone and the us-west section in the us-west zone?

Each section only needs to be in its own cluster's ceph.conf.  Assuming your client names are globally unique, it won't hurt to put both sections in every cluster's ceph.conf.
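For illustration, the two placeholders might be filled in like this (both values here are hypothetical; use your own names):

# rgw dns name is the DNS name S3 clients connect to (bucket subdomains
# resolve under it); host is the short hostname of the gateway node.
rgw dns name = s3.us-east.example.com
host = ceph-east-gw1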




4. Create Zones
radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name client.radosgw.us-east-1
radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name client.radosgw.us-west-1
Do I use both commands in every instance, or separately?

Yes, the zones need to know about each other.  The slaves definitely need to 
know the master zone information.  The master might be able to get away with 
not knowing about the slave zones, but I haven't tested it.  I ran both 
commands in both zones, using the respective --name argument for the node in 
the zone I was running the command on.
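As a sketch of that layout, assuming both us-east.json and us-west.json have been copied to both clusters:

# On the us-east cluster:
radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name client.radosgw.us-east-1
radosgw-admin zone set --rgw-zone=us-west --infile us-west.json --name client.radosgw.us-east-1
radosgw-admin regionmap update --name client.radosgw.us-east-1
# On the us-west cluster:
radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name client.radosgw.us-west-1
radosgw-admin zone set --rgw-zone=us-west --infile us-west.json --name client.radosgw.us-west-1
radosgw-admin regionmap update --name client.radosgw.us-west-1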




5. Create Zone Users


radosgw-admin user create --uid="us-east" --display-name="Region-US Zone-East" --name client.radosgw.us-east-1 --system
radosgw-admin user create --uid="us-west" --display-name="Region-US Zone-West" --name client.radosgw.us-west-1 --system
Does the us-east zone have to create the uid us-west?
Does the us-west zone have to create the uid us-east?

When you create the system users, you do need to create all users in all zones.  I think you don't need the master user in the slave zones, but I haven't taken the time to test it.  You do need the access keys to match in all zones.  So if you create the users in the master zone with

radosgw-admin user create --uid="$name" --display-name="$display_name" --name client.radosgw.us-west-1 --gen-access-key --gen-secret --system

you'll copy the access and secret keys to the slave zone with

radosgw-admin user create --uid="$name" --display-name="$display_name" --name client.radosgw.us-east-1 --access_key="$access_key" --secret="$secret_key" --system
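If you didn't note the generated keys, they can be read back from the master zone before copying; they appear in the "keys" section of the JSON that radosgw-admin prints:

# Print the user's metadata, including the access_key/secret_key pair
# generated in the master zone, so the same keys can be reused in the slave zone.
radosgw-admin user info --uid="$name" --name client.radosgw.us-west-1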




6. About the secondary region


Create zones from master region in the secondary region.
Create zones from secondary region in the master region.


Is the aim of these two steps that the two regions have the same pools?

I haven't tried multiple regions yet, but since the two regions are in two 
different clusters, they can't share pools.  They could use the same pool names 
in different clusters, but I recommend against that.  You really want all pools 
in all locations to be named uniquely.  Having the same names in different 
locations is a recipe for human error.

I'm pretty sure you just need to load the region and zone maps in all of the 
clusters.  Since the other regions will only be storing metadata about the 
other regions and zones, they shouldn't need extra pools.  Similar to my answer 
to question #1.
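A sketch of what loading the maps might look like, run on every cluster with that cluster's own --name value (the region file names here are hypothetical):

# Shown for the us-east gateway; repeat on each cluster with its own --name.
radosgw-admin region set --infile us.json --name client.radosgw.us-east-1
radosgw-admin region set --infile secondary.json --name client.radosgw.us-east-1
radosgw-admin regionmap update --name client.radosgw.us-east-1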




The best advice I can give is to set up a pair of virtual machines, and start
messing around.  Make liberal use of VM snapshots.  I broke my test clusters 
several times.  I could've fixed them, but it was easier to revert.  I followed 
the instructions, and it still took me several days and several reverts to get 
a working test setup.



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
