Hello Logan and All - 
I am interested in remote replication between two Ceph clusters without using a 
federated RadosGW setup: something like replicating data from the OSDs of one 
cluster to the OSDs of another cluster. Any thoughts on how to accomplish this?
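For example, I was imagining something along these lines, using rbd export-diff / 
import-diff between the two clusters (just a rough sketch; the image, pool, and 
host names are made up):

    # One-time full copy of an image to the remote cluster:
    rbd export rbd/vm-disk1 - | ssh remote-ceph-host rbd import - rbd/vm-disk1

    # Periodic incremental updates based on snapshots:
    rbd snap create rbd/vm-disk1@sync1
    rbd export-diff rbd/vm-disk1@sync1 - | ssh remote-ceph-host rbd import-diff - rbd/vm-disk1

That only covers RBD images, though, not RadosGW objects.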
Thanks,
Lakshmi.

     On Wednesday, January 7, 2015 5:21 PM, Logan Barfield 
<lbarfi...@tqhosting.com> wrote:
   

 Hello,

I'm re-sending this message since I didn't see it picked up on the list 
archives yesterday.  My apologies if it was received previously.
We are currently running a single-datacenter Ceph deployment. Our setup is as 
follows:
- 4 HDD OSD nodes (primarily used for RadosGW/Object Storage)
- 2 SSD OSD nodes (used for RBD/VM block devices)
- 3 monitor daemons running on 3 of the HDD OSD nodes
- CRUSH rules set to push all data to the HDD nodes except for the RBD pool, 
which uses the SSD nodes (roughly as sketched below)
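For reference, that split is done with something like separate CRUSH roots for 
the two node types and per-pool rulesets (the names and ruleset ids below are 
only illustrative, not our exact config):

    # Separate roots for HDD and SSD nodes, each with a simple rule:
    ceph osd crush add-bucket hdd-root root
    ceph osd crush add-bucket ssd-root root
    ceph osd crush rule create-simple hdd-rule hdd-root host
    ceph osd crush rule create-simple ssd-rule ssd-root host
    # (hosts then get moved under the appropriate root, e.g.
    #  ceph osd crush move hdd-node1 root=hdd-root)

    # Point each pool at the right rule; check the ruleset ids first with
    # 'ceph osd crush rule dump':
    ceph osd pool set .rgw.buckets crush_ruleset 1   # assuming the default .rgw.buckets data pool
    ceph osd pool set rbd crush_ruleset 2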
Our goal is to have OSD nodes in 3 datacenters (US East, US West, Europe). I'm 
thinking that we would want the following setup:
- A RadosGW instance in each datacenter, with geo-DNS to direct clients to the 
closest one.
- The same OSD configuration as our current location (HDD for RadosGW, SSD for 
RBD).
- A separate RBD pool in each datacenter for VM block devices.
- CRUSH rules:
  -> RadosGW: 3 replicas on different OSD nodes, at least 1 of them off-site 
(e.g., 2 replicas on 2 OSD nodes in one datacenter, 1 replica on 1 OSD node in a 
different datacenter). I don't know if RadosGW is geo-aware enough to do this 
efficiently; a rough sketch of the kind of rule I mean is below.
  -> RBD: 2 replicas across 2 OSD nodes in the same datacenter.
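For the off-site replica, I'm picturing a CRUSH rule roughly like this (sketch 
only; the datacenter bucket names and ruleset number are placeholders, and note 
that it hard-codes which remote site gets the third copy):

    # Pull, edit, and re-inject the CRUSH map:
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # Add a rule along these lines to crush.txt:
    #   rule rgw_one_offsite {
    #       ruleset 3
    #       type replicated
    #       min_size 3
    #       max_size 3
    #       step take us-east                     # 2 copies in the local datacenter
    #       step chooseleaf firstn 2 type host
    #       step emit
    #       step take us-west                     # 1 copy off-site
    #       step chooseleaf firstn 1 type host
    #       step emit
    #   }
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new
    # Then point the RGW data pool at it (again assuming the default .rgw.buckets pool):
    ceph osd pool set .rgw.buckets crush_ruleset 3
    ceph osd pool set .rgw.buckets size 3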
From the documentation it looks like the best way to accomplish this would be 
to have a separate cluster in each datacenter, then use a federated RadosGW 
configuration to keep geo-redundant replicas of objects. The other option 
would be to have one cluster spanning all 3 locations, but since they would be 
connected over VPN/WAN links, that doesn't seem ideal.
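For context, my understanding of the federated setup from the docs is that it 
looks roughly like this (sketch only; the region/zone names, JSON files, and 
gateway instance name are placeholders):

    # Define the region and its zones from JSON descriptions, then update the region map:
    radosgw-admin region set --infile us.json --name client.radosgw.us-east-1
    radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name client.radosgw.us-east-1
    radosgw-admin zone set --rgw-zone=us-west --infile us-west.json --name client.radosgw.us-east-1
    radosgw-admin regionmap update --name client.radosgw.us-east-1
    # Replication between zones is then driven by the sync agent, e.g.:
    radosgw-agent -c region-data-sync.conf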
Concerns:
- With a federated configuration it looks like only one zone will be writable, 
so if the master zone is on the east coast, all of the west coast clients would 
be uploading there as well.
- It doesn't appear that there is a way to send only 1 replica to the secondary 
zone; instead, all data written to the master is replicated to the secondary 
(e.g., 3 replicas in each location). Alternatively, with multiple regions both 
zones would be read/write, but only metadata would be synced.
- From the documentation I understand that there should be different pools for 
each zone, and each cluster will need to have a different name. Since our 
current cluster is in production, I don't know how safe it would be to 
rename/move pools or rename the cluster. We are using the default "ceph" 
cluster name right now because different names add complexity (e.g., requiring 
'--cluster' for all commands), and we noticed in testing that some of the init 
scripts don't play well with custom cluster names.
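To illustrate the extra friction with a non-default cluster name (hypothetical 
name 'useast2'):

    # Default cluster: config at /etc/ceph/ceph.conf, no extra flags needed
    ceph status
    # Custom cluster name: config must be /etc/ceph/useast2.conf, and every
    # ceph/rados/rbd invocation needs --cluster
    ceph --cluster useast2 status
    rados --cluster useast2 lspools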
It would seem to me that a federated configuration adds a lot of complexity, 
wouldn't get us exactly what we'd like for replication (one off-site copy), and 
doesn't allow for geo-aware writes.
I've seen a few examples of CRUSH maps that span multiple datacenters. This 
would seem to be an easier setup, and it would get us closer to what we want 
for replication. My main concerns would be the WAN latency, setting up 
site-to-site VPNs (which I don't think are necessary for the federated setup), 
and how well Ceph would handle losing the connection to one of the remote sites 
for a few seconds or minutes.
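For the spanning-cluster option, I assume the CRUSH hierarchy would just gain 
datacenter buckets, something like this (bucket/host names made up):

    # Add datacenter buckets under the root and move the hosts beneath them:
    ceph osd crush add-bucket us-east datacenter
    ceph osd crush add-bucket us-west datacenter
    ceph osd crush add-bucket eu-west datacenter
    ceph osd crush move us-east root=default
    ceph osd crush move us-west root=default
    ceph osd crush move eu-west root=default
    ceph osd crush move hdd-node1 datacenter=us-east
    # To give flaky WAN links more slack before OSDs are marked out and
    # rebalancing starts, 'mon osd down out interval' (default 300s) can be
    # raised in ceph.conf on the monitors, e.g.:
    #   mon osd down out interval = 600

That still leaves the latency question for writes, since a write isn't 
acknowledged until all of its replicas (including any remote one) are written.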
Is there a recommended deployment for what we want to do, or any reference 
guides beyond the official Ceph docs? I know Ceph is being used for multi-site 
deployments, but other than a few blog posts demonstrating theoretical setups 
and vague PowerPoint slides, I haven't seen any details on it. Unfortunately, 
we are a very small company, so consulting with Inktank/Red Hat isn't 
financially feasible right now.
Any suggestions/insight would be much appreciated.

Thank You,
Logan Barfield
Tranquil Hosting

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
