Hi all,

New to this list, so apologies in advance if I inadvertently break some of the guidelines.
We currently have 2 geographically separate Cassandra/application clusters (running in active/warm-standby mode) that I am looking to enable replication between, so that we can have an active/active configuration. I've got the process working in our Labs, using http://www.datastax.com/documentation/cassandra/1.2/cassandra/operations/ops_add_dc_to_cluster_t.html as a guide, but I still have many questions (to verify that what I have done is correct), so I'm breaking my questions down into several emails.

Our Setup
---------------
- Our replication factor is currently set to 5 at both sites (NSW and VIC). Each site has 9 nodes.
- We read and write at consistency level ONE.
- We have autoNodeDiscovery set to off in our app (in anticipation of multi-site replication), so that it only points to its local Cassandra cluster.
- The 2 sites have a 16-20ms latency between them.

The Plan
-------------
1. Update and restart each node in the active cluster (NSW) one at a time, to switch it to the PropertyFileSnitch in preparation for adding the standby cluster.
   - update the cassandra-topology.properties file with the settings below, so the NSW cluster is aware of NSW only
   - update cassandra.yaml to use PropertyFileSnitch
   - restart the node

# Cassandra Node IP=Data Center:Rack
xxx.yy.zzz.144=DC_NSW:rack1
xxx.yy.zzz.145=DC_NSW:rack1
xxx.yy.zzz.146=DC_NSW:rack1
xxx.yy.zzz.147=DC_NSW:rack1
xxx.yy.zzz.148=DC_NSW:rack1
... and so forth for 9 nodes

2. Update the app keyspace to use NetworkTopologyStrategy with {'DC_NSW':5} (the exact CQL for this and step 5 is sketched after the question list).

3. Stop and blow away the standby cluster (VIC) and start afresh:
   - assign new tokens (NSW token + 100)
   - set auto_bootstrap: false
   - update seeds to point to a mixture of VIC and NSW nodes
   - update the cassandra-topology.properties file with the settings below, so the VIC cluster is aware of both VIC and NSW
   - leave the Cassandra cluster down

# Cassandra Node IP=Data Center:Rack
xxx.yy.zzz.144=DC_NSW:rack1
xxx.yy.zzz.145=DC_NSW:rack1
xxx.yy.zzz.146=DC_NSW:rack1
xxx.yy.zzz.147=DC_NSW:rack1
xxx.yy.zzz.148=DC_NSW:rack1
... and so forth for 9 nodes
aaa.bb.ccc.144=DC_VIC:rack1
aaa.bb.ccc.145=DC_VIC:rack1
aaa.bb.ccc.146=DC_VIC:rack1
aaa.bb.ccc.147=DC_VIC:rack1
aaa.bb.ccc.148=DC_VIC:rack1
... and so forth for 9 nodes

4. Update each node in the active cluster (NSW) one at a time:
   - update the cassandra-topology.properties file with the settings below, so the NSW cluster is aware of both VIC and NSW

# Cassandra Node IP=Data Center:Rack
xxx.yy.zzz.144=DC_NSW:rack1
xxx.yy.zzz.145=DC_NSW:rack1
xxx.yy.zzz.146=DC_NSW:rack1
xxx.yy.zzz.147=DC_NSW:rack1
xxx.yy.zzz.148=DC_NSW:rack1
... and so forth for 9 nodes
aaa.bb.ccc.144=DC_VIC:rack1
aaa.bb.ccc.145=DC_VIC:rack1
aaa.bb.ccc.146=DC_VIC:rack1
aaa.bb.ccc.147=DC_VIC:rack1
aaa.bb.ccc.148=DC_VIC:rack1
... and so forth for 9 nodes

5. Update the app keyspace to use NetworkTopologyStrategy with {'DC_NSW':5,'DC_VIC':5}.

6. Start the standby cluster (VIC):
   - run a nodetool rebuild on each node (command sketched in the PS at the bottom).

Some questions
-----------------------
- Does the Cluster Name on both clusters need to be the same?
- Do I need to run a repair as part of step 2 (after changing from SimpleStrategy to NetworkTopologyStrategy)?
- Does the system keyspace's replication strategy need to be updated to NetworkTopologyStrategy as well? In the Lab it currently shows 0.00% ownership for the VIC nodes (see the ring output at the end of this mail); is this normal?
- Can the different sites run different minor versions (1.2.9 <-> 1.2.15), with a view to upgrading the other site to 1.2.15?
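To make the plan concrete, below is roughly what I intend to change ("app_ks" stands in here for our real application keyspace, and the initial_token value is just a placeholder; please shout if I've got any of it wrong).

cassandra.yaml (steps 1 and 3, per node, before restart):

endpoint_snitch: PropertyFileSnitch
# VIC nodes only (step 3):
auto_bootstrap: false
initial_token: <matching NSW node's token + 100>

Keyspace changes via cqlsh (steps 2 and 5):

-- step 2: NSW replicas only
ALTER KEYSPACE app_ks
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC_NSW': 5};

-- step 5: add the VIC replicas once the topology files are in place
ALTER KEYSPACE app_ks
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC_NSW': 5, 'DC_VIC': 5};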
System

Datacenter: DC_NSW
==========
Address        Rack   Status  State   Load       Owns     Token
                                                          0
xxx.yy.zzz.65  rack1  Up      Normal  433.42 KB  50.00%   -9223372036854775808
xxx.yy.zzz.66  rack1  Up      Normal  459.3 KB   50.00%   0

Datacenter: DC_VIC
==========
Address        Rack   Status  State   Load       Owns     Token
                                                          100
aaa.bb.ccc.65  rack1  Up      Normal  429.34 KB  0.00%    -9223372036854775708
aaa.bb.ccc.66  rack1  Up      Normal  391.3 KB   0.00%    100

Thanks
Matt
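PS: for step 6, my understanding is that the rebuild is run once on each VIC node, naming NSW as the streaming source, i.e. something like:

nodetool -h aaa.bb.ccc.144 rebuild DC_NSW
... and so forth for all 9 VIC nodes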