Thanks Jonathan, points taken on board.

I'll be testing the GossipingPropertyFileSnitch.  I just need to assess the
impact on a running, active cluster of restarting one node at a time.
Given that I'm using SimpleStrategy initially, I'm guessing there will be
no impact.
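
For reference, a minimal sketch of the per-node configuration I'll be
testing (the DC/rack names are just the ones used in this thread):

    # conf/cassandra-rackdc.properties (read by GossipingPropertyFileSnitch)
    dc=DC_NSW
    rack=rack1

    # conf/cassandra.yaml
    endpoint_snitch: GossipingPropertyFileSnitch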

With regard to the non-application keyspaces:

  RowKey: system_auth
  => (column=durable_writes, value=true, timestamp=1394601216807000)
  => (column=strategy_class, value=org.apache.cassandra.locator.SimpleStrategy, timestamp=1394601216807000)
  => (column=strategy_options, value={"replication_factor":"1"}, timestamp=1394601216807000)
  -------------------
  RowKey: system
  => (column=durable_writes, value=true, timestamp=1394601462264001)
  => (column=strategy_class, value=org.apache.cassandra.locator.LocalStrategy, timestamp=1394601462264001)
  => (column=strategy_options, value={}, timestamp=1394601462264001)
  -------------------
  RowKey: system_traces
  => (column=durable_writes, value=true, timestamp=1394601462327001)
  => (column=strategy_class, value=org.apache.cassandra.locator.SimpleStrategy, timestamp=1394601462327001)
  => (column=strategy_options, value={"replication_factor":"1"}, timestamp=1394601462327001)

From what I can gather:
 - system_auth only needs to be replicated if we are not using the
AllowAllAuthenticator (a sketch of that change is below).
 - The system_traces replication factor cannot be changed under 1.2.x
(https://issues.apache.org/jira/browse/CASSANDRA-6016).
 - The system keyspace is set to LocalStrategy.  It should not be
replicated, as it contains data relevant only to the local node.
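
If we ever move off the AllowAllAuthenticator, something like the following
should do it.  This is a sketch only; the per-DC counts here are
placeholders, not a recommendation:

    ALTER KEYSPACE system_auth
      WITH replication = {'class': 'NetworkTopologyStrategy',
                          'DC_NSW': 3, 'DC_VIC': 3};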

So I think if I just update the application keyspace, I should be okay?

Thanks for your help.

Matt


On Tue, Mar 18, 2014 at 12:34 AM, Jonathan Lacefield <jlacefi...@datastax.com> wrote:

> Hello,
>
>   Please see comments under your questions below.
>
>   1) Use GossipingPropertyFileSnitch:
> http://www.datastax.com/documentation/cassandra/1.2/cassandra/architecture/architectureSnitchGossipPF_c.html
> - much easier to manage
>   2) All nodes in the same cluster must have the same cluster name:
> http://www.datastax.com/documentation/cassandra/1.2/cassandra/configuration/configCassandra_yaml_r.html
>   3)  Run repair at the very end if you would like (sketch below); rebuild
> should take care of this for you.  No need to do it when you are going from
> Simple (with 1 DC) to Network (with 1 DC).  Not sure you need to do step 2,
> actually.
>   4)  Yes, all Keyspaces should be updated as a part of this process.
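>
>   If repair is run at the very end, a minimal per-node sketch would be the
>   following (-pr restricts each run to that node's primary ranges, so
>   running it once on every node covers the whole ring):
>
>     nodetool repair -pr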
>
>   Hope that helps.
>
> Jonathan Lacefield
> Solutions Architect, DataStax
> (404) 822 3487
>
>
> On Sun, Mar 16, 2014 at 10:39 PM, Matthew Allen <matthew.j.al...@gmail.com> wrote:
>
>> Hi all,
>>
>> New to this list, so apologies in advance if I inadvertently break
>> some of the guidelines.
>>
>> We currently have 2 geographically separate Cassandra/application
>> clusters (running in active/warm-standby mode) between which I am looking
>> to enable replication, so that we can have an active/active configuration.
>>
>> I've got the process working in our lab, using
>> http://www.datastax.com/documentation/cassandra/1.2/cassandra/operations/ops_add_dc_to_cluster_t.html
>> as a guide, but I still have many questions (to verify that what I have
>> done is correct), so I'm breaking them down into separate emails.
>>
>> Our Setup
>> ---------------
>> - Our replication factor is currently set to 5 in both sites (NSW and
>> VIC).  Each site has 9 nodes.
>> - We use a read/write consistency level of ONE.
>> - We have autoNodeDiscovery set to off in our app (in anticipation of
>> multi-site replication), so that it only points to its local Cassandra
>> cluster.
>> - There is 16-20 ms of latency between the 2 sites.
>>
>> The Plan
>> -------------
>> 1. Update and restart each node in the active cluster (NSW) one at a time
>> to switch it to the PropertyFileSnitch, in preparation for adding the
>> standby cluster.
>>  - update the cassandra-topology.properties file with the settings below
>> so the NSW cluster is aware of NSW only
>>  - update cassandra.yaml to use PropertyFileSnitch (see the one-line
>> snippet after the listing)
>>  - restart the node
>>
>>       # Cassandra Node IP=Data Center:Rack
>>     xxx.yy.zzz.144=DC_NSW:rack1
>>     xxx.yy.zzz.145=DC_NSW:rack1
>>     xxx.yy.zzz.146=DC_NSW:rack1
>>     xxx.yy.zzz.147=DC_NSW:rack1
>>     xxx.yy.zzz.148=DC_NSW:rack1
>>     ... and so forth for 9 nodes
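>>
>>     The matching cassandra.yaml change is a one-liner (assuming the stock
>>     1.2 configuration file):
>>
>>       endpoint_snitch: PropertyFileSnitch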
>>
>> 2. Update the app keyspace to use NetworkTopologyStrategy with
>> {'DC_NSW':5} (a CQL sketch follows).
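>>
>>     In CQL3, something like the following, where KEYSPACE_NAME is a
>>     placeholder for our application keyspace:
>>
>>       ALTER KEYSPACE "KEYSPACE_NAME"
>>         WITH replication = {'class': 'NetworkTopologyStrategy',
>>                             'DC_NSW': 5};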
>>
>> 3. Stop and blow away the standby cluster (VIC) and start afresh:
>>  - assign new tokens (each NSW token + 100)
>>  - set auto_bootstrap: false
>>  - update seeds to point to a mixture of VIC and NSW nodes
>>  - update the cassandra-topology.properties file with the settings below
>> so the VIC cluster is aware of both VIC and NSW
>>  - leave the VIC cluster down (a cassandra.yaml sketch for this step
>> follows the listing)
>>
>>       # Cassandra Node IP=Data Center:Rack
>>     xxx.yy.zzz.144=DC_NSW:rack1
>>     xxx.yy.zzz.145=DC_NSW:rack1
>>     xxx.yy.zzz.146=DC_NSW:rack1
>>     xxx.yy.zzz.147=DC_NSW:rack1
>>     xxx.yy.zzz.148=DC_NSW:rack1
>>     ... and so forth for 9 nodes
>>
>>     aaa.bb.ccc.144=DC_VIC:rack1
>>     aaa.bb.ccc.145=DC_VIC:rack1
>>     aaa.bb.ccc.146=DC_VIC:rack1
>>     aaa.bb.ccc.147=DC_VIC:rack1
>>     aaa.bb.ccc.148=DC_VIC:rack1
>>     ... and so forth for 9 nodes
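>>
>>     A sketch of the relevant cassandra.yaml settings on one VIC node; the
>>     token and seed IPs are placeholders following the NSW+100 scheme above:
>>
>>       auto_bootstrap: false
>>       initial_token: 100    # paired NSW node's token + 100
>>       endpoint_snitch: PropertyFileSnitch
>>       seed_provider:
>>           - class_name: org.apache.cassandra.locator.SimpleSeedProvider
>>             parameters:
>>                 - seeds: "xxx.yy.zzz.144,aaa.bb.ccc.144"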
>>
>> 4. Update each node in the active cluster (NSW) one at a time.
>>  - update the cassandra-topology.properties file with the settings below
>> so the NSW cluster is aware of both VIC and NSW.
>>
>>       # Cassandra Node IP=Data Center:Rack
>>     xxx.yy.zzz.144=DC_NSW:rack1
>>     xxx.yy.zzz.145=DC_NSW:rack1
>>     xxx.yy.zzz.146=DC_NSW:rack1
>>     xxx.yy.zzz.147=DC_NSW:rack1
>>     xxx.yy.zzz.148=DC_NSW:rack1
>>     ... and so forth for 9 nodes
>>
>>     aaa.bb.ccc.144=DC_VIC:rack1
>>     aaa.bb.ccc.145=DC_VIC:rack1
>>     aaa.bb.ccc.146=DC_VIC:rack1
>>     aaa.bb.ccc.147=DC_VIC:rack1
>>     aaa.bb.ccc.148=DC_VIC:rack1
>>     ... and so forth for 9 nodes
>>
>> 5. Update the app keyspace to use NetworkTopologyStrategy with
>> {'DC_NSW':5,'DC_VIC':5} (same ALTER KEYSPACE shape as step 2, sketched
>> below).
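>>
>>     Again with KEYSPACE_NAME as a placeholder:
>>
>>       ALTER KEYSPACE "KEYSPACE_NAME"
>>         WITH replication = {'class': 'NetworkTopologyStrategy',
>>                             'DC_NSW': 5, 'DC_VIC': 5};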
>>
>> 6. Start the standby cluster (VIC).
>>  - run a nodetool rebuild on each node (see below).
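>>
>>     rebuild needs the source data centre named, so on each VIC node:
>>
>>       nodetool rebuild DC_NSW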
>>
>> Some questions
>> -----------------------
>> - Does the cluster name on both clusters need to be the same?
>> - Do I need to run a repair as part of step 2 (after changing from
>> SimpleStrategy to NetworkTopologyStrategy)?
>> - Does the system keyspaces' replication strategy need to be updated to
>> NetworkTopologyStrategy as well?  Currently in the lab the ring displays
>> as follows (please see the 0.00% ownership below); is this normal?
>> - Can the different sites run different minor versions (1.2.9 <-> 1.2.15),
>> with a view to upgrading the other site to 1.2.15?
>>
>> System
>>
>> Datacenter: DC_NSW
>> ==========
>> Address        Rack   Status State   Load       Owns     Token
>>                                                          0
>> xxx.yy.zzz.65  rack1  Up     Normal  433.42 KB  50.00%   -9223372036854775808
>> xxx.yy.zzz.66  rack1  Up     Normal  459.3 KB   50.00%   0
>>
>> Datacenter: DC_VIC
>> ==========
>> Address        Rack   Status State   Load       Owns     Token
>>                                                          100
>> aaa.bb.ccc.65  rack1  Up     Normal  429.34 KB  0.00%    -9223372036854775708
>> aaa.bb.ccc.66  rack1  Up     Normal  391.3 KB   0.00%    100
>>
>> Thanks
>>
>> Matt
>>
>
>
