Hi Pradeep,

To change the snitch, you have to decommission the node and add it back 
with the new snitch and updated properties files.
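
A rough per-node sketch of that procedure (the snitch, dc/rack values, and 
paths below are examples, not taken from your setup):

    # stream this node's ranges to the rest of the cluster, then stop it
    nodetool decommission

    # cassandra.yaml:              endpoint_snitch: GossipingPropertyFileSnitch
    # cassandra-rackdc.properties: dc=us-east-1
    #                              rack=us-east-1a

    # clear old state so the node bootstraps fresh (default paths assumed)
    sudo rm -rf /var/lib/cassandra/data /var/lib/cassandra/commitlog \
                /var/lib/cassandra/saved_caches
    sudo service cassandra start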

Thanks,


> On Aug 26, 2018, at 2:15 PM, Joshua Galbraith 
> <jgalbra...@newrelic.com.INVALID> wrote:
> 
> Pradeep,
> 
> Here are some related tickets that may also be helpful in understanding the 
> current behavior of these options.
> 
> * https://issues.apache.org/jira/browse/CASSANDRA-5897
> * https://issues.apache.org/jira/browse/CASSANDRA-9474
> * https://issues.apache.org/jira/browse/CASSANDRA-10243
> * https://issues.apache.org/jira/browse/CASSANDRA-10242
> 
> On Sun, Aug 26, 2018 at 1:20 PM, Joshua Galbraith <jgalbra...@newrelic.com> wrote:
> Pradeep,
> 
> That being said, I haven't experimented with -Dcassandra.ignore_dc=true 
> -Dcassandra.ignore_rack=true before.
> 
> The description here may be helpful:
> https://github.com/apache/cassandra/blob/trunk/NEWS.txt#L685-L693
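> 
> If you do experiment with them, a minimal sketch of how those startup flags 
> are usually passed, via cassandra-env.sh (location varies by install):
> 
>     JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_dc=true"
>     JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_rack=true"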
> 
> I would spin up a small test cluster with data you don't care about and 
> verify that your above assumptions are correct there first.
> 
> On Sun, Aug 26, 2018 at 1:09 PM, Joshua Galbraith <jgalbra...@newrelic.com> wrote:
> Pradeep,
> 
> Right, so from that documentation it sounds like you actually have to stop 
> all nodes in the cluster at once and bring them back up one at a time. A 
> rolling restart won't work here.
> 
> On Sun, Aug 26, 2018 at 11:46 AM, Pradeep Chhetri <prad...@stashaway.com> wrote:
> Hi Joshua,
> 
> Thank you for the reply. Sorry, I forgot to mention that I already went 
> through that documentation. There are a few things missing there, about 
> which I have a few questions:
> 
> 1) One thing which isn't mentioned there is that Cassandra fails to restart 
> when we change the datacenter name or rack name of a node. So should I first 
> do a rolling restart of Cassandra with the flags "-Dcassandra.ignore_dc=true 
> -Dcassandra.ignore_rack=true", then run a sequential repair and a cleanup, 
> and then do a rolling restart of Cassandra without those flags? (See the 
> sequence sketched below.)
> 
> 2) Should I disallow read/write operations from applications while the 
> sequential repair is running?
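> 
> For clarity, the sequence I have in mind is roughly this (a sketch only; 
> the service commands depend on the setup):
> 
>     # 1) rolling restart, each node started with the override flags
>     #    (-Dcassandra.ignore_dc=true -Dcassandra.ignore_rack=true)
>     # 2) once the whole cluster is back up, on each node:
>     nodetool repair --sequential
>     nodetool cleanup
>     # 3) rolling restart again with the flags removed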
> 
> Regards,
> Pradeep
> 
> On Mon, Aug 27, 2018 at 12:19 AM, Joshua Galbraith 
> <jgalbra...@newrelic.com.invalid> wrote:
> Pradeep, it sounds like what you're proposing counts as a topology change 
> because you are changing the datacenter name and rack name.
> 
> Please refer to the documentation here about what to do in that situation:
> https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsSwitchSnitch.html
> 
> In particular:
> 
> Simply altering the snitch and replication to move some nodes to a new 
> datacenter will result in data being replicated incorrectly.
> 
> Topology changes may occur when the replicas are placed in different places 
> by the new snitch. Specifically, the replication strategy places the replicas 
> based on the information provided by the new snitch.
> 
> If the topology of the network has changed, but no datacenters are added:
> a. Shut down all the nodes, then restart them.
> b. Run a sequential repair and nodetool cleanup on each node.
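> 
> In practice that full shutdown looks something like this on each node (a 
> sketch, assuming a systemd-managed service):
> 
>     # flush memtables and stop accepting traffic before stopping the process
>     nodetool drain
>     sudo systemctl stop cassandra
>     # once the snitch config is updated everywhere, start nodes one at a time
>     sudo systemctl start cassandra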
> 
> On Sun, Aug 26, 2018 at 11:14 AM, Pradeep Chhetri <prad...@stashaway.com> wrote:
> Hello everyone,
> 
> Since I didn't hear from anyone, I just want to describe my question again:
> 
> Am I correct in understanding that I need to do the following steps to 
> migrate from SimpleSnitch to GPFS, changing the datacenter name and rack 
> name to the AWS region and availability zone respectively?
> 
> 1) Update the rack and datacenter fields in the cassandra-rackdc.properties 
> file and do a rolling restart of Cassandra with the flags 
> "-Dcassandra.ignore_dc=true -Dcassandra.ignore_rack=true".
> 
> 2) Run nodetool repair --sequential and nodetool cleanup.
> 
> 3) Rolling restart Cassandra, removing the flags "-Dcassandra.ignore_dc=true 
> -Dcassandra.ignore_rack=true".
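> 
> Where step 1 would mean values along these lines in 
> cassandra-rackdc.properties (example values only, for a node in 
> ap-southeast-1a):
> 
>     dc=ap-southeast-1
>     rack=ap-southeast-1a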
> 
> Regards,
> Pradeep
> 
> On Thu, Aug 23, 2018 at 10:53 PM, Pradeep Chhetri <prad...@stashaway.com> wrote:
> Hello,
> 
> I am currently running a 3.11.2 cluster with SimpleSnitch, hence the 
> datacenter is datacenter1 and the rack is rack1 for all nodes on AWS. I want 
> to switch to GPFS by changing the rack name to the availability-zone name 
> and the datacenter name to the region name.
> 
> When I try to restart individual nodes after changing those values, the node 
> fails to start, throwing an error about the dc and rack name mismatch, but 
> it gives me the option to set ignore_dc and ignore_rack to true to bypass 
> the check.
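> 
> For reference, I am checking what each node currently advertises with 
> something like:
> 
>     nodetool status                               # cluster-wide dc/rack view
>     nodetool info | grep -E 'Data Center|Rack'    # this node's values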
> 
> I am not sure if it is safe to set those two flags to true, and whether 
> there is any drawback now or in the future when I add a new datacenter to 
> the cluster. I went through the documentation on Switching Snitches but 
> didn't get much explanation.
> 
> Regards,
> Pradeep
> 
> -- 
> Joshua Galbraith | Lead Software Engineer | New Relic
