On 2017-03-08 07:57 (-0800), Chuck Reynolds <creyno...@ancestry.com> wrote: 
> I was hoping I could do the following
> 
> - Change seeds

Definitely. 

> 
> - Change the topology back to simple
> 

Not necessary; you can just remove the "other" datacenter from the replication 
strategy.
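
For example, for each keyspace that currently replicates to both datacenters, 
you'd run something like this (keyspace name and replication factor 
hypothetical):

    ALTER KEYSPACE my_keyspace
      WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};

Leaving 'dc2' out of the map stops anything from being replicated there.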

> - Stop nodes in datacenter 2
> 
> - Remove nodes in datacenter 2
> 

From where? system.peers? 

> - Restart nodes in datacenter 2

Then dc2 connects back to dc1 and you end up messed up again. BOTH SIDES are 
going to try to reconnect, and it's going to be ugly.

If you used an internode authenticator, you could make that lock out the other 
cluster.
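A minimal sketch of that, using Cassandra's IInternodeAuthenticator interface 
with a hard-coded address list (the addresses are hypothetical; in practice 
you'd load them from config):

    import java.net.InetAddress;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.cassandra.auth.IInternodeAuthenticator;
    import org.apache.cassandra.exceptions.ConfigurationException;

    // Refuses internode connections from the other datacenter's nodes.
    // Wire it in via the internode_authenticator setting in cassandra.yaml.
    public class BlockOtherDcAuthenticator implements IInternodeAuthenticator
    {
        // Hypothetical dc2 addresses to lock out.
        private static final Set<String> BLOCKED =
            new HashSet<>(Arrays.asList("10.2.0.10", "10.2.0.11"));

        public boolean authenticate(InetAddress remoteAddress, int remotePort)
        {
            return !BLOCKED.contains(remoteAddress.getHostAddress());
        }

        public void validateConfiguration() throws ConfigurationException
        {
            // Nothing to validate in this sketch.
        }
    }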
If you used a property file snitch (NOT GPFS, plain old PFS), you could remove 
the other datacenter from each topology file.
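With PFS that just means each surviving node's cassandra-topology.properties 
lists only its own side, e.g. (addresses hypothetical):

    # dc1's topology file: dc2's nodes simply aren't listed
    10.1.0.10=dc1:rack1
    10.1.0.11=dc1:rack1
    default=dc1:rack1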
You can use a firewall.
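For example, with iptables on each dc1 node (subnet hypothetical), dropping 
internode traffic on the storage ports:

    # block dc2's subnet on the storage port (7000) and its SSL variant (7001)
    iptables -A INPUT -s 10.2.0.0/16 -p tcp -m multiport --dports 7000,7001 -j DROP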

You'll definitely need to change seeds, and you'll probably need to stop most 
of the nodes all at once to make this work.
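
Concretely, seeds live in the seed_provider section of cassandra.yaml; after 
the split, each side should list only its own nodes, something like (addresses 
hypothetical):

    seed_provider:
      - class_name: org.apache.cassandra.locator.SimpleSeedProvider
        parameters:
          - seeds: "10.1.0.10,10.1.0.11"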

> 
> Somehow Cassandra holds on to the information about who was in the cluster.
> 

system.peers keeps a list of all members of the cluster. The system keyspace 
uses LocalStrategy (not replicated), so each node has its own copy, so IF you 
try hacking at it, you MUST hack at it on all nodes, basically at the same 
time, because otherwise they'll repopulate each other via gossip.
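
If you do go that (unsupported) route, the hack is, on every surviving node 
via cqlsh, something like this (address hypothetical):

    -- repeat for each node that belongs to the other datacenter
    DELETE FROM system.peers WHERE peer = '10.2.0.10';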

> What if I also changed the cluster name in the cassandra.yaml before 
> restarting?
> 

Changing the name is very difficult (decidedly nontrivial).
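
For the record, the usual (unsupported) dance goes roughly like this on every 
node, which should give you a sense of why it's painful:

    1. via cqlsh: UPDATE system.local SET cluster_name = 'NewName' WHERE key = 'local';
    2. nodetool flush system
    3. Edit cluster_name in cassandra.yaml to match, then restart the node.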


Given that you need to stop all the nodes to do this (downtime either way), 
I'd be pretty tempted to tell you to nuke one of the datacenters entirely and 
use sstableloader to repopulate it as a brand new cluster. That is: don't try 
to split the cluster in half; kill one half, stand up a new cluster in its 
place, and use sstableloader to repopulate it quickly.
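
A sstableloader run looks something like this (path and addresses 
hypothetical): you point it at one table's SSTable directory and at live nodes 
in the target cluster, once per table:

    sstableloader -d 10.1.0.10,10.1.0.11 \
        /var/lib/cassandra/data/my_keyspace/my_table-<table_id>/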

