In my case, there were authentication issues when adding a datacenter.

I was using a PasswordAuthenticator.

As soon as the datacenter was added, the following authentication error was 
recorded in the client log file.

com.datastax.driver.core.exceptions.AuthenticationException: Authentication 
error on host /xxx.xxx.xxx.xx:9042: Provided username apm and/or password are 
incorrect

I was using DCAwareRoundRobinPolicy, and I suspect the withUsedHostsPerRemoteDc 
option was the cause: with it, the driver also opens (and authenticates) 
connections to hosts in the remote datacenter, apparently before the credentials 
in system_auth had been replicated there.
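
For reference, my driver configuration looked roughly like this (datacenter 
name, host count and password are placeholders; Java driver 3.x):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;

    Cluster cluster = Cluster.builder()
        .addContactPoint("xxx.xxx.xxx.xx")
        .withCredentials("apm", "<password>")
        .withLoadBalancingPolicy(
            DCAwareRoundRobinPolicy.builder()
                .withLocalDc("dc1")            // route queries to the local dc
                .withUsedHostsPerRemoteDc(2)   // but also connect to 2 hosts per remote dc
                .build())
        .build();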

I took several steps and the error log disappeared. What most likely fixed it 
was running 'nodetool rebuild' after altering the replication of the 
system_auth keyspace to cover the new datacenter.
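
For anyone hitting the same problem, the steps that seemed to do it were 
roughly the following (datacenter names and replication factors are only an 
example):

    cqlsh> ALTER KEYSPACE system_auth WITH replication =
       ... {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 3};

    # then, on every node of the new datacenter, stream the auth
    # data over from the old datacenter:
    $ nodetool rebuild -- dc1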

However, the procedure does not seem to be clearly documented anywhere.


> On 18 Sep 2018, at 2:40 AM, Pradeep Chhetri <[email protected]> wrote:
> 
> Hello Alain,
> 
> Thank you very much for reviewing it. Your answer on seed nodes cleared my 
> doubts. I will update it as per your suggestion.
> 
> I have a few follow-up questions on decommissioning the old datacenter:
> 
> - Do I need to run 'nodetool repair -full' on each of the nodes (old + new dc 
> nodes) before starting the decommissioning process of the old dc?
> - We have around 15 apps using the Cassandra cluster. Before the new 
> datacenter goes live, I want to make sure that all queries are using the 
> right consistency level, i.e. LOCAL_QUORUM instead of QUORUM. Is there a way 
> I can log the consistency level of each query in some log file?
> 
> Regards,
> Pradeep
> 
> On Mon, Sep 17, 2018 at 9:26 PM, Alain RODRIGUEZ <[email protected]> wrote:
> Hello Pradeep,
> 
> It looks good to me and it's a cool runbook for you to follow and for others 
> to reuse.
> 
> You wrote:
> 
> "To make sure that cassandra nodes in one datacenter can see the nodes of the 
> other datacenter, add the seed node of the new datacenter in any of the old 
> datacenter’s nodes and restart that node."
> 
> Nodes in one datacenter seeing the nodes of the other is not related to 
> seeds. It is indeed recommended to use seeds from all datacenters (a couple 
> or 3 per datacenter), though. I guess it's to increase the availability of 
> seed nodes and/or maybe to make sure local seeds are available.
> 
> You can perfectly well (and even have to) add your second datacenter's nodes 
> using seeds from the first datacenter. A bootstrapping node should never be 
> in the list of seeds unless it's the first node of the cluster. Add the nodes 
> first, then make them seeds.
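> 
> As an illustration (addresses are placeholders), once the nodes of both 
> datacenters are up, the seed list in cassandra.yaml on every node could end 
> up looking like this, with a couple of seeds from each datacenter:
> 
>     seed_provider:
>         - class_name: org.apache.cassandra.locator.SimpleSeedProvider
>           parameters:
>               - seeds: "10.0.1.1,10.0.1.2,10.1.1.1,10.1.1.2"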
> 
> 
> On Mon, 17 Sep 2018 at 11:25, Pradeep Chhetri <[email protected]> wrote:
> Hello everyone,
> 
> Can someone please help me validate the steps I am following to migrate the 
> Cassandra snitch?
> 
> Regards,
> Pradeep
> 
> On Wed, Sep 12, 2018 at 1:38 PM, Pradeep Chhetri <[email protected]> wrote:
> Hello
> 
> I am running a 5-node Cassandra 3.11.3 cluster on AWS with SimpleSnitch. In 
> my preprod environment, I tested the process of migrating to 
> GossipingPropertyFileSnitch (GPFS), using the AWS region as the datacenter 
> name and the AWS zone as the rack name, and was able to achieve it.
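> 
> For example (region and zone names are illustrative), a node in zone 
> eu-west-1a of region eu-west-1 gets a cassandra-rackdc.properties like:
> 
>     dc=eu-west-1
>     rack=eu-west-1a
> 
> together with endpoint_snitch: GossipingPropertyFileSnitch in cassandra.yaml.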
> 
> But before decommissioning the older datacenter, I want to verify that the 
> data in the newer dc is consistent with the data in the older dc. Is there an 
> easy way to do that?
> 
> Do you suggest running a full repair before decommissioning the nodes of the 
> older datacenter?
> 
> I am using the steps documented here: https://medium.com/p/465e9bf28d99 and I 
> will be very happy if someone can confirm that I am following the right steps.
> 
> Regards,
> Pradeep
