Hi all,

We currently have a cluster in GCE for one of our customers.
They want it migrated to AWS.

I have set up one node in AWS to join the cluster by following:
https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_add_dc_to_cluster_t.html

Will add more nodes once the first one joins successfully.

The node in AWS has an elastic IP, which is whitelisted for ports
7000-7001, 7199, and 9042 in the GCE firewall.
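
For reference, the GCE firewall rule is along these lines (rule name
and source IP are placeholders for the actual values):

    gcloud compute firewall-rules create allow-cassandra-from-aws \
        --allow=tcp:7000-7001,tcp:7199,tcp:9042 \
        --source-ranges=xx.xx.xx.xx/32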

The snitch is set to GossipingPropertyFileSnitch. The GCE setup has dc=DC1,
rack=RAC1, while on AWS I changed the DC to dc=DC2.
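
Concretely, cassandra-rackdc.properties on the two sides (the rack on
the AWS node is left at RAC1):

    # GCE nodes: conf/cassandra-rackdc.properties
    dc=DC1
    rack=RAC1

    # AWS node: conf/cassandra-rackdc.properties
    dc=DC2
    rack=RAC1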

When I start the Cassandra service on the AWS instance, I see version
handshake messages in the logs trying to connect to the public IPs of the
GCE nodes:
    OutboundTcpConnection.java:496 - Handshaking version with /xx.xx.xx.xx

However, the nodetool status output on both sides doesn't show the other
side at all. That is, the GCE setup doesn't show the new DC (dc=DC2) and
the AWS setup doesn't show the old DC (dc=DC1).
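
To illustrate, nodetool status on a GCE node shows only the local DC,
roughly like this (addresses and details elided):

    $ nodetool status
    Datacenter: DC1
    ===============
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address       Load   Tokens  Owns  Host ID  Rack
    UN  10.xx.xx.xx   ...    ...     ...   ...      RAC1

There is no "Datacenter: DC2" section at all, and vice versa on the AWS
side.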

In the cassandra.yaml file, I'm only using the listen_interface and
rpc_interface settings, with no explicit IP addresses, so each node ends
up using its internal private IP.
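
The relevant part of cassandra.yaml on both sides is just this (the
interface name here is illustrative):

    # cassandra.yaml - interface-based settings only, no explicit addresses
    listen_interface: eth0
    rpc_interface: eth0
    # listen_address, broadcast_address, rpc_address are all left unset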

Do I need to explicitly set broadcast_address on both sides?
Would that require restarting the Cassandra service on the GCE side, or is
it possible to change that setting on the fly without a restart?

I would prefer a non-restart option.
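
For clarity, this is the kind of change I'm imagining on the AWS node
(the IP is a placeholder for the elastic IP):

    # cassandra.yaml on the AWS node - is this what's needed?
    listen_interface: eth0              # still binds the private IP
    broadcast_address: xx.xx.xx.xx      # the node's elastic/public IP

and presumably the mirror-image change (broadcast_address set to each
node's public IP) on the GCE side.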

PS: The Cassandra version running in GCE is 2.1.18 while the new node in
AWS is running 2.1.20, just in case that's relevant.

Thanks,
Kunal
