On Sun, Mar 11, 2018 at 10:31 PM, Kunal Gangakhedkar <kgangakhed...@gmail.com> wrote:
> Hi all,
>
> We currently have a cluster in GCE for one of our customers, and they
> want it migrated to AWS.
>
> I have set up one node in AWS to join the cluster by following:
> https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_add_dc_to_cluster_t.html
>
> I will add more nodes once the first one joins successfully.
>
> The node in AWS has an Elastic IP, which is whitelisted for ports
> 7000-7001, 7199 and 9042 in the GCE firewall.
>
> The snitch is set to GossipingPropertyFileSnitch. The GCE setup has
> dc=DC1, rack=RAC1, while on AWS I changed the DC to dc=DC2.
>
> When I start the cassandra service on the AWS instance, I see version
> handshake messages in the logs trying to connect to the public IPs of
> the GCE nodes:
>
>   OutboundTcpConnection.java:496 - Handshaking version with /xx.xx.xx.xx
>
> However, the nodetool status output on both sides doesn't show the
> other side at all. That is, the GCE setup doesn't show the new DC
> (dc=DC2) and the AWS setup doesn't show the old DC (dc=DC1).
>
> In cassandra.yaml I'm only using the listen_interface and
> rpc_interface settings - no explicit IP addresses - so the nodes end
> up using their internal private IP ranges.
>
> Do I need to explicitly add the broadcast_address?

On the AWS side you could use Ec2MultiRegionSnitch: it sets the
broadcast address to the appropriate address (the Elastic IP) and also
derives the DC and rack from the EC2 region and Availability Zone.

> For both sides?

I would expect that you have to specify a proper broadcast_address on
the GCE side as well. A rough sketch of what I mean is at the bottom of
this mail.

> Would that require restarting the cassandra service on the GCE side?
> Or is it possible to change that setting on the fly, without a
> restart?

A restart is required, AFAIK.

--
Alex
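Rough sketch of the settings discussed above. The addresses, interface
name and seed list below are placeholders, not values from your
cluster, so adjust them to your own nodes:

  # cassandra.yaml on a GCE node (keeping GossipingPropertyFileSnitch)
  endpoint_snitch: GossipingPropertyFileSnitch
  listen_interface: eth0            # gossip still binds to the private interface
  broadcast_address: 203.0.113.10   # this node's public IP, so the AWS DC can reach it

  # cassandra.yaml on the AWS node, if you switch it to Ec2MultiRegionSnitch
  endpoint_snitch: Ec2MultiRegionSnitch
  listen_address: 10.0.0.5          # private IP of the instance; the snitch itself
                                    # sets broadcast_address to the public (Elastic) IP
  seed_provider:
      - class_name: org.apache.cassandra.locator.SimpleSeedProvider
        parameters:
            # in a multi-region / multi-cloud setup the seeds should be
            # public IPs reachable from both sides
            - seeds: "203.0.113.10"

As far as I remember, with Ec2MultiRegionSnitch the data center name of
the AWS nodes comes from the region (e.g. us-east) rather than from
cassandra-rackdc.properties, so the keyspace replication settings would
have to use that name instead of DC2.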