The nodes in DC1 need to be able to reach the nodes in DC2 on the public
(NAT'd) IP.
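As a sketch of what that looks like in practice, each node's cassandra.yaml would bind to its private address but advertise the public one (the addresses below are made up for illustration; broadcast_address should be the public, NAT'd IP that the remote DC can actually reach):

```yaml
# Hypothetical conf/cassandra.yaml fragment on a DC1 node
listen_address: 10.0.1.5         # private address the node binds to locally
broadcast_address: 203.0.113.5   # public (NAT'd) address advertised via gossip
```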
Others may be able to provide more details.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 27/06/2012, at 9:51 PM, Andras Szerdahelyi wrote:
> Aaron,
>
>> The broadcast_address allows a node to broadcast an address that is different
>> to the ones it's bound to on the local interfaces
>> https://github.com/apache/cassandra/blob/trunk/conf/cassandra.yaml#L270
>
> Yes, and that's not where the problem is IMO. If you broadcast your translated
> address (
> Setting up a Cassandra ring across NAT (without a VPN) is impossible in my
> experience.
The broadcast_address allows a node to broadcast an address that is different
to the ones it's bound to on the local interfaces
https://github.com/apache/cassandra/blob/trunk/conf/cassandra.yaml#L270
Hi Andras,
I am not using a VPN. The system had been running successfully in this
configuration for a couple of weeks until I noticed that repair was not
working.
What happens is that I configure iptables on each Cassandra node to forward
packets that are sent to any of the IPs
The DCs are communicating over a gateway where I do NAT for ports 7000, 9160
and 7199.
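On the gateway, DNAT rules for that kind of port forwarding might look roughly like the following (a sketch only: the interface name, public IP 203.0.113.5, and private node IP 10.0.1.5 are hypothetical, and each node behind the gateway would need its own public address or port mapping):

```shell
# Hypothetical gateway NAT rules (require root); eth0 is the public interface.
for port in 7000 9160 7199; do
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport "$port" \
        -j DNAT --to-destination "10.0.1.5:$port"
done
# Rewrite the source address on replies leaving the private network
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```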
Ah, that sounds familiar. You don't mention if you are VPN'd or not. I'll
assume you are not.
So, your nodes are behind network address translation. Is that to say they
advertise (broadcast) their inter
Hello everyone,
I have a 2 DC (DC1: 3 nodes, DC2: 6 nodes) Cassandra 1.0.7 setup. I have about
300GB/node in DC2.
The DCs are communicating over a gateway where I do NAT for ports 7000,
9160 and 7199.
I did a "nodetool repair" on a node in DC2 without any external load on
the system.
It took 5 hrs