Hey Renato,

As far as I can tell, the reason you're getting private IP addresses back is that the node you connect to relays the addresses that _it_ uses to reach the other nodes, which come from gossip state. This is expected behavior.
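If you do have to live with the mixed setup for a while, note that the Java driver can also rewrite the addresses it learns from gossip before it opens connections. Here's a minimal sketch of the mapping side, under the assumption that you maintain a private-to-public table yourself (the class name and example IPs are made up; the driver wiring mentioned in the comments is where this logic would actually go):

```java
import java.net.InetSocketAddress;
import java.util.Map;

// Sketch of the mapping logic only; class name and IPs are illustrative.
// With the 2.x Java driver you would put this logic in an implementation of
// com.datastax.driver.core.policies.AddressTranslater (renamed
// AddressTranslator in 3.x) and register it via
// Cluster.builder().withAddressTranslater(...), so the driver rewrites the
// gossiped private addresses before creating connection pools.
class PrivateToPublicTranslator {
    private final Map<String, String> privateToPublic;

    PrivateToPublicTranslator(Map<String, String> privateToPublic) {
        this.privateToPublic = privateToPublic;
    }

    InetSocketAddress translate(InetSocketAddress address) {
        String publicIp = privateToPublic.get(address.getHostString());
        // No mapping known: fall back to the address gossip handed us.
        return publicIp == null
                ? address
                : new InetSocketAddress(publicIp, address.getPort());
    }
}
```

The obvious catch in your case is that without Elastic IPs the public side of that table changes on every reboot, which is another argument for just requesting the quota increase.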
Mixed private/public IP spaces without full connectivity between the two address spaces are always going to be a pain, IMHO; you're much better off standardizing on one or the other. It sounds like you have a machine (maybe a dev machine?) outside of EC2 trying to reach your cluster. If this is just for development, the "easiest" approach is to standardize on public IPs; Elastic IPs are free as long as they're in use, so I would just request a quota increase. If this is a production configuration (maybe another datacenter?), you'll probably want to investigate a more robust routing solution.

We have two datacenters with distinct, non-intersecting private IP spaces; we use a VPN to route between them. Clients on both sides of the tunnel can natively reach the private IPs on the other side, which eliminates odd NAT issues. From what I've seen on the list, this is a somewhat common configuration.

Hope this helps!

--Bryan

On Wed, Sep 30, 2015 at 7:24 AM, Renato Perini <renato.per...@gmail.com> wrote:

> Hello!
> I have configured a small cluster composed of three nodes on Amazon EC2.
> The 3 machines don't have an Elastic IP (static address), so the public
> address changes at every reboot.
>
> I have a machine with a static IP that I use as a bridge to access the
> other 3 Cassandra nodes through SSH. On this machine, I have set up a
> tunnel towards the first node of the cluster in order to open port 9042
> and let me access the cluster through this static IP.
>
> Basically, my cassandra.yaml has these settings:
> listen_address: private IP
> broadcast_address: commented out
> rpc_address: 0.0.0.0
> broadcast_rpc_address: private IP
>
> I know I should set the broadcast address to the public IP, but it is
> dynamic, and I have no idea at the moment how I could determine it and
> set it in the cassandra.yaml file.
>
> I'm developing a small client using the DataStax connector (in Java).
> I set up the contact point using the public IP of the bridge machine. The
> client connects but gives some errors while adding the other nodes in the
> cluster:
>
> 15:43:26,887 ERROR [com.datastax.driver.core.Session] (cluster1-nio-worker-1) Error creating pool to /XXX.XX.XX.XXX:9042:
> com.datastax.driver.core.TransportException: [/XXX.XX.XX.XXX:9042] Cannot connect
>     at com.datastax.driver.core.Connection$1.operationComplete(Connection.java:156) [cassandra-driver-core-2.2.0-rc3.jar:]
>     at com.datastax.driver.core.Connection$1.operationComplete(Connection.java:139) [cassandra-driver-core-2.2.0-rc3.jar:]
>     at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680) [netty-common-4.0.27.Final.jar:4.0.27.Final]
>     at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603) [netty-common-4.0.27.Final.jar:4.0.27.Final]
>     at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563) [netty-common-4.0.27.Final.jar:4.0.27.Final]
>     at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424) [netty-common-4.0.27.Final.jar:4.0.27.Final]
>     at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:214) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>     at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38) [netty-common-4.0.27.Final.jar:4.0.27.Final]
>     at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120) [netty-common-4.0.27.Final.jar:4.0.27.Final]
>     at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357) [netty-common-4.0.27.Final.jar:4.0.27.Final]
>     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>     at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) [netty-common-4.0.27.Final.jar:4.0.27.Final]
>     at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_80]
> Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /XXX.XX.XX.XXX:9042
>     at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:212) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>     ... 6 more
>
> 15:43:26,887 ERROR [com.datastax.driver.core.Session] (cluster1-nio-worker-3) Error creating pool to /XXX.XX.XX.XX:9042:
> com.datastax.driver.core.TransportException: [/XXX.XX.XX.XX:9042] Cannot connect
>     at com.datastax.driver.core.Connection$1.operationComplete(Connection.java:156) [cassandra-driver-core-2.2.0-rc3.jar:]
>     at com.datastax.driver.core.Connection$1.operationComplete(Connection.java:139) [cassandra-driver-core-2.2.0-rc3.jar:]
>     at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680) [netty-common-4.0.27.Final.jar:4.0.27.Final]
>     at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603) [netty-common-4.0.27.Final.jar:4.0.27.Final]
>     at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563) [netty-common-4.0.27.Final.jar:4.0.27.Final]
>     at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424) [netty-common-4.0.27.Final.jar:4.0.27.Final]
>     at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:214) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>     at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38) [netty-common-4.0.27.Final.jar:4.0.27.Final]
>     at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120) [netty-common-4.0.27.Final.jar:4.0.27.Final]
>     at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357) [netty-common-4.0.27.Final.jar:4.0.27.Final]
>     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>     at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) [netty-common-4.0.27.Final.jar:4.0.27.Final]
>     at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_80]
> Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /XXX.XX.XX.XX:9042
>     at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:212) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>     ... 6 more
>
> This is because the driver resolves the local IP address for the other
> nodes, I think.
> So, how can I solve this problem?
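P.S. For reference, if you do go the Elastic IP route, the per-node cassandra.yaml settings would look roughly like this (a sketch only; the addresses are made-up placeholders, substitute each node's real ones):

```yaml
listen_address: 10.0.0.1          # this node's private IP; inter-node traffic stays private
# broadcast_address: can stay commented out within a single region
rpc_address: 0.0.0.0
broadcast_rpc_address: 54.203.0.113   # this node's Elastic IP, so clients outside EC2 get a routable address
```

The key line for your symptom is broadcast_rpc_address: that's the address each node advertises to drivers, so once it's a stable public IP your client should stop receiving unreachable private addresses.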