RE: Looking for a fully working AWS multi DC configuration.

2013-06-13 Thread Dan Kogan
Do you open access for all these nodes one by one in every Security Group in each region every time you add a node, or did you manage to automate it somehow?

2013/6/5 Dan Kogan
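The automation question above can be sketched with the AWS CLI: loop over every region and authorize the new node's public IP in that region's Cassandra security group. Everything here is an illustrative assumption, not from the thread: the group name (`cassandra`), the region list, and the node IP (a TEST-NET placeholder). The commands are only printed, not executed.

```shell
#!/bin/sh
# Dry-run sketch: print the AWS CLI calls that would open the storage
# ports for a newly added node in every region's security group.
# Group name, regions, and IP are illustrative assumptions.
NEW_NODE_IP="203.0.113.10"      # placeholder (TEST-NET-3)
REGIONS="us-east-1 us-west-2"   # assumed region list

CMDS=$(
  for region in $REGIONS; do
    for port in 7000 7001; do   # storage / SSL storage port
      echo "aws ec2 authorize-security-group-ingress" \
           "--region $region --group-name cassandra" \
           "--protocol tcp --port $port --cidr ${NEW_NODE_IP}/32"
    done
  done
)
echo "$CMDS"
```

Wrapping this in whatever provisioning tool adds the node (so the rule is created at launch time) avoids the per-node manual step the question describes.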

RE: Looking for a fully working AWS multi DC configuration.

2013-06-05 Thread Dan Kogan
Hi,

We are using a very similar configuration. From our experience, Cassandra nodes in the same DC need access over both public and private IP on the storage port (7000/7001). Nodes from the other DC will need access over public IP on the storage port. All Cassandra nodes also need access over the …
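As a rough sketch, the rule set described above (storage port open to same-DC private and public addresses, plus other-DC public addresses) could be enumerated like this. All CIDRs and IPs are placeholders from the RFC 5737 test ranges, not the poster's actual network; the rules are printed, not applied.

```shell
#!/bin/sh
# Dry-run sketch of the security-group rule set described above.
# All addresses are placeholders, not the poster's network.
SAME_DC_PRIVATE="10.0.0.0/16"                      # same-DC private network
SAME_DC_PUBLIC="203.0.113.11/32 203.0.113.12/32"   # same-DC public IPs
OTHER_DC_PUBLIC="198.51.100.21/32"                 # other-DC public IPs

RULES=$(
  for cidr in $SAME_DC_PRIVATE $SAME_DC_PUBLIC $OTHER_DC_PUBLIC; do
    for port in 7000 7001; do   # storage / SSL storage port
      echo "allow tcp $port from $cidr"
    done
  done
)
echo "$RULES"
```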

RE: Node went down and came back up

2013-05-06 Thread Dan Kogan
…, 2013 at 6:20 AM, Dan Kogan wrote:
> It seems that we did not have the JMX ports (1024+) opened in our firewall.
> Once we opened ports 1024+ the hinted handoffs completed and it seems that
> the cluster went back to normal.
> Does that make sense?

No, JMX should not be required …
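One plausible explanation for why opening ports 1024+ appeared to help: JMX itself listens on a single port (7199 by default), but the RMI layer behind it chooses a random ephemeral port unless it is pinned. A hedged sketch of the relevant cassandra-env.sh lines follows; the values are assumptions, and pinning the RMI port requires a JVM of JDK 7u4 or later, which is an assumption about this cluster.

```shell
# Sketch of cassandra-env.sh settings (values assumed, not from the thread).
# JMX listens on 7199, but RMI picks a random ephemeral port (1024+)
# unless pinned -- which is why opening 1024+ appeared to fix things.
JMX_PORT="7199"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
# Pin the RMI port to the same value so only 7199 needs to be reachable
# (requires JDK 7u4+; an assumption about this cluster's JVM):
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
```

With the RMI port pinned, the firewall only needs 7199 open instead of the whole ephemeral range.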

RE: Node went down and came back up

2013-05-06 Thread Dan Kogan
…HintsColumnFamily/system-HintsColumnFamily-he-9-Data.db')]
INFO [HintedHandoff:1] 2013-05-05 14:52:43,419 HintedHandOffManager.java (line 390) Finished hinted handoff of 7945 rows to endpoint /107.20.45.6

-----Original Message-----
From: Dan Kogan [mailto:d...@iqtell.com]
Sent: Sunday, May 05, 2013 …

Node went down and came back up

2013-05-05 Thread Dan Kogan
… hints to /67.202.15.178; aborting further deliveries
INFO [HintedHandoff:1] 2013-05-05 11:22:43,348 HintedHandOffManager.java (line 390) Finished hinted handoff of 0 rows to endpoint /67.202.15.178

Do we need to run repair on all nodes to get the cluster back to "normal" state?

Thanks for the help.
Dan Kogan
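A minimal sketch of the checks that question implies, printed as a dry run rather than executed. The hostname is a placeholder, and `nodetool status` assumes Cassandra 1.2+ (older releases used `nodetool ring`).

```shell
#!/bin/sh
# Dry-run sketch: post-recovery checks, printed rather than executed.
# Hostname is a placeholder; `status` assumes Cassandra 1.2+.
HOST="node1.example.com"

CHECKS=$(
  echo "nodetool -h $HOST status"      # every node should report UN (Up/Normal)
  echo "nodetool -h $HOST tpstats"     # look for a drained HintedHandoff queue
  echo "nodetool -h $HOST repair -pr"  # primary-range repair, run on each node
)
echo "$CHECKS"
```

Running `repair -pr` on each node in turn repairs every primary range exactly once, which is cheaper than a full `repair` on all nodes.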