Hi

I have a cluster of 3 nodes running Cassandra v1.2 with num_tokens set to 256, 
hosted on EC2. When I installed the cluster, I brought up one node with its own 
IP as the seed. The next 2 nodes had the first one as their seed. A 'nodetool 
status' shows all 3 nodes up and running. The replication factor is 3.

I then modified the seed list on all 3 nodes to contain all 3 IPs, so that any 
one of them can see the others in case of a restart.
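
For reference, the seed_provider section in cassandra.yaml now looks roughly 
like this on all 3 nodes (the IPs below are just placeholders for the actual 
node addresses):

    seed_provider:
      - class_name: org.apache.cassandra.locator.SimpleSeedProvider
        parameters:
          - seeds: "10.0.0.1,10.0.0.2,10.0.0.3"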

Now, one of the nodes dies, and when I bring it back up, it doesn't join the 
cluster again, but becomes its own node/cluster. I can't get it to rejoin the 
cluster, even after doing a 'removenode' and clearing all data.
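
In case it matters, this is roughly what I did to remove and wipe it before 
retrying (the host ID is a placeholder, and the paths are the default package 
locations):

    nodetool removenode <host-id-of-the-dead-node>

    # and on the dead node itself, with Cassandra stopped:
    sudo rm -rf /var/lib/cassandra/data/* \
                /var/lib/cassandra/commitlog/* \
                /var/lib/cassandra/saved_caches/*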

I decide to terminate the node and launch a new one. This new one acts a bit 
weird as well. It has the 2 remaining nodes as seeds. When I do a 'status', it 
only shows the 2 live nodes (same output on all 3 nodes), but I can see from 
'netstats' that it's joining and receiving data, though only from one of them. 
When it's done streaming, it shows up correctly in 'status'.

Then I start looking into the system.peers table, and something doesn't seem 
right.
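
For reference, I'm looking at it with a query along these lines, run from 
cqlsh on each node:

    SELECT peer, tokens FROM system.peers;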

Node A has the other 2 nodes listed, but no tokens.
Node B has the other 2 nodes listed, but only tokens on one of them.
Node C has the other 2 nodes listed, and tokens for both of them.

Furthermore, after I replace the failed node, the old node still remains in the 
system.peers table, with no tokens.

So my questions are:

1. Is this the correct way to boot/maintain a cluster?
2. Isn't the old node supposed to be removed from system.peers when I do a 
'removenode'?
3. Shouldn't the system.peers table be the same across the 3 nodes, e.g. A has 
B & C, B has A & C, and C has A & B?

-- 
Sincerely,

Nicolai Gylling
DevOps Engineer
www.issuu.com

A Time.com 'Best Website'
blog.issuu.com | twitter.com/issuu
