Hi,

I'm running Cassandra (0.7.4) on a 4-node ring.  It was a 3-node ring, but we 
ended up expanding it to 4... So then I followed the many suggestions to 
rebalance the ring.  I found a script that suggested I use:

# ~/nodes_calc.py 
How many nodes are in your cluster? 4
node 0: 0
node 1: 42535295865117307932921825928971026432
node 2: 85070591730234615865843651857942052864
node 3: 127605887595351923798765477786913079296
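(For reference, as far as I can tell the tokens it printed are just i * 2**127 / N, evenly spaced around the RandomPartitioner token space.  I don't have the original script handy, but something like this reproduces the output above:)

```python
# Evenly spaced RandomPartitioner tokens; the token space is 0 .. 2**127.
# This is my reconstruction of what nodes_calc.py computes, not the
# original script itself.
def tokens(num_nodes):
    step = 2 ** 127 // num_nodes
    return [i * step for i in range(num_nodes)]

for i, t in enumerate(tokens(4)):
    print("node %d: %d" % (i, t))
```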

So I started to migrate each node to those tokens.

I have my replication factor set to 2, so I guess I was expecting to be able to 
continue to use this ring without issues.  But it seems that the node still 
accepts writes while it's decommissioning?  I say this because if I interrupt 
the decommission by stopping Cassandra and starting it again, it appears to 
replay several commit logs.  And as soon as it's through with those commit 
logs, I no longer get consistency issues.

The issue I'm seeing is that writes to this ring will succeed, but it's 
possible for a subsequent read to return an older object.  For several minutes 
even.

I'm not sure if I did something wrong... I'm learning as I go here, and this list 
archive has been very useful.  But is there any way I can rebalance the ring 
and get better consistency?

Thanks,
Ryan
