Thanks Maki,

That makes sense given my symptoms...  I was using CL=ONE for writes and 
CL=ALL for reads, expecting that to be sufficient.  

I will try setting both to ALL and see if I get better consistency.
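
For my own notes, here's a rough sketch of the overlap rule I'm using to
reason about this: a read is guaranteed to see the latest write when
R + W > RF, assuming a stable ring (while a node is moving or
decommissioning, the replica placement is in flux). The script below is
purely illustrative, not anything from Cassandra itself:

# cl_check.py (illustrative only, Python)

def quorum(rf):
    # Cassandra's QUORUM is floor(rf / 2) + 1 replicas
    return rf // 2 + 1

def overlaps(write_replicas, read_replicas, rf):
    # A read must hit a replica that took the latest write when R + W > RF
    return write_replicas + read_replicas > rf

RF = 2
print(overlaps(1, 1, RF))   # write ONE, read ONE  -> False, stale reads possible
print(overlaps(1, 2, RF))   # write ONE, read ALL  -> True
print(overlaps(2, 2, RF))   # write ALL, read ALL  -> True
print(quorum(RF) == RF)     # QUORUM with RF=2 needs both replicas, same as ALL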

-Ryan

On May 14, 2011, at 4:41 AM, Maki Watanabe wrote:

> It depends on which consistency level (CL) you use for each operation.
> Your RF is 2, so if you read and write with CL=ALL, your reads and
> writes will always be consistent. If you read with CL=ONE, you can get
> stale data at any time, decommission or not. CL=QUORUM with RF=2 is
> semantically identical to CL=ALL.
> 
> maki
> 
> 2011/5/13 Ryan Hadley <r...@sgizmo.com>:
>> Hi,
>> 
>> I'm running Cassandra (0.7.4) on a 4-node ring.  It was a 3-node ring, but 
>> we ended up expanding it to 4... So I followed the many suggestions to 
>> rebalance the ring.  I found a script that suggested I use:
>> 
>> # ~/nodes_calc.py
>> How many nodes are in your cluster? 4
>> node 0: 0
>> node 1: 42535295865117307932921825928971026432
>> node 2: 85070591730234615865843651857942052864
>> node 3: 127605887595351923798765477786913079296
>> 
>> So I started to migrate each node to those tokens.
>> 
>> I have my replication factor set to 2, so I was expecting to be able to 
>> keep using this ring without issues.  But it seems that a node still 
>> accepts writes while it's decommissioning?  I say this because if I 
>> interrupt the decommission by stopping Cassandra and starting it again, it 
>> appears to replay several commit logs.  And as soon as it's through with 
>> those commit logs, I no longer see consistency issues.
>> 
>> The issue I'm seeing is that writes to this ring succeed, but a 
>> subsequent read can return an older version of the object, sometimes for 
>> several minutes.
>> 
>> I'm not sure if I did something wrong... I'm learning as I go here, and 
>> this list archive has been very useful.  But is there any way I can 
>> rebalance the ring and keep better consistency?
>> 
>> Thanks,
>> Ryan
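
For reference, the token values from nodes_calc.py quoted above are just the
RandomPartitioner token range (0 to 2**127) split into equal quarters. A
minimal sketch of that calculation (written from scratch here, not the
original script):

# rough equivalent of the nodes_calc.py output above (Python)
# RandomPartitioner tokens are evenly spaced over 0 .. 2**127
nodes = 4
for i in range(nodes):
    print("node %d: %d" % (i, (i * 2 ** 127) // nodes))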
