Javier Canillas wrote:
> 
> HH is a form of write repair, so it has nothing to do with CL, which is a
> requirement of the operation; and it is not used for reads.
> 
> In your example QUORUM is the same as ALL, since RF = 1 (the coordinator is
> the only data holder). If that node fails, all reads and writes for its
> keys will fail.
> 
> Now, on another scenario, with RF = 3 and 1 node down:
> 
> CL = QUORUM. Will work, but the coordinator will store a hint (HH) for the
> write and attempt to replay it to the failed node for some time. Despite
> this, the operation will succeed for the client.
> CL = ALL. Will fail.
> CL = ONE. Will work; a hint is stored for the one failed replica, and the
> live replicas receive the update normally.
> 
> *Think of CL as the client's minimum requirement for an operation to
> succeed.* If the cluster can satisfy that requirement, the operation
> succeeds and a response is returned to the client (even though some HH
> work may remain to be done afterwards); if not, an error response is
> returned.
> 
> 
> On Thu, Feb 24, 2011 at 4:26 PM, mcasandra <mohitanch...@gmail.com> wrote:
> 
>>
>> Does HH count towards QUORUM? Say RF=1, CL of W=QUORUM, and the one node
>> that owns the key dies. Would subsequent write operations for that key
>> succeed? I am guessing they will not.
>> --
>> View this message in context:
>> http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Understand-eventually-consistent-tp6038330p6061593.html
>> Sent from the cassandra-u...@incubator.apache.org mailing list archive at
>> Nabble.com.
>>
> 
> 

Thanks! In the above scenario, what happens if 2 nodes die with RF=3 and CL
of W=QUORUM? Would a write succeed, since one write could be made to the
coordinator node with HH and another to the replica node that is up?

And similarly, would a read succeed in that scenario? Would HH count towards
CL in this case?
