Actually you can't. As explained in the wiki page linked:
"A hinted write does NOT count towards ConsistencyLevel requirements
of ONE, QUORUM, or ALL"

For CL.QUORUM, you do need a QUORUM of *replicas* to be alive to answer
the query. At RF=2, QUORUM=2, so no, you cannot take any node down, or
some quorum writes/reads will fail with an UnavailableException. And this
is not related to the number of nodes you have, only to the replication factor.
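To make the arithmetic concrete, here is a small sketch (not Cassandra
code; the function names are mine) of how quorum is derived from the
replication factor alone:

```python
def quorum(replication_factor: int) -> int:
    """Quorum is floor(RF / 2) + 1, a strict majority of replicas."""
    return replication_factor // 2 + 1

def quorum_available(replication_factor: int, live_replicas: int) -> bool:
    """A QUORUM read/write can only succeed if enough replicas are up."""
    return live_replicas >= quorum(replication_factor)

# RF=2: quorum is 2, so losing even one replica makes the request fail.
print(quorum(2), quorum_available(2, 1))   # 2 False
# RF=3: quorum is still 2, so one replica can be down.
print(quorum(3), quorum_available(3, 2))   # 2 True
```

Note that the cluster size never appears above; only the replication
factor matters.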

If you want to support having a node down, you need to have RF=3. For that
very reason, this is the minimum replication factor I would advise for a
production cluster.

--
Sylvain


On Sun, Nov 28, 2010 at 6:11 PM, Jake Luciani <jak...@gmail.com> wrote:
> If you read/write data with quorum then you can safely take a node down in
> this scenario.  Subsequent writes will use hinted handoff to be passed to
> the node when it comes back up.
> More info is here: http://wiki.apache.org/cassandra/HintedHandoff
>
> Does that answer your question?
> -Jake
>
> On Sun, Nov 28, 2010 at 9:42 AM, Ran Tavory <ran...@gmail.com> wrote:
>>
>> to me it makes sense that if hinted handoff is off, then cassandra cannot
>> satisfy 2 out of every 3 writes when one of the nodes is down, since that
>> node is a designated replica for 2/3 of the writes.
>> But I don't remember reading this somewhere. Does hinted handoff affect
>> David's situation?
>> (David, did you disable HH in your storage-config?
>> <HintedHandoffEnabled>false</HintedHandoffEnabled>)
>>
>> On Sun, Nov 28, 2010 at 4:32 PM, David Boxenhorn <da...@lookin2.com>
>> wrote:
>>>
>>> For the vast majority of my data usage eventual consistency is fine (i.e.
>>> CL=ONE) but I have a small amount of critical data for which I read and
>>> write using CL=QUORUM.
>>>
>>> If I have a cluster with 3 nodes and RF=2, does CL=QUORUM mean that a
>>> value can be read from or written to any 2 nodes, or does it have to be
>>> the particular 2 nodes that store the data? If it is the particular 2
>>> nodes that store the data, that means I can't even take down one node,
>>> since it will be the mandatory 2nd node for 1/3 of my data...
>>
>>
>>
>> --
>> /Ran
>
>
