And now, when I have one node down with no chance of bringing it back
anytime soon, can I still change RF to 3 and restore full functionality
of my cluster? Should I run 'nodetool repair', or will a simple
keyspace update suffice?
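
In case it helps, this is what I have in mind (the keyspace name is
just a placeholder, and the exact cassandra-cli syntax may differ
between versions):

  update keyspace MyKeyspace
    with placement_strategy = 'org.apache.cassandra.locator.NetworkTopologyStrategy'
    and strategy_options = [{DC1:3, DC2:3}];

followed by 'nodetool repair' on each surviving node, so that the new
replicas actually receive the data?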

On Fri, Sep 2, 2011 at 1:55 PM, Nate McCall <n...@datastax.com> wrote:
> Yes - you would need at least 3 replicas per data center to use
> LOCAL_QUORUM and survive a node failure.
>
> On Fri, Sep 2, 2011 at 3:51 PM, Oleg Tsvinev <oleg.tsvi...@gmail.com> wrote:
>> Do you mean I need to configure 3 replicas in each DC and keep using
>> LOCAL_QUORUM? In that case, if I'm following your logic, even if one
>> of the 3 goes down I'll still have 2 to ensure LOCAL_QUORUM succeeds?
>>
>> On Fri, Sep 2, 2011 at 1:44 PM, Nate McCall <n...@datastax.com> wrote:
>>> In your options, you have configured 2 replicas for each data center:
>>> Options: [DC2:2, DC1:2]
>>>
>>> If one of those replicas is down, then LOCAL_QUORUM will fail as there
>>> is only one replica left 'locally.'
>>>
>>>
>>> On Fri, Sep 2, 2011 at 3:35 PM, Oleg Tsvinev <oleg.tsvi...@gmail.com> wrote:
>>>> from http://www.datastax.com/docs/0.8/consistency/index:
>>>>
>>>> <A “quorum” of replicas is essentially a majority of replicas, or RF /
>>>> 2 + 1 with any resulting fractions rounded down.>
>>>>
>>>> I have RF=2, so the majority of replicas is 2/2 + 1 = 2, which I
>>>> still have after the 3rd node goes down?
>>>>
>>>> On Fri, Sep 2, 2011 at 1:22 PM, Nate McCall <n...@datastax.com> wrote:
>>>>> It looks like you only have 2 replicas configured in each data center?
>>>>>
>>>>> If so, LOCAL_QUORUM cannot be achieved with a host down, the same
>>>>> as with QUORUM on RF=2 in a single-DC cluster.
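>>>>>
>>>>> To spell out the arithmetic (integer division, i.e. any fraction
>>>>> is rounded down):
>>>>>
>>>>>   quorum(RF=2) = floor(2/2) + 1 = 2  -> every replica must be up
>>>>>   quorum(RF=3) = floor(3/2) + 1 = 2  -> one replica may be down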
>>>>>
>>>>> On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev <oleg.tsvi...@gmail.com> wrote:
>>>>>> I believe I don't quite understand semantics of this exception:
>>>>>>
>>>>>> me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
>>>>>> be enough replicas present to handle consistency level.
>>>>>>
>>>>>> Does it mean there *might be* enough?
>>>>>> Does it mean there *is not* enough?
>>>>>>
>>>>>> My case is as follows: I have 3 nodes, with the keyspace
>>>>>> configured like this:
>>>>>>
>>>>>> Replication Strategy: 
>>>>>> org.apache.cassandra.locator.NetworkTopologyStrategy
>>>>>> Durable Writes: true
>>>>>> Options: [DC2:2, DC1:2]
>>>>>>
>>>>>> Hector can only connect to nodes in DC1 and is configured to
>>>>>> neither see nor connect to nodes in DC2. Replication between
>>>>>> datacenters DC1 and DC2 is left to Cassandra itself, done
>>>>>> asynchronously. Each of the 6 nodes in total can see all of the
>>>>>> remaining 5.
>>>>>>
>>>>>> Inserts with LOCAL_QUORUM CL work fine when all 3 nodes are up.
>>>>>> However, this morning one node went down and I started seeing the
>>>>>> HUnavailableException: : May not be enough replicas present to handle
>>>>>> consistency level.
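>>>>>>
>>>>>> For reference, the writes go through Hector roughly like this
>>>>>> (cluster, keyspace and column family names below are
>>>>>> placeholders, not the real ones):
>>>>>>
>>>>>> import me.prettyprint.cassandra.model.ConfigurableConsistencyLevel;
>>>>>> import me.prettyprint.cassandra.serializers.StringSerializer;
>>>>>> import me.prettyprint.hector.api.*;
>>>>>> import me.prettyprint.hector.api.factory.HFactory;
>>>>>> import me.prettyprint.hector.api.mutation.Mutator;
>>>>>>
>>>>>> // Hector is only given the DC1 nodes to connect to:
>>>>>> Cluster cluster = HFactory.getOrCreateCluster("MyCluster",
>>>>>>         "dc1-node1:9160,dc1-node2:9160,dc1-node3:9160");
>>>>>>
>>>>>> // All writes are issued at LOCAL_QUORUM:
>>>>>> ConfigurableConsistencyLevel ccl = new ConfigurableConsistencyLevel();
>>>>>> ccl.setDefaultWriteConsistencyLevel(HConsistencyLevel.LOCAL_QUORUM);
>>>>>>
>>>>>> Keyspace ks = HFactory.createKeyspace("MyKeyspace", cluster, ccl);
>>>>>> Mutator<String> m = HFactory.createMutator(ks, StringSerializer.get());
>>>>>> m.insert("someKey", "MyColumnFamily",
>>>>>>         HFactory.createStringColumn("someColumn", "someValue"));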
>>>>>>
>>>>>> I believed that if I have 3 nodes and one goes down, the two
>>>>>> remaining nodes would be sufficient for my configuration.
>>>>>>
>>>>>> Please help me to understand what's going on.
>>>>>>
>>>>>
>>>>
>>>
>>
>
