LOCAL_QUORUM is computed over the replicas in the local DC, not over the
total number of nodes. Quorum is floor(RF/2) + 1, so with a replication
factor of 2 the quorum is 2, and even in a 10-node DC losing a single node
fails queries for the partitions it holds. With a replication factor of 3
the quorum is still 2, so you can lose one node and still satisfy the
query.
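
To make the arithmetic concrete, here is a minimal sketch in plain Python
(no driver needed; the function names are mine, not anything from Cassandra):

    def local_quorum(rf):
        # A quorum is a majority of the replicas in the local DC.
        return rf // 2 + 1

    def tolerable_down(rf):
        # How many local replicas can be down while still reaching quorum.
        return rf - local_quorum(rf)

    for rf in (2, 3, 5):
        print(rf, local_quorum(rf), tolerable_down(rf))
    # RF=2 -> quorum 2, tolerates 0 down
    # RF=3 -> quorum 2, tolerates 1 down
    # RF=5 -> quorum 3, tolerates 2 down

Note that the DC size never appears: only the replication factor matters.
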
Ryan Svihla <r...@foundev.pro> wrote on Thu, Mar 9, 2017 at 18:09:

> What are your keyspace replication settings, and what's your query?
>
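> You can read the replication settings straight out of system_schema; a
> quick sketch with the DataStax Python driver (the contact point and
> keyspace name are placeholders):
>
>     from cassandra.cluster import Cluster
>
>     session = Cluster(['10.0.0.1']).connect()
>     row = session.execute(
>         "SELECT replication FROM system_schema.keyspaces "
>         "WHERE keyspace_name = %s", ('my_keyspace',)).one()
>     print(row.replication)  # e.g. {'class': '...NetworkTopologyStrategy', 'DC1': '3', ...}
>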
> On Thu, Mar 9, 2017 at 9:32 AM, Shalom Sagges <shal...@liveperson.com>
> wrote:
>
> Hi Cassandra Users,
>
> I hope someone could help me understand the following scenario:
>
> Version: 3.0.9
> 3 nodes per DC
> 3 DCs in the cluster.
> Consistency level: LOCAL_QUORUM.
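>
> (For context, a keyspace spanning three DCs is defined along these lines;
> the keyspace name, DC names, and per-DC RFs below are illustrative, not my
> exact schema:)
>
>     from cassandra.cluster import Cluster
>
>     session = Cluster(['10.0.0.1']).connect()   # contact point is a placeholder
>     session.execute("""
>         CREATE KEYSPACE IF NOT EXISTS my_keyspace
>         WITH replication = {'class': 'NetworkTopologyStrategy',
>                             'DC1': 3, 'DC2': 3, 'DC3': 3}
>     """)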
>
> I did a small resiliency test and dropped a node to check the availability
> of the data.
> What I assumed would happen is nothing at all. If a node is down in a
> 3-node DC, LOCAL_QUORUM should still be satisfied.
> However, during the first ~10 seconds after stopping the service, I got
> timeout errors (tried both from the client and from cqlsh).
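>
> (The reads were issued at LOCAL_QUORUM explicitly; a minimal sketch of the
> client side, with the contact point, keyspace, table, and query as
> placeholders:)
>
>     from cassandra import ConsistencyLevel
>     from cassandra.cluster import Cluster
>     from cassandra.query import SimpleStatement
>
>     session = Cluster(['10.0.0.1']).connect('my_keyspace')
>     stmt = SimpleStatement("SELECT * FROM my_table WHERE id = %s",
>                            consistency_level=ConsistencyLevel.LOCAL_QUORUM)
>     rows = session.execute(stmt, (42,))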
>
> This is the error I get:
> ServerError: com.google.common.util.concurrent.UncheckedExecutionException:
> com.google.common.util.concurrent.UncheckedExecutionException:
> java.lang.RuntimeException: org.apache.cassandra.exceptions.ReadTimeoutException:
> Operation timed out - received only 4 responses.
>
>
> After ~10 seconds, the same query is successful with no timeout errors.
> The dropped node is still down.
>
> Any idea what could cause this and how to fix it?
>
> Thanks!
>
> --
>
> Thanks,
> Ryan Svihla
>
>
