> ...when it routed queries to the downed node? The key in the client is to
> keep working around the ring if the initial node is down.
>
> --Joe
>
> On Apr 9, 2011, at 12:52 PM, Vram Kouramajian wrote:
>
>> We have a five-node Cassandra cluster with the following configuration:
>>
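
Below is a minimal sketch of the failover pattern described above, assuming a
hypothetical host list and query() helper (the names are illustrative, not the
Hector API):

import java.util.Arrays;
import java.util.List;

// Illustrative sketch: walk the ring of known hosts and use the first one
// that responds. The query() call below is a stand-in for a real client read.
public class RingFailoverClient {
    private final List<String> hosts = Arrays.asList(
        "cass1:9160", "cass2:9160", "cass3:9160", "cass4:9160", "cass5:9160");

    public String readWithFailover(String rowKey) {
        Exception lastFailure = null;
        for (String host : hosts) {              // keep working around the ring
            try {
                return query(host, rowKey);      // first reachable node serves the read
            } catch (Exception e) {
                lastFailure = e;                 // node down or timed out; try the next one
            }
        }
        throw new RuntimeException("All nodes in the ring are unreachable", lastFailure);
    }

    // Hypothetical transport call; a real client would issue a Thrift get/get_slice here.
    private String query(String host, String rowKey) throws Exception {
        throw new UnsupportedOperationException("transport omitted in this sketch");
    }
}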
>
We have a five-node Cassandra cluster with the following configuration:
Cassandra Version: 0.6.11
Number of Nodes: 5
Replication Factor: 3
Client: Hector 0.6.0-14
Write Consistency Level: Quorum
Read Consistency Level: Quorum
Ring Topology: (nodetool ring output with Owns/Range/Ring columns; values truncated)
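
For reference, the quorum arithmetic for this configuration works out as
follows (a small sketch; QUORUM in Cassandra is floor(RF / 2) + 1):

// Quorum arithmetic for the cluster above (RF = 3, quorum reads and writes).
public class QuorumMath {
    public static void main(String[] args) {
        int replicationFactor = 3;                  // Replication Factor above
        int quorum = replicationFactor / 2 + 1;     // floor(3 / 2) + 1 = 2

        // R + W > RF means every quorum read overlaps at least one replica
        // that acknowledged the latest quorum write.
        boolean overlap = quorum + quorum > replicationFactor;   // 2 + 2 > 3
        System.out.println("QUORUM for RF=3 needs " + quorum + " replicas");
        System.out.println("Read/write overlap guaranteed: " + overlap);  // true
    }
}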
Thanks, Peter, for the reply. We are currently "fixing" our inconsistent
data (since we have the master data saved).
We will follow your suggestion and run the Node Repair tool more
often in the future. However, what happens to data inserted/deleted
after the Node Repair tool runs (i.e., between Node Repair runs)?
Thank you all for your assistance. It has been very helpful.
I have a few more questions:
1. If we change the write/delete consistency level to ALL, do we
eliminate the data inconsistency among nodes (since the delete
operations will apply to ALL replicas)?
2. My understanding is that "Read Repair"
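
To make the arithmetic behind question 1 concrete, here is a small sketch
(RF = 3 as in our cluster; the level names mirror Cassandra's, the rest is
illustrative):

// How many replicas must acknowledge each consistency level with RF = 3,
// and whether the operation can still succeed with one node down.
public class ConsistencyLevelMath {
    public static void main(String[] args) {
        int rf = 3;
        int one = 1;                  // ONE    -> 1 replica must ack
        int quorum = rf / 2 + 1;      // QUORUM -> 2 replicas must ack
        int all = rf;                 // ALL    -> every replica must ack

        int reachable = rf - 1;       // one replica of the row is down
        System.out.println("ONE    succeeds with one node down: " + (one <= reachable));     // true
        System.out.println("QUORUM succeeds with one node down: " + (quorum <= reachable));  // true
        System.out.println("ALL    succeeds with one node down: " + (all <= reachable));     // false
    }
}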
We are running a five-node cluster in production with a replication
factor of three. Queries against the 5 nodes return different results
(2 of the 5 nodes return extra columns for the same row key).
We are not sure of the root cause of the problem (possibly a
configuration issue?). Any suggestions?
Thanks
We have implemented a distributed queue (similar to AWS SQS) and a job
queue in Cassandra.
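
As a rough illustration of the general pattern only (a simplified in-memory
model of one queue row with time-ordered columns; the names are illustrative
and this is not the actual implementation):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentSkipListMap;

// Illustrative model: one queue is held the way a Cassandra row with
// time-ordered columns would be, using a sorted map as a stand-in.
public class QueueRowSketch {
    // column name (enqueue time in millis plus a UUID for uniqueness) -> job payload;
    // string ordering stands in for a TimeUUID column comparator
    private final ConcurrentSkipListMap<String, String> columns = new ConcurrentSkipListMap<>();

    public void enqueue(String payload) {
        String columnName = System.currentTimeMillis() + "-" + UUID.randomUUID();
        columns.put(columnName, payload);    // insert = add a column to the row
    }

    public String dequeue() {
        // slice the oldest column, then remove it
        Map.Entry<String, String> oldest = columns.pollFirstEntry();
        return oldest == null ? null : oldest.getValue();
    }
}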
Vram
On Sat, Jun 26, 2010 at 1:56 PM, Andrew Miklas wrote:
> Hi all,
>
> Has anyone written a work-queue implementation using Cassandra?
>
> There's a section in the UseCase wiki page for "A distributed