> We will follow your suggestion and we will run the Node Repair tool more
> often in the future. However, what happens to data inserted/deleted
> after the Node Repair tool runs (i.e., between Node Repair and Major
> Compaction)?
It is handled as you would expect; deletions are propagated across the
cluster.
Thanks Peter for the reply. We are currently "fixing" our inconsistent
data (since we have master data saved).
We will follow your suggestion and we will run the Node Repair tool more
often in the future. However, what happens to data inserted/deleted
after the Node Repair tool runs (i.e., between Node Repair and Major
Compaction)?
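A minimal sketch of what such a repair schedule might look like, assuming a
small Python wrapper around nodetool (the hostnames and keyspace name below
are placeholders; only "nodetool repair" itself is the real command). Data
inserted or deleted after one pass is reconciled by the next pass, so the
schedule just has to keep every node repaired at least once per
GCGraceSeconds:

    import subprocess

    # Hypothetical node list and keyspace name.
    NODES = ["cass1.example.com", "cass2.example.com", "cass3.example.com",
             "cass4.example.com", "cass5.example.com"]
    KEYSPACE = "my_keyspace"

    def repair_all():
        # Repair one node at a time; run this from cron more often than
        # GCGraceSeconds so tombstones reach every replica before
        # compaction is allowed to purge them.
        for host in NODES:
            subprocess.check_call(["nodetool", "-h", host, "repair", KEYSPACE])

    if __name__ == "__main__":
        repair_all()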
> I have a few more questions:
>
> 1. If we change the write/delete consistency level to ALL, do we
> eliminate the data inconsistency among nodes (since the delete
> operations will apply to ALL replicas)?
>
> 2. My understanding is that "Read Repair" doesn't handle tombstones.
> How about "Node Tool Repair"?
Thank you all for your assistance. It has been very helpful.
I have a few more questions:
1. If we change the write/delete consistency level to ALL, do we
eliminate the data inconsistency among nodes (since the delete
operations will apply to ALL replicas)?
2. My understanding is that "Read Repair" doesn't handle tombstones.
How about "Node Tool Repair"?
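To illustrate question 1 with a concrete (hypothetical) example: a delete
issued at consistency level ALL is only acknowledged once every replica has
written the tombstone, so it closes the window for stale replicas, but it
also fails whenever any replica is down. A sketch using the DataStax Python
driver (a newer client than anything in this thread; keyspace, table, and
key names are made up):

    from cassandra.cluster import Cluster
    from cassandra import ConsistencyLevel
    from cassandra.query import SimpleStatement

    # Hypothetical contact point and schema names.
    session = Cluster(["127.0.0.1"]).connect("my_keyspace")

    # CL ALL: every replica must acknowledge the tombstone before the delete
    # is reported successful; if any replica is down the delete fails, so
    # this trades availability for consistency.
    delete = SimpleStatement(
        "DELETE FROM my_table WHERE row_key = %s",
        consistency_level=ConsistencyLevel.ALL,
    )
    session.execute(delete, ["some-key"])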
> Short version: Once GCGraceSeconds expires, the tombstone is no longer
> relevant, as it will not be included in RR (otherwise, you could have
> nodes that haven't compacted yet read-repair tombstones to other replicas
> that had already removed it). Long version: see my last comment on
> https://issues.apache.org/
Just to add, the cli works at CL ONE. What do you see when you use a higher
CL through an API?
A
On 11/01/2011, at 10:31 AM, Peter Schuller wrote:
>> above. From looking at the data I'm guessing that the results from the 3
>> nodes are correct and the results from the 2 nodes are old (the diff between
>> the result sets is that the 54 is a subset of the 68).
On Mon, Jan 10, 2011 at 3:31 PM, Peter Schuller wrote:
> Now, GCGraceSeconds/repair issues would not explain, to me at least,
> why read-repair is not fixing the discrepancy.
Short version: Once GCGraceSeconds expires, the tombstone is no longer
relevant, as it will not be included in RR (otherwise, you could have
nodes that haven't compacted yet read-repair tombstones to other replicas
that had already removed it). Long version: see my last comment on
https://issues.apache.org/
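To put rough numbers on that (the defaults, not anything measured in this
thread): a tombstone written at time T can be purged by compaction any time
after T + GCGraceSeconds (864000 seconds, i.e. ten days, by default), so
every node has to be repaired at least once inside that window or a replica
that missed the delete can never learn about it. A toy check along those
lines:

    import time

    GC_GRACE_SECONDS = 864000   # default gc_grace_seconds (10 days); check
                                # the setting on your own column families.

    def repair_overdue(last_repair_finished_at, now=None):
        # True when the last completed repair is older than GCGraceSeconds,
        # i.e. tombstones written since then may already have been compacted
        # away on some replicas while others never received them, which is
        # how deleted data can "resurrect".
        now = now if now is not None else time.time()
        return (now - last_repair_finished_at) > GC_GRACE_SECONDS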
> above. From looking at the data I'm guessing that the results from the 3
> nodes are correct and the results from the 2 nodes are old (the diff between
> the result sets is that the 54 is a subset of the 68).
If I interpret the thread correctly, those 2 that you say you believe
are old are the
Hi, Tyler,
I'm working with Vram on this project and can respond to your questions.
We do indeed continue to get inconsistent data after many read operations.
These columns and rowkeys are months old and they have had many reads done
on them over that time. Using cassandra-cli we see that on 3 of the nodes
the row has 54 columns, while the other 2 nodes return 68 (the 54 columns
are a subset of the 68).
What version of Cassandra? What consistency level are you writing/reading
at?
Do you continue to get inconsistent results when you read the same data over
and over (i.e. read repair is not fixing something)?
Do all of your nodes show the same thing when you run nodetool ring against
them?
- Tyler
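One quick way to answer the last question is to capture "nodetool ring" as
seen from every node and compare the views side by side; a rough sketch
(hostnames are placeholders):

    import subprocess

    NODES = ["cass1.example.com", "cass2.example.com", "cass3.example.com",
             "cass4.example.com", "cass5.example.com"]

    # Print each node's view of the ring. Load figures will naturally
    # differ; what matters is that every node agrees on the tokens and on
    # which nodes are Up/Normal.
    for host in NODES:
        print(f"=== ring as seen from {host} ===")
        print(subprocess.check_output(["nodetool", "-h", host, "ring"]).decode())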
We are running a five-node cluster in production with a replication
factor of three. Queries against the 5 nodes are returning different
results (2 of the 5 nodes return extra columns for the same row key).
We are not sure of the root of the problem (any config issues?). Any
suggestions?
Thanks.