Thanks Andi. The reason I was asking is that even though my nodes have been
100% available and no write has been rejected, when running an incremental
repair, the logs still indicate that some ranges are out of sync (which
then results in large amounts of compaction). How is this possible?

I have found this (http://stackoverflow.com/a/20928922/980059) which seems
to indicate this could be because in parallel repairs, the merkle trees are
computed at different times, which results in repair thinking some ranges
are out of sync if the data has changed in the meantime.
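The timing effect described in that answer can be sketched in a few lines. This is only a toy illustration, not Cassandra's actual Merkle tree implementation; the hash function, row layout, and names here are invented for the example:

```python
import hashlib

def merkle_leaf(rows):
    # Hash a sorted set of (key, value, timestamp) cells, as a stand-in
    # for the per-range hashes Cassandra folds into a Merkle tree leaf.
    h = hashlib.sha256()
    for key, value, ts in sorted(rows):
        h.update(f"{key}:{value}:{ts}".encode())
    return h.hexdigest()

# Replica A computes its tree first...
rows_at_t0 = [("k1", "v1", 100)]
leaf_a = merkle_leaf(rows_at_t0)

# ...a new write lands on both replicas in the meantime...
rows_at_t1 = rows_at_t0 + [("k2", "v2", 101)]

# ...then replica B computes its tree over the newer data.
leaf_b = merkle_leaf(rows_at_t1)

# The leaves differ, so repair would flag the range as "out of sync",
# even though both replicas eventually hold identical data.
print(leaf_a != leaf_b)  # True
```

If that is the mechanism, then any repair that hashes the replicas at different points in time while writes are flowing can report spurious mismatches.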

Is that accurate? Does sequential repair alleviate this issue (since it
uses snapshots)?

Thanks
Flavien

On 19 January 2015 at 23:57, Andreas Finke <andreas.fi...@solvians.com>
wrote:

>  Hi,
>
>
>  right, QUORUM means that data is written to all replicas but the
> coordinator waits for QUORUM responses before returning to the client. If
> a replica is out of sync due to a network or internal issue, then consistency
> is ensured through:
>
>  - HintedHandoff (Automatically
> http://www.datastax.com/documentation/cassandra/2.0/cassandra/dml/dml_about_hh_c.html
> )
> - ReadRepair (Automatically
> http://www.datastax.com/documentation/cassandra/2.0/cassandra/dml/dmlClientRequestsRead.html
> )
> - nodetool repair (Manually
> http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_repair_nodes_c.html
> )
>
>  Regards
> Andi
>  ------------------------------
> *From:* Flavien Charlon [flavien.char...@gmail.com]
> *Sent:* 19 January 2015 22:50
> *To:* user@cassandra.apache.org
> *Subject:* How do replica become out of sync
>
>   Hi,
>
>  When writing to Cassandra using CL = QUORUM (or anything less than ALL),
> is it correct to say that Cassandra tries to write to all the replicas, but
> only waits for a quorum of acknowledgements?
>
>  If so, what can cause some replicas to become out of sync when they're
> all online?
>
>  Thanks
> Flavien
>