Just confirming. Thanks for the clarification.
On Tue, Jul 12, 2011 at 10:53 AM, Peter Schuller wrote:
>> From "Cassandra the definitive guide" - Basic Maintenance - Repair:
>> "Running nodetool repair causes Cassandra to execute a major compaction.
>> During a major compaction (see “Compaction” in the Glossary), the
>> server initiates a TreeRequest/TreeResponse conversation to exchange
>> Merkle trees with neighboring nodes."
From "Cassandra the definitive guide" - Basic Maintenance - Repair:
"Running nodetool repair causes Cassandra to execute a major compaction.
During a major compaction (see “Compaction” in the Glossary), the
server initiates a TreeRequest/TreeResponse conversation to exchange
Merkle trees with neighboring nodes."
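For reference, the tree exchange the book describes can be sketched in miniature. This is my own simplified illustration, not Cassandra's implementation: it hashes flat token ranges instead of building a hierarchical Merkle tree, and `bucket_of`, `range_hashes`, and `differing_ranges` are hypothetical names.

```python
import hashlib

def bucket_of(key, num_ranges=8):
    # Deterministic stand-in for the partitioner's token assignment.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % num_ranges

def range_hashes(data, num_ranges=8):
    """Hash each token range of a replica's data. A real Merkle tree
    hashes ranges hierarchically so matching subtrees can be skipped;
    comparing leaf hashes is enough to show the idea."""
    buckets = [[] for _ in range(num_ranges)]
    for key in sorted(data):
        buckets[bucket_of(key, num_ranges)].append((key, data[key]))
    return [hashlib.sha256(repr(b).encode()).hexdigest() for b in buckets]

def differing_ranges(a, b):
    # The TreeRequest/TreeResponse exchange boils down to: swap hashes,
    # then stream only the ranges whose hashes disagree.
    return [i for i, (x, y) in enumerate(zip(range_hashes(a), range_hashes(b)))
            if x != y]

replica_a = {"k1": "v1", "k2": "v2", "k3": "v3"}
replica_b = dict(replica_a, k2="stale")             # one divergent column
print(len(differing_ranges(replica_a, replica_b)))  # prints 1
```

Only the range containing the divergent column hashes differently, so only that range needs to be streamed; that is what makes repair cheaper than shipping all data.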
Never mind, I see the issue with this: I will be able to catch writes
as failed only if I set CL=ALL. For any other CL, I may not know that
a write failed on some node.
On Mon, Jul 11, 2011 at 2:33 PM, A J wrote:
> Instead of doing nodetool repair, is it not a cheaper operation to
> keep tabs on failed writes (be it deletes, inserts, or updates) and
> read those failed writes at a set frequency in some batch job?
Instead of doing nodetool repair, is it not a cheaper operation to
keep tabs on failed writes (be it deletes, inserts, or updates) and
read those failed writes at a set frequency in some batch job? By
reading them, RR (read repair) would be triggered and they would be
brought to a consistent state.
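The proposal above can be sketched against a toy replica model rather than a real client; the `write`/`read_with_repair` functions and the `failed_writes` ledger are my own illustrative names, not a Cassandra API. Note the catch raised later in the thread: the coordinator only reports a failure the client can log if the consistency level is strict enough (CL=ALL) to notice it.

```python
# Toy model: each replica is a dict of key -> (timestamp, value).
replicas = [dict() for _ in range(3)]
failed_writes = []  # ledger of keys whose write missed some replica

def write(key, value, ts, down=()):
    acks = 0
    for i, r in enumerate(replicas):
        if i in down:
            continue
        r[key] = (ts, value)
        acks += 1
    if acks < len(replicas):          # only detectable at CL=ALL
        failed_writes.append(key)

def read_with_repair(key):
    """Read from all replicas; push the newest version to any replica
    holding an older one -- the effect read repair has on a read."""
    versions = [r.get(key) for r in replicas]
    newest = max(v for v in versions if v is not None)
    for r in replicas:
        if r.get(key) != newest:
            r[key] = newest
    return newest[1]

write("k", "v1", ts=1)
write("k", "v2", ts=2, down=(2,))     # replica 2 misses the update
assert replicas[2]["k"] == (1, "v1")  # stale

# The proposed batch job: re-read every logged failure.
for key in failed_writes:
    read_with_repair(key)
assert all(r["k"] == (2, "v2") for r in replicas)
```

After the batch re-read, every logged key converges, which is exactly the consistency the post hopes to get without a full nodetool repair.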
That's an internal term meaning "background I/O," not SSTable merging per se.
On Fri, Jul 8, 2011 at 9:24 AM, A J wrote:
> I think node repair involves some compaction too. See the issue:
> https://issues.apache.org/jira/browse/CASSANDRA-2811
> It talks of 'validation compaction' being triggered concurrently
> during node repair.
I think node repair involves some compaction too. See the issue:
https://issues.apache.org/jira/browse/CASSANDRA-2811
It talks of 'validation compaction' being triggered concurrently
during node repair.
On Thu, Jun 30, 2011 at 8:51 PM, Watanabe Maki wrote:
> Repair doesn't compact. Those are different processes already.
Repair doesn't compact. Those are different processes already.
maki
On 2011/07/01, at 7:21, A J wrote:
> Thanks all !
> In other words, I think it is safe to say that a node as a whole can
> be made consistent only on 'nodetool repair'.
>
> Has there been enough interest in providing anti-entropy without
> compaction as a separate operation (nodetool repair does both)?
On Thu, Jun 30, 2011 at 5:27 PM, Jonathan Ellis wrote:
> On Thu, Jun 30, 2011 at 3:47 PM, Edward Capriolo
> wrote:
> > Read repair does NOT repair tombstones.
>
> It does, but you can't rely on RR to repair _all_ tombstones, because
> RR only happens if the row in question is requested by a client.
It would be helpful if this were automated somehow.
Thanks all !
In other words, I think it is safe to say that a node as a whole can
be made consistent only on 'nodetool repair'.
Has there been enough interest in providing anti-entropy without
compaction as a separate operation (nodetool repair does both)?
On Thu, Jun 30, 2011 at 5:27 PM, Jonathan Ellis wrote:
On Thu, Jun 30, 2011 at 3:47 PM, Edward Capriolo wrote:
> Read repair does NOT repair tombstones.
It does, but you can't rely on RR to repair _all_ tombstones, because
RR only happens if the row in question is requested by a client.
--
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax
As I understand it, it has to do with a node being up but missing the
delete message (remember, if you apply the delete at CL.QUORUM, you can
have almost half the replicas miss it and still succeed). Imagine that
you have 3 nodes A, B, and C, each of which has a column 'foo' with a
value 'bar'. The delete reaches A and B as a tombstone, but C misses
it. If A and B garbage-collect their tombstones after GCGraceSeconds
before C has been repaired, nothing in the cluster remembers the delete
any more: the next read that reaches C finds 'foo' = 'bar', and read
repair will happily copy the "deleted" value back to A and B.
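That resurrection scenario can be walked through concretely. This is a toy sketch of my own, not Cassandra's code: `GC_GRACE`, `gc_tombstones`, and the dict-per-replica model are illustrative, with `None` standing in for a tombstone.

```python
GC_GRACE = 10

# Each replica maps key -> (timestamp, value); value None is a tombstone.
A, B, C = {}, {}, {}
for r in (A, B, C):
    r["foo"] = (1, "bar")

# Delete at CL.QUORUM while C misses the message: A and B record a
# tombstone, C keeps the live column.
A["foo"] = (5, None)
B["foo"] = (5, None)

def gc_tombstones(replica, now):
    """Purge tombstones older than GC_GRACE. After this, the replica
    has 'forgotten' that the delete ever happened."""
    for k, (ts, v) in list(replica.items()):
        if v is None and now - ts > GC_GRACE:
            del replica[k]

# No repair runs within GCGraceSeconds; A and B purge the tombstone.
gc_tombstones(A, now=20)
gc_tombstones(B, now=20)

# On the next read, read repair compares versions: C has (1, 'bar'),
# A and B have nothing at all -- no tombstone is left to outvote it.
versions = [r.get("foo") for r in (A, B, C)]
newest = max(v for v in versions if v is not None)
for r in (A, B, C):
    r["foo"] = newest
print(A["foo"])  # (1, 'bar') -- the deleted column is resurrected
```

Running nodetool repair within GCGraceSeconds would have delivered the tombstone to C before A and B purged it, which is why the wiki insists on that schedule.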
On Thu, Jun 30, 2011 at 4:25 PM, A J wrote:
> I am a little confused about the reason why nodetool repair has to run
> within GCGraceSeconds.
>
> The documentation at:
> http://wiki.apache.org/cassandra/Operations#Frequency_of_nodetool_repair
> is not very clear to me.
>
> How can a delete be 'unforgotten'?