Ah cool, I didn't realize reaper did that.
On October 30, 2017 at 1:29:26 PM, Paulo Motta (pauloricard...@gmail.com) wrote:
> This is also the case for full repairs, if I'm not mistaken. Assuming I'm not
> missing something here, that should mean that he shouldn't need to mark
> sstables as unrepaired?
That's right, but he mentioned that he is using reaper, which uses
subrange repair if I'm not mistaken, and subrange repair doesn't trigger
anticompaction, so it doesn't mark sstables as repaired.
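For anyone following along, a subrange repair is just a repair restricted to
an explicit start/end token range. A minimal sketch of what the equivalent
nodetool invocation looks like (Reaper actually drives repairs over JMX, and
the keyspace name and token values below are just placeholders):

    # Full (non-incremental) repair of one token subrange; tokens and
    # keyspace are placeholder values.
    nodetool repair -full \
      -st -9223372036854775808 \
      -et -4611686018427387904 \
      my_keyspace
    # Because only part of the range is repaired, Cassandra skips the
    # anticompaction step, so the sstables keep repairedAt = 0.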
> Once you run incremental repair, your data is permanently marked as repaired
This is also the case for full repairs, if I'm not mistaken. I'll admit I'm not
as familiar with the quirks of repair in 2.2, but prior to 4.0/CASSANDRA-9143,
any global repair ends with an anticompaction that marks sstables as repaired.
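For what it's worth, a quick way to check whether a given sstable has been
marked repaired is sstablemetadata (the path below is just an example; adjust
it to your data directory and table):

    sstablemetadata /var/lib/cassandra/data/my_keyspace/my_table-*/*-Data.db \
        | grep "Repaired at"
    # "Repaired at: 0" means unrepaired; a non-zero timestamp means the
    # sstable was marked repaired during anticompaction.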
Yes, mark them as unrepaired first. You can get sstablerepairedset from
source if you need to (just make sure you get the correct branch/tag).
It's just a shell script so as long as you have C* installed in a
default/canonical location it should work.
https://github.com/apache/cassandra/blob/trunk/
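A rough sketch of how that usually looks (paths are placeholders, and this
assumes a package-style install where Cassandra is managed as a service). The
tool rewrites sstable metadata in place, so the node should be down while it
runs:

    # Stop the node cleanly first.
    nodetool drain && sudo service cassandra stop

    # Collect the data files you want to flip, then mark them unrepaired.
    find /var/lib/cassandra/data/my_keyspace -name "*-Data.db" > /tmp/sstables.txt
    sstablerepairedset --really-set --is-unrepaired -f /tmp/sstables.txt

    # Bring the node back up.
    sudo service cassandra start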
From: Paulo Motta
Sent: Sunday, October 29, 2017 1:56:38 PM
To: user@cassandra.apache.org
Subject: Re: Need help with incremental repair
> Assuming the situation is just "we accidentally ran incremental repair", you
> shouldn't have to do anything. It's not going to hurt anything
Once you run incremental repair, your data is permanently marked as
repaired, and is no longer compacted with new non-incrementally
repaired data. This can cause problems, so you will probably want to
mark those sstables as unrepaired again.
Hey Aiman,
Assuming the situation is just "we accidentally ran incremental repair", you
shouldn't have to do anything. It's not going to hurt anything. Pre-4.0
incremental repair has some issues that can cause a lot of extra streaming and
inconsistencies in some edge cases, but as long as you're not running into
those, you should be fine.
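If it helps, going forward you can just stick to full repairs; in 2.2
incremental repair is the default, so you have to ask for a full repair
explicitly (the keyspace name is a placeholder):

    nodetool repair -full -pr my_keyspace
    # -full forces a non-incremental repair; -pr limits it to the node's
    # primary ranges, so run it on every node to cover the whole ring.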
Hi everyone,
We seek your help with an issue we are facing on our Cassandra 2.2.8 cluster.
We have a 24-node cluster spread over 3 DCs.
Initially, when the cluster was in a single DC, we were using The Last Pickle
reaper 0.5 to repair it with incremental repair set to false. We then added 2
more DCs. Now the problem is that incremental repair was accidentally run on
the cluster, and we are not sure what we need to do about it.