Thank you all for your help.
I was able to get rid of the zombies (at least end users are not reporting
them anymore) using nodetool cleanup.
And the old SSTables were indeed unable to merge with each other because of
repairedAt > 0, so a rolling cassandra stop + sstablerepairedset + cassandra
start fixed that.
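For reference, on each node that rolling step looked roughly like this (the
data path, keyspace and table names below are placeholders, adjust for your
install):

    # one node at a time; paths, keyspace and table names are placeholders
    nodetool drain                      # flush memtables before stopping
    sudo service cassandra stop
    # reset repairedAt to 0 on the table's SSTables; sstablerepairedset
    # must be run while the node is down
    sstablerepairedset --really-set --is-unrepaired \
        /var/lib/cassandra/data/my_keyspace/my_table-*/mc-*-big-Data.db
    sudo service cassandra start
    # afterwards, verify with: sstablemetadata <sstable> | grep 'Repaired at'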
X==5. I was meant to fill that in...
On 16 Dec. 2017 07:46, "kurt greaves" wrote:
Yep, if you don't run cleanup on all nodes (except the new node) after step x,
then when you decommission nodes 4 and 5 later on, their tokens will be
reclaimed by the previous owners. Suddenly the data in those SSTables is
live again because the token ownership has changed, and any data in those
SSTables that was deleted while another node owned the range can come back
as zombies.
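To be concrete, after the new node finishes joining, that cleanup pass can be
as simple as something like this (host names and keyspace are placeholders):

    # run cleanup on every pre-existing node once the new node shows as UN
    # in 'nodetool status'; host names and keyspace are placeholders
    for host in node1 node2 node3; do
        ssh "$host" nodetool cleanup my_keyspace
    done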
Hi Max,
I don't know if it's related to your issue, but on a side note: if you
decide to use Reaper (and use full repairs, not incremental ones) but mix
that with "nodetool repair", you'll end up with 2 pools of SSTables that
cannot get compacted together.
Reaper uses subrange repair, which doesn't perform anticompaction, so its
SSTables stay marked unrepaired; "nodetool repair" does anticompaction and
marks SSTables as repaired, which is what creates the two pools.
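If you want to check whether you already have those two pools, the repairedAt
value of each SSTable tells you; a rough sketch (the data path is a
placeholder):

    # repairedAt == 0 means unrepaired; a non-zero timestamp means the
    # SSTable was marked repaired by anticompaction. Path is a placeholder.
    for f in /var/lib/cassandra/data/my_keyspace/my_table-*/*-big-Data.db; do
        echo "$f: $(sstablemetadata "$f" | grep 'Repaired at')"
    done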
The generation (the integer id in file names) doesn't matter for ordering like
this.
It matters in schema tables for the addition of new columns/types, but it's
irrelevant for normal tables - you could do a user defined compaction on 31384
right now and it'd be rewritten as-is (minus purgeable data) with a new,
higher generation.
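For example, a user defined compaction on that single SSTable can be
triggered through the CompactionManager MBean; this is just a sketch with
jmxterm (the jar name, JMX port and SSTable path are placeholders), and newer
nodetool versions also have a compact --user-defined option if yours includes
it:

    # call forceUserDefinedCompaction on the CompactionManager MBean;
    # jar name, JMX port and the SSTable path are placeholders
    echo "run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction /var/lib/cassandra/data/my_keyspace/my_table-1234/mc-31384-big-Data.db" \
        | java -jar jmxterm-1.0.2-uber.jar -l localhost:7199 -n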
Hi, Kurt.
Thank you for the response.
Repairs are marked as 'done' without errors in reaper history.
Example of 'wrong order':
* file mc-31384-big-Data.db contains the tombstone:
  {
    "type" : "row",
    "position" : 7782,
    "clustering" : [ "9adab970-b46d-11e7-a5cd-a1ba8cfc1426"
Hello, Jeff.
Using your hint I was able to reproduce my situation on 5 VMs.
The simplified steps are:
1) set up a 3-node cluster
2) create a keyspace with RF=3 and a table with gc_grace_seconds=60,
tombstone_compaction_interval=10 and unchecked_tombstone_compaction=true (to
force compaction later; sketched below)
3) insert 10..2
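For step 2, the schema looked roughly like this (keyspace and table names
here are placeholders):

    # step 2 as cqlsh statements; keyspace/table names are placeholders
    cqlsh -e "
    CREATE KEYSPACE test_ks
      WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
    CREATE TABLE test_ks.test_tbl (
      id uuid PRIMARY KEY,
      value text
    ) WITH gc_grace_seconds = 60
      AND compaction = {
        'class': 'SizeTieredCompactionStrategy',
        'tombstone_compaction_interval': '10',
        'unchecked_tombstone_compaction': 'true'
      };"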
Are you positive your repairs are completing successfully? Can you send
through an example of the data in the wrong order? What you're saying
certainly shouldn't happen, but there's a lot of room for mistakes.
On 14 Dec. 2017 20:13, "Python_Max" wrote:
Thank you for the reply.
No, I did not execute 'nodetool cleanup'. The documentation at
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsRemoveNode.html
does not mention that cleanup is required.
Do you think that the extra data which a node is not responsible for can lead
to zombie data?
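For what it's worth, I can see which nodes currently own a given partition
key with something like this (keyspace, table and key are placeholders):

    # prints the replicas currently responsible for this partition key
    nodetool getendpoints my_keyspace my_table some-key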
Did you run cleanup before you shrank the cluster?
--
Jeff Jirsa
> On Dec 13, 2017, at 4:49 AM, Python_Max wrote:
>
> Hello.
>
> I have a situation similar to
> https://issues.apache.org/jira/browse/CASSANDRA-13153 except my Cassandra
> is 3.11.1 (that issue should be fixed according to