Hi, Kurt.
Thank you for the response.
Repairs are marked as 'done' without errors in the Reaper history.
Example of 'wrong order':
* file mc-31384-big-Data.db contains the tombstone:
{
  "type" : "row",
  "position" : 7782,
  "clustering" : [ "9adab970-b46d-11e7-a5cd-a1ba8cfc1426" ],
  "deletion_info" : {
    "marked_deleted" : "2017-10-28T04:51:20.589394Z",
    "local_delete_time" : "2017-10-28T04:51:20Z"
  },
  "cells" : [ ]
}
* file mc-31389-big-Data.db contains the data:
{
  "type" : "row",
  "position" : 81317,
  "clustering" : [ "9adab970-b46d-11e7-a5cd-a1ba8cfc1426" ],
  "liveness_info" : { "tstamp" : "2017-10-19T01:34:10.055389Z" },
  "cells" : [...]
}
Generation 31384 is less than 31389, but I'm not sure whether that matters at all.
I assume the data and tombstones are not being compacted together for a
different reason: the tokens are no longer owned by that node, and the only
way to purge such keys is 'nodetool cleanup', isn't it?
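(As an aside, and purely as an illustration not taken from the mails above:
the SSTable generation numbers should not decide which value wins at read
time, since Cassandra reconciles cells by write timestamp. Using the two
timestamps from the dumps above, a minimal sketch of that last-write-wins
rule looks like this; the function name `wins` is just for the example.)

```python
from datetime import datetime, timezone

def wins(tombstone_ts: datetime, data_ts: datetime) -> str:
    """Last-write-wins reconciliation: the cell with the higher write
    timestamp shadows the other, regardless of which SSTable
    (generation number) it lives in."""
    return "tombstone" if tombstone_ts >= data_ts else "data"

# Timestamps copied from the sstabledump output above.
marked_deleted = datetime(2017, 10, 28, 4, 51, 20, 589394, tzinfo=timezone.utc)
liveness = datetime(2017, 10, 19, 1, 34, 10, 55389, tzinfo=timezone.utc)

# The delete is newer than the row's liveness_info, so on read the row
# should resolve as deleted even though its data sits in a higher-numbered
# file; file order alone should not resurrect it.
print(wins(marked_deleted, liveness))
```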
On 14.12.17 16:14, kurt greaves wrote:
Are you positive your repairs are completing successfully? Can you
send through an example of the data in the wrong order? What you're
saying certainly shouldn't happen, but there's a lot of room for mistakes.
On 14 Dec. 2017 20:13, "Python_Max" <python....@gmail.com> wrote:
Thank you for the reply.
No, I did not execute 'nodetool cleanup'. The documentation at
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsRemoveNode.html
does not mention that cleanup is required.
Do you think that extra data which the node is not responsible for can
lead to zombie data?
On 13.12.17 18:43, Jeff Jirsa wrote:
Did you run cleanup before you shrank the cluster?
--
Best Regards,
Python_Max.
---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org
--
Best Regards,
Python_Max.