RE: Cleanup blocking snapshots - Options?

2018-01-14 Thread Steinmaurer, Thomas
Hi Kurt, it was easily triggered with the mentioned combination (cleanup after extending the cluster) a few months ago, so I guess it will be the same when I re-try. Due to the issue we simply omitted running cleanup then, but as disk space is becoming something of a bottleneck again, we need

RE: Meltdown/Spectre Linux patch - Performance impact on Cassandra?

2018-01-14 Thread Steinmaurer, Thomas
Ben, regarding OS/VM level patching impact: we see almost zero additional impact with 4.9.51-10.52.amzn1.x86_64 vs. 4.9.75-25.55.amzn1.x86_64 (https://alas.aws.amazon.com/ALAS-2018-939.html) on an m5.2xlarge. The m5 instance type family is rather new, and AWS told us to give it a try compared to m4
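For anyone repeating such a comparison, a minimal sketch of verifying what a node is actually running before benchmarking; the /sys mitigation files are an assumption and only exist on kernels that expose them:

    # Confirm which kernel the node is actually running before/after patching
    # (e.g. 4.9.51-10.52.amzn1.x86_64 vs. 4.9.75-25.55.amzn1.x86_64).
    uname -r

    # Where the kernel exposes it, show Meltdown/Spectre mitigation status;
    # these files may be absent on older or unpatched kernels.
    grep . /sys/devices/system/cpu/vulnerabilities/* 2>/dev/null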

Re: Cleanup blocking snapshots - Options?

2018-01-14 Thread kurt greaves
Disabling the snapshots is the only real option at the moment, other than upgrading. Apparently it was thought that there was only a small race condition in 2.1 that triggered this, so it wasn't considered worth fixing. If you are triggering it easily, maybe it is worth fixing in 2.1 as well.
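A rough sketch of that workaround, assuming the hourly snapshot is driven by a cron entry containing 'nodetool snapshot' (the job layout here is hypothetical and depends on how your snapshots are scheduled):

    # Save the crontab and strip the hourly snapshot entry.
    crontab -l > /tmp/crontab.bak
    grep -v 'nodetool snapshot' /tmp/crontab.bak | crontab -

    # Run cleanup while snapshots are paused; on 2.1 this holds the lock
    # that would otherwise block concurrent snapshots (CASSANDRA-11155).
    nodetool cleanup

    # Restore the original schedule once cleanup has finished.
    crontab /tmp/crontab.bak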

Re: Even after the drop table, the data actually was not erased.

2018-01-14 Thread Eunsu Kim
Thank you for your response. As you said, the auto_bootstrap setting was turned on. The actual data was deleted with the 'nodetool clearsnapshot' command. This command seems to apply only to one node. Can it be applied cluster-wide, or should I run it on each node?
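On the cluster-wide question: nodetool talks to one node at a time over JMX, so a minimal sketch would loop over the hosts (names below are placeholders), assuming JMX is reachable on the default port:

    # clearsnapshot only affects the node it is run against,
    # so iterate over every node in the cluster.
    for host in cass-node1 cass-node2 cass-node3; do
        nodetool -h "$host" clearsnapshot
    done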

Cleanup blocking snapshots - Options?

2018-01-14 Thread Steinmaurer, Thomas
Hello, we are running 2.1.18 with vnodes in production, and due to CASSANDRA-11155 (https://issues.apache.org/jira/browse/CASSANDRA-11155) we can't run cleanup, e.g. after extending the cluster, without blocking our hourly snapshots. What options do we have to get rid of partitions a node does not own anymore?
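For reference, the standard tool for dropping partitions a node no longer owns is nodetool cleanup, run on each pre-existing node after the ring change; a minimal sketch (the keyspace name is a placeholder), with the caveat that on 2.1.18 it hits the blocking issue above:

    # Rewrites SSTables, dropping partitions outside the node's token ranges.
    # Run on each pre-existing node after new nodes have joined the ring.
    nodetool cleanup my_keyspace    # a single keyspace (placeholder name)
    nodetool cleanup                # or all keyspaces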