Disabling the snapshots is the best, and really the only, option other than
upgrading at the moment. Apparently it was thought that only a small race
condition in 2.1 triggered this and it wasn't worth fixing there. If you are
triggering it easily, maybe it is worth fixing in 2.1 as well. Does this
happen consistently? Can you provide some more logs on the JIRA, or better
yet a way to reproduce?
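
For the cron-based workaround, something along these lines is roughly what
I'd run on each node: save the crontab, drop the snapshot entry, run cleanup,
then restore it. Just a sketch, assuming the hourly snapshot is an ordinary
crontab entry invoking nodetool snapshot; the marker pattern is a placeholder
for whatever your job actually looks like.

#!/usr/bin/env python3
# Pause the hourly snapshot cron entry, run nodetool cleanup, then restore it.
# Only a sketch: assumes the snapshot is a plain crontab entry containing the
# string "nodetool snapshot"; adjust the marker for your actual job.
import subprocess

SNAPSHOT_MARKER = "nodetool snapshot"  # placeholder pattern for the cron entry

def read_crontab():
    # `crontab -l` prints the current user's crontab.
    return subprocess.run(["crontab", "-l"], check=True,
                          capture_output=True, text=True).stdout

def write_crontab(content):
    # `crontab -` installs a crontab read from stdin.
    subprocess.run(["crontab", "-"], input=content, text=True, check=True)

original = read_crontab()
try:
    # 1. Install a copy of the crontab without the snapshot entry.
    without_snapshots = "\n".join(
        line for line in original.splitlines()
        if SNAPSHOT_MARKER not in line) + "\n"
    write_crontab(without_snapshots)

    # 2. Re-write SSTables so this node drops partitions it no longer owns
    #    (run once the cluster scale-out has finished).
    subprocess.run(["nodetool", "cleanup"], check=True)
finally:
    # 3. Restore the original crontab, re-enabling the hourly snapshots.
    write_crontab(original)

Or of course just comment the entry out by hand; the important bit is that no
snapshot starts while cleanup is rewriting SSTables on that node.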

On 14 January 2018 at 16:12, Steinmaurer, Thomas <
thomas.steinmau...@dynatrace.com> wrote:

> Hello,
>
>
>
> we are running 2.1.18 with vnodes in production and due to (
> https://issues.apache.org/jira/browse/CASSANDRA-11155) we can’t run
> cleanup e.g. after extending the cluster without blocking our hourly
> snapshots.
>
>
>
> What options do we have to get rid of partitions a node does not own
> anymore?
>
> ·         Using a version which has this issue fixed, although upgrading
> to 2.2+, due to various issues, is not an option at the moment
>
> ·         Temporarily disabling the hourly cron job before starting
> cleanup and re-enabling it after cleanup has finished
>
> ·         Any other way to re-write SSTables with data a node owns after
> a cluster scale out
>
>
>
> Thanks,
>
> Thomas
>
>
> The contents of this e-mail are intended for the named addressee only. It
> contains information that may be confidential. Unless you are the named
> addressee or an authorized designee, you may not copy or use it, or
> disclose it to anyone else. If you received it in error please notify us
> immediately and then destroy it. Dynatrace Austria GmbH (registration
> number FN 91482h) is a company registered in Linz whose registered office
> is at 4040 Linz, Austria, Freistädterstraße 313.
>
