017 00:25, "Durity, Sean R" wrote:
> Required maintenance for a cluster should not be this complicated and should
> not be changing so often. To me, this is a major flaw in Cassandra.
>
> Sean Durity
>
> From: Steinmaurer, Thomas [mailto:thomas.steinmau...@dynatrace.com]
> Sent: Tuesday, September 19, 2017 2:33 AM
> To: user@cassandra.apache.org
> Subject: RE: Multi-node repair fails after upgrading to 3.0.14
Hi Kurt,

thanks for the link!

Honestly, a pity that in 3

:56
To: user@cassandra.apache.org
Subject: Re: Multi-node repair fails after upgrading to 3.0.14

In 4.0 anti-compaction is no longer run after full repairs, so we should
probably backport this behavior to 3.0, given there are known limitations with
incremental repair on 3.0 and non-incremental
> From: kurt greaves [mailto:k...@instaclustr.com]
> Sent: Tuesday, 19 September 2017 06:24
> To: User
> Subject: Re: Multi-node repair fails after upgrading to 3.0.14
>
> https://issues.apache.org/jira/browse/CASSANDRA-13153 implies full repairs
> still trigger anti-compaction
> From: Jeff Jirsa [mailto:jji...@gmail.com]
> Sent: Monday, 18 September 2017 16:10
> To: user@cassandra.apache.org
> Subject: Re: Multi-node repair fails after upgrading to 3.0.14
>
> Sorry I may be wrong about the cause - didn't see -full
Hi Jeff,

understood. That’s quite a change then coming from 2.1 from an operational POV.

Thanks again.

Thomas

From: Jeff Jirsa [mailto:jji...@gmail.com]
Sent: Monday, 18 September 2017 15:56
To: user@cassandra.apache.org
Subject: Re: Multi-node repair fails after upgrading to 3.0.14

The command you'
> without printing a stack trace.
>
> The error message and stack trace isn’t really useful here. Any further
> ideas/experiences?
>
> Thanks,
> Thomas
>
> From: Alexander Dejanovski [mailto:a...@thelastpickle.com]
> Sent: Friday, 15 September 2017 11:30
> To: user@cassandra.apache.org
> Subject: Re: Multi-node repair fails after upgrading to 3.0.14
Right, you should indeed add the "--full" flag to perform full repairs, and you
can then keep the "-pr" flag.
I'd advise to monitor the status o

Few notes:

- in 3.0 the default changed to incremental repair which will have to
anticompact sstables to allow you to repair the primary ranges you've specified
- since you're starting the repair on all nodes at the same time, you end up
with overlapping anticompactions

Generally you should stagger
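A minimal sketch of what that advice amounts to: run a full (non-incremental) primary-range repair, and guard the job with a lock file so two runs on the same node never overlap and trigger concurrent anticompactions. The lock path and the dry-run echo are illustrative, not from the thread:

```shell
#!/bin/sh
# Full primary-range repair (the 2.1-style behavior), guarded by a lock
# file so repair runs never overlap. Lock path is an assumption.
LOCK=/tmp/cassandra-repair.lock

if [ -e "$LOCK" ]; then
    echo "repair already running, skipping"
    exit 0
fi
touch "$LOCK"
trap 'rm -f "$LOCK"' EXIT

# -full forces a full (non-incremental) repair; -pr restricts the run to
# this node's primary token ranges.
REPAIR_CMD="nodetool repair -full -pr"
echo "would run: $REPAIR_CMD"   # dry run here; a real node would execute it
```

To stagger across the cluster, each node's cron entry would simply fire at a different time, or a coordinator would invoke the script one node at a time.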
Alex,

thanks again! We will switch back to the 2.1 behavior for now.

Thomas

From: Alexander Dejanovski [mailto:a...@thelastpickle.com]
Sent: Friday, 15 September 2017 11:30
To: user@cassandra.apache.org
Subject: Re: Multi-node repair fails after upgrading to 3.0.14
with the partition range (-pr) option, but with 3.0 we additionally have to
provide the -full option, right?

Thanks again,
Thomas

From: Alexander Dejanovski [mailto:a...@thelastpickle.com]
Sent: Friday, 15 September 2017 09:45
To: user@cassandra.apache.org
Subject: Re: Multi-node repair fails after upgrading to 3.0.14
Hi Thomas,

in 2.1.18, the default repair mode was full repair, while since 2.2 it is
incremental repair.

So running "nodetool repair -pr" since your upgrade to 3.0.14 doesn't trigger
the same operation.

Incremental repair cannot run on more than one node at a time on a cluster,
because you risk to
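In nodetool terms, the default change Alexander describes comes down to one flag. A dry-run sketch of the two invocations (the commands are only printed here, not executed against a cluster):

```shell
# On 2.1.18 this ran a FULL primary-range repair (full was the default):
OLD_CMD="nodetool repair -pr"

# On 3.0.14 the same flags run an INCREMENTAL repair (the new default),
# which anticompacts SSTables; keeping the old behavior needs -full:
NEW_CMD="nodetool repair -full -pr"

echo "2.1 behavior:            $OLD_CMD"
echo "same behavior on 3.0.14: $NEW_CMD"
```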
Hello,

we are currently in the process of upgrading from 2.1.18 to 3.0.14. After
upgrading a few test environments, we started to see some suspicious log
entries regarding repair issues.

We have a cron job on all nodes basically executing the following repair call
on a daily basis:

nodetool rep
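The command above is cut off in the archive; Alexander's reply elsewhere in the thread quotes it as "nodetool repair -pr". As a crontab entry, such a daily job might look like the following sketch, where the schedule and log path are assumptions for illustration:

```shell
# Illustrative crontab line for the daily repair job described above;
# flags taken from how the command is quoted in the thread, everything
# else (run time, log path) is assumed.
CRON_LINE='0 2 * * * nodetool repair -pr >> /var/log/cassandra-repair.log 2>&1'
echo "$CRON_LINE"
```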