Hi everyone,
Could you please answer the following questions regarding materialized views, or
point me in the right direction in the documentation? We are currently using
Cassandra v4.0.11.
1. Are incremental repairs supported for the base table of materialized
views?
2. Are incremental
We're on 4.0.1 and switched to incremental repairs a couple of months ago.
> They work fine about 95% of the time, but once in a while a session will
> get stuck and will have to be cancelled (with `nodetool repair_admin cancel
> -s `). Typically the session will be in REPAIRING but nothing
Could you file a jira with the details?
Dinesh
> On Nov 26, 2021, at 2:40 PM, James Brown wrote:
>
>
> We're on 4.0.1 and switched to incremental repairs a couple of months ago.
> They work fine about 95% of the time, but once in a while a session will get
> st
We're on 4.0.1 and switched to incremental repairs a couple of months ago.
They work fine about 95% of the time, but once in a while a session will
get stuck and will have to be cancelled (with `nodetool repair_admin cancel
-s `). Typically the session will be in REPAIRING but nothing
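For what it's worth, the way we handle that (assuming 4.0's repair_admin subcommands; the session id below is just a placeholder) is roughly:

    # list active incremental repair sessions and their state
    nodetool repair_admin list

    # cancel the one stuck in REPAIRING, using the id from the list output
    nodetool repair_admin cancel -s 5f1e3c40-4ef9-11ec-8c4e-000000000000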
t anticompaction on the
first run, I'd recommend that you (see the sketch below):
- mark all sstables as repaired
- run a full repair
- schedule very regular (daily) incremental repairs
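Concretely, per node, that might look something like this (keyspace name and data path are placeholders, and sstablerepairedset has to run while the node is down, so treat this as a sketch rather than a recipe):

    # 1. with the node stopped, mark the existing sstables as repaired
    sstablerepairedset --really-set --is-repaired /var/lib/cassandra/data/my_ks/*/*-Data.db

    # 2. start the node again and run one full repair
    nodetool repair --full my_ks

    # 3. from then on, run frequent (e.g. daily) incremental repairs
    #    (incremental is the default repair mode on recent versions)
    nodetool repair my_ks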
Bye,
Alex
On Thu, Sep 16, 2021 at 11:03 PM, C. Scott Andreas wrote:
> Hi James, thanks for reaching out.
>
> A large n
list about whether incremental repairs are fatally flawed in Cassandra 3.x or whether
they're still a good default. What's the current best thinking? The most recent 3.x
documentation still advocates in favor of using incremental repairs...CASSANDRA-9143
is marked as fixed in 4.0; did
There's been a lot of back and forth on the wider Internet and in this
mailing list about whether incremental repairs are fatally flawed in
Cassandra 3.x or whether they're still a good default. What's the current
best thinking? The most recent 3.x documentation
<http://cassan
Hi
We would like to migrate from incremental repairs to regular full repairs
on a Cassandra cluster running Apache Cassandra 3.11. There is a
procedure for this in the DataStax document mentioned below,
but the nodetool option mentioned inside that document is not available for
you can get away with loading from only one node if you're positive all
data is consistent. A repair prior to loading should be enough, but if that
doesn't work just load from all nodes.
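If it does come to loading from every node, the command is just sstableloader pointed at each table directory, run once per source node (addresses and path below are made up):

    # stream the sstables for one table into the live cluster; repeat per
    # source node and per table directory
    sstableloader -d 10.0.0.1,10.0.0.2 /backups/node1/my_ks/my_table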
On 11 Nov. 2017 23:15, "Brice Figureau"
wrote:
> On 10/11/17 21:18, kurt greaves wrote:
> > If everything goe
On 10/11/17 21:18, kurt greaves wrote:
> If everything goes smoothly the next incremental should cut it, but a
> full repair post load is probably a good idea anyway. Make sure you
> sstableload every sstable from every node if you want to keep consistency.
If the previous cluster had 3 nodes with
If everything goes smoothly the next incremental should cut it, but a full
repair post load is probably a good idea anyway. Make sure you sstableload
every sstable from every node if you want to keep consistency.
ed to incremental repairs when I
moved it to 3.0.
Do I need to perform a full repair again after migrating, or is
running daily incremental repairs enough?
Thanks!
--
Brice Figureau
node- communication network cards on
your C* host machines.
- If possible, reduce # of vnodes!
From: Chris Stokesmore [mailto:chris.elsm...@demandlogic.co]
Sent: Monday, June 19, 2017 4:50 AM
To: anujw_2...@yahoo.co.in
Cc: user@cassandra.apache.org
Subject: Re: Partition range incremental rep
ke 8-9 hours.
>>
>> As I understand it, using incremental should have sped this process up as
>> all three sets of data on each repair job should be marked as repaired
>> however this does not seem to be the case. Any ideas?
>>
>> Chris
>>
>>> On 6
tps://issues.apache.org/jira/browse/CASSANDRA-9143>.
>
> TL;DR: Do not use incremental repair before 4.0.
Hi Jonathan,
Thanks for your reply, this is a slightly scary message for us! 2.2 has been
out for nearly 2 years and incremental repairs are the default - and it has
horrible b
> Chris
>
>>> On 6 Jun 2017, at 16:08, Anuj Wadehra <anujw_2...@yahoo.co.in.INVALID> wrote:
>>
>> Hi Chris,
>>
>> Using pr with incremental repairs does not make sense. Primary range repair
>> is an optimization over full repair. If you run
rstand it, using incremental should have sped this process up as
> all three sets of data on each repair job should be marked as repaired
> however this does not seem to be the case. Any ideas?
>
> Chris
>
> On 6 Jun 2017, at 16:08, Anuj Wadehra
> wrote:
>
> Hi Chris,
>
understand it, using incremental should have sped this process up, as all
three sets of data on each repair job should be marked as repaired; however, this
does not seem to be the case. Any ideas?
Chris
On 6 Jun 2017, at 16:08, Anuj Wadehra wrote:
Hi Chris,
Using pr with incremental repairs does not
up as all
three sets of data on each repair job should be marked as repaired however this
does not seem to be the case. Any ideas?
Chris
> On 6 Jun 2017, at 16:08, Anuj Wadehra wrote:
>
> Hi Chris,
>
> Using pr with incremental repairs does not make sense. Primary range
Hi Chris,
Using pr with incremental repairs does not make sense. Primary range repair is
an optimization over full repair. If you run full repair on an n-node cluster
with RF=3, you would be repairing each piece of data three times. E.g. in a 5-node
cluster with RF=3, a range may exist on nodes A, B and C
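In command terms, the two approaches being contrasted are roughly the following (keyspace name is a placeholder; exact flags vary a little between versions):

    # full repair of only this node's primary range; run it on every node so
    # that each token range is repaired exactly once across the cluster
    nodetool repair -full -pr my_ks

    # incremental repair (the default since 2.2); no -pr needed, since sstables
    # already marked repaired are skipped on the next run anyway
    nodetool repair my_ks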
p://docs.datastax.com/en/archived/cassandra/2.2/cassandra/tools/toolsRepair.html
> says 'Performing partitioner range repairs by using the -pr option is
> generally considered a good choice for doing manual repairs. However, this
> option cannot be used with incremental repairs (def
, this
option cannot be used with incremental repairs (default for Cassandra 2.2 and
later).
Only problem is our -pr repairs were taking about 8 hours, and now the non-pr
repairs are taking 24+ hours - I guess this makes sense, repairing 1/7 of the data
increased to 3/7, except I was hoping to see a speed up
e with incremental
>>> > repair, which is what -pr was intended to fix for full repair, by repairing
>>> > all
>>> > token ranges only once instead of RF (replication factor) times.
>>> >
>>> > Cheers,
>>> >
>>> >
>> > > Hey,
>> > >
>> > > In the datastax documentation on repair [1], it says,
>> > >
>> > > "The partitioner range option is recommended for routine maintenance.
>> Do
>> > > not use it to repair a downed node. Do not use with incremental repai
tion is recommended for routine maintenance.
> Do
> > > not use it to repair a downed node. Do not use with incremental repair
> > > (default for Cassandra 3.0 and later)."
> > >
> > > Why is it not recommended to use -pr with incremental repairs?
recommended for routine maintenance. Do
> > not use it to repair a downed node. Do not use with incremental repair
> > (default for Cassandra 3.0 and later)."
> >
> > Why is it not recommended to use -pr with incremental repairs?
> >
> > Thanks,
> >
>
Can't say I have too many ideas. If load is low during the repair it
shouldn't be happening. Your disks aren't overutilised correct? No other
processes writing loads of data to them?
That is not happening anymore since I am repairing a keyspace with
much less data (the other one is still there in write-only mode).
The command I am using is the most boring one (I even shed the -pr option so
as to keep anticompactions to a minimum): nodetool -h localhost repair
It's executed sequentially
Blowing out to 1k SSTables seems a bit full on. What args are you passing
to repair?
Kurt Greaves
k...@instaclustr.com
www.instaclustr.com
On 31 October 2016 at 09:49, Stefano Ortolani wrote:
> I've collected some more data-points, and I still see dropped
> mutations with compaction_throughput_
I've collected some more data-points, and I still see dropped
mutations with compaction_throughput_mb_per_sec set to 8.
The only notable thing regarding the current setup is that I have
another keyspace (not being repaired though) with really wide rows
(100MB per partition), but that shouldn't have
Thanks.
Sean
From: Alexander Dejanovski [a...@thelastpickle.com]
Sent: Monday, October 24, 2016 10:39 AM
To: user@cassandra.apache.org
Subject: Re: incremental repairs with -pr flag?
Hi Sean,
In order to mitigate its impact, anticompaction is not fully executed
> Why is it not recommended to use -pr with incremental repairs?
>
> Thanks,
>
> Sean
>
> [1]
> https://docs.datastax.com/en/cassandra/3.x/cassandra/operations/opsRepairNodesManualRepair.html
> --
>
> Sean Bridges
>
> senior systems architect
> Global Relay
e -pr with incremental repairs?
Thanks,
Sean
[1]
https://docs.datastax.com/en/cassandra/3.x/cassandra/operations/opsRepairNodesManualRepair.html
--
Sean Bridges
senior systems architect
Global Relay
sean.bridges@globalrelay.net
866.484.6630
Ne
probably because I was looking at the wrong version of the codebase :p
There aren't that many tools I know to orchestrate repairs and we
>> maintain a fork of Reaper, that was made by Spotify, and handles
>> incremental repair : https://github.com/thelastpickle/cassandra-reaper
>>
>>
>> Looks like you're using subranges with
, Alexander Dejanovski
> wrote:
>
> There aren't that many tools I know to orchestrate repairs and we maintain
> a fork of Reaper, that was made by Spotify, and handles incremental repair
> : https://github.com/thelastpickle/cassandra-reaper
>
>
> Looks like you
e you're using subranges with incremental repairs. This will
generate a lot of anticompactions as you'll only repair a portion of the
SSTables. You should use forceRepairAsync for incremental repairs so that
it's possible for the repair to act on the whole SSTable, minimising
ant
Sorry, I shouldn't have said adding a node. Sometimes data seems to be corrupted
or inconsistent, in which case I would like to run a repair.
Sent from my iPhone
> On Oct 19, 2016, at 10:10 AM, Sean Bridges
> wrote:
>
> Thanks, we will try that.
>
> Sean
>
>> On 16-10-19 09:34 AM, Alexander De
There aren't that many tools I know of to orchestrate repairs. We maintain
a fork of Reaper, which was made by Spotify, and it handles incremental
repair: https://github.com/thelastpickle/cassandra-reaper
We just added Cassandra as storage back end (only postgres currently) in
one of the branches, wh
Thanks, we will try that.
Sean
On 16-10-19 09:34 AM, Alexander Dejanovski wrote:
Hi Sean,
you should be able to do that by running subrange repairs, which is
the only type of repair that wouldn't trigger anticompaction AFAIK.
Beware that now you will have sstables marked as repaired and other
Can you explain why you would want to run repair for new nodes?
Aren't you talking about bootstrap, which is not related to repair actually?
On Wed, Oct 19, 2016 at 6:57 PM, Kant Kodali wrote:
> Thanks! How do I do an incremental repair when I add a new node?
>
> Sent from my iPhone
>
> On Oct 19
Also, any suggestions on a tool to orchestrate the incremental repairs? Say,
the most commonly used one?
Sent from my iPhone
> On Oct 19, 2016, at 9:54 AM, Alexander Dejanovski
> wrote:
>
> Hi Kant,
>
> subrange is a form of full repair, so it will just split the repair process
> in smaller yet s
Thanks! How do I do an incremental repair when I add a new node?
Sent from my iPhone
> On Oct 19, 2016, at 9:54 AM, Alexander Dejanovski
> wrote:
>
> Hi Kant,
>
> subrange is a form of full repair, so it will just split the repair process
> in smaller yet sequential pieces of work (repair is
Hi Kant,
subrange is a form of full repair, so it will just split the repair process
into smaller yet sequential pieces of work (repair is started with a start
and an end token). Overall, you should not expect improvements other than
having less overstreaming and better chances of success if your clu
Another question on the same note: what would be the fastest way to do
repairs on a 10 TB cluster? Full repairs are taking days. So between
parallel repair and subrange repair, which is faster in the case of, say,
adding a new node to the cluster?
Sent from my iPhone
> On Oct 19, 2016, at
Hi Sean,
you should be able to do that by running subrange repairs, which is the
only type of repair that wouldn't trigger anticompaction AFAIK.
Beware that now you will have sstables marked as repaired and others marked
as unrepaired, which will never be compacted together.
You might want to flag
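For reference, a subrange repair is just a full repair bounded to a slice of the ring, something like this (tokens and keyspace are placeholders; in practice a tool such as Reaper computes the ranges for you):

    # full repair limited to one token range; this path does not trigger
    # anticompaction
    nodetool repair -full -st 3074457345618258602 -et 6148914691236517204 my_ks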
Hey,
We are upgrading from Cassandra 2.1 to Cassandra 2.2.
With Cassandra 2.1 we would periodically repair all nodes, using the -pr
flag.
With Cassandra 2.2, the same repair takes a very long time, as Cassandra
does an anticompaction after the repair. This anticompaction causes
most (all
>>>> We are planning to move to incremental repairs using version 2.1.14
>>>>
>>>>
>>>> Regards
>>>>
>>>> Jean Carlo
>>>>
>>>> "The best way to predict the future is to invent it" Alan Kay
>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Aug 26, 2016 at 2:14 PM, Paulo Motta <
>>>>>> pauloricard...@gmail.com> wrote:
>>>>>>
>>>>>>> > What is the underlying reason?
>>
Basically to minimize the amount of anti-compaction needed, since
>>>>>> with RF=3 you'd need to perform anti-compaction 3 times in a particular
>>>>>> node to get it fully repaired, while without it you can just repair the
>>>>>> full no
>>>>
>>>> Thanks for answer!
>>>>
>>>> >It may still be a good idea to manually migrate if you have a sizable
>>>> amount of data
>>>> No, it would be brand new ;-) 3.0 cluster
>>>>
>>>>
>>>>
ired, while without it you can just repair the full
>>>>> node's
>>>>> range in one run. Assuming you run repair frequent enough this will not be
>>>>> a big deal, since you will skip already repaired data in the next round so
good idea to manually migrate if you have a sizable
>>> amount of data
>>> No, it would be brand new ;-) 3.0 cluster
>>>
>>>
>>>
>>> On Tuesday, June 21, 2016 1:21 AM, Bryan Cheng
>>> wrote:
>>>
>>>
>>> Sorr
manual migration procedure should be
>> UNnecessary"
>>
>> On Mon, Jun 20, 2016 at 3:21 PM, Bryan Cheng
>> wrote:
>>
>> I don't use 3.x so hopefully someone with operational experience can
>> chime in, however my understanding is: 1) Incremen
so hopefully someone with operational experience can chime
> in, however my understanding is: 1) Incremental repairs should be the
> default in the 3.x release branch and 2) sstable repairedAt is now properly
> set in all sstables as of 2.2.x for standard repairs and therefore manual
>>>> range in one run. Assuming you run repair frequently enough this will not be
>>>> a big deal, since you will skip already repaired data in the next round so
>>>> you will not have the problem of re-doing work as in non-inc non-pr repair.
>>>>
ot have the problem of re-doing work as in non-inc non-pr repair.
>>>
>>> 2016-08-26 7:57 GMT-03:00 Stefano Ortolani :
>>>
>>>> Hi Paulo, could you elaborate on 2?
>>>> I didn't know incremental repairs were not compatible with -pr
>>>>
the problem of re-doing work as in non-inc non-pr repair.
>>
>> 2016-08-26 7:57 GMT-03:00 Stefano Ortolani :
>>
>>> Hi Paulo, could you elaborate on 2?
>>> I didn't know incremental repairs were not compatible with -pr
>>> What is the underlying rea
the problem of re-doing work as in non-inc non-pr repair.
>
> 2016-08-26 7:57 GMT-03:00 Stefano Ortolani :
>
>> Hi Paulo, could you elaborate on 2?
>> I didn't know incremental repairs were not compatible with -pr
>> What is the underlying reason?
>>
ntal repair in all nodes in all DCs
>> sequentially (you should be aware that this will probably generate inter-DC
>> traffic), no need to disable autocompaction or stopping nodes.
>>
>> 2016-08-25 18:27 GMT-03:00 Aleksandr Ivanov :
>>
>>> I’m new in Cassan
n all nodes in all DCs
>> sequentially (you should be aware that this will probably generate inter-DC
>> traffic), no need to disable autocompaction or stopping nodes.
>>
>> 2016-08-25 18:27 GMT-03:00 Aleksandr Ivanov :
>>
>>> I’m new in Cassandra and trying to figure
Hi Paulo, could you elaborate on 2?
I didn't know incremental repairs were not compatible with -pr.
What is the underlying reason?
Regards,
Stefano
On Fri, Aug 26, 2016 at 1:25 AM, Paulo Motta
wrote:
> 1. Migration procedure is no longer necessary after CASSANDRA-8004, and
> since yo
>>> options, so you should run incremental repair in all nodes in all DCs
>>> sequentially (you should be aware that this will probably generate inter-DC
>>> traffic), no need to disable autocompaction or stopping nodes.
>>>
>>> 2016-08-25 18:27 GMT-03:00 Aleksandr Ivano
>> options, so you should run incremental repair in all nodes in all DCs
>> sequentially (you should be aware that this will probably generate inter-DC
>> traffic), no need to disable autocompaction or stopping nodes.
>>
>> 2016-08-25 18:27 GMT-03:00 Aleksandr Ivanov :
>>
gure out how to _start_ using
>> incremental repairs. I have seen article about “Migrating to incremental
>> repairs” but since I didn’t use repairs before at all and I use Cassandra
>> version v3.0.8, then maybe not all steps are needed which are mentioned in
>> Datastax artic
to figure out how to _start_ using
> incremental repairs. I have seen article about “Migrating to incremental
> repairs” but since I didn’t use repairs before at all and I use Cassandra
> version v3.0.8, then maybe not all steps are needed which are mentioned in
> Datastax article.
> S
I’m new to Cassandra and trying to figure out how to _start_ using
incremental repairs. I have seen the article about “Migrating to incremental
repairs”, but since I didn’t use repairs at all before and I use Cassandra
version v3.0.8, maybe not all the steps mentioned in the DataStax article
are needed.
That's what I was thinking. Maybe GC pressure?
Some more details: during anticompaction I have some CFs exploding to 1K
SSTables (going back to ~200 upon completion).
HW specs should be quite good (12 cores/32 GB RAM) but, I admit, still
relying on spinning disks, with ~150GB per node.
Current vers
That's pretty low already, but perhaps you should lower it to see if it
improves the dropped mutations during anti-compaction (even if it increases
repair time); otherwise the problem might be somewhere else. Generally,
dropped mutations are a signal of cluster overload, so if there's nothing
else w
Not yet. Right now I have it set at 16.
Would halving it more or less double the repair time?
On Tue, Aug 9, 2016 at 7:58 PM, Paulo Motta
wrote:
> Anticompaction throttling can be done by setting the usual
> compaction_throughput_mb_per_sec knob on cassandra.yaml or via nodetool
> setcompactiont
Anticompaction throttling can be done by setting the usual
compaction_throughput_mb_per_sec knob on cassandra.yaml or via nodetool
setcompactionthroughput. Did you try lowering that and checking if that
improves the dropped mutations?
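For reference, the knob can be changed either on the fly or in cassandra.yaml (8 here is just the value being discussed):

    # check the current throttle, then lower it at runtime (MB/s)
    nodetool getcompactionthroughput
    nodetool setcompactionthroughput 8

    # or persistently in cassandra.yaml:
    # compaction_throughput_mb_per_sec: 8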
2016-08-09 13:32 GMT-03:00 Stefano Ortolani :
> Hi all,
>
> I
Hi all,
I am running incremental repairs on a weekly basis (can't do it every day
as one single run takes 36 hours), and every time I have at least one node
dropping mutations as part of the process (this happens almost always during the
anticompaction phase). Ironically this leads to a system where repa
UNnecessary"
On Mon, Jun 20, 2016 at 3:21 PM, Bryan Cheng wrote:
I don't use 3.x so hopefully someone with operational experience can chime in,
however my understanding is: 1) Incremental repairs should be the default in
the 3.x release branch and 2) sstable repairedAt is now properly
Sorry, meant to say "therefore manual migration procedure should be
UNnecessary"
On Mon, Jun 20, 2016 at 3:21 PM, Bryan Cheng wrote:
> I don't use 3.x so hopefully someone with operational experience can chime
> in, however my understanding is: 1) Incremental repairs shoul
I don't use 3.x so hopefully someone with operational experience can chime
in, however my understanding is: 1) Incremental repairs should be the
default in the 3.x release branch and 2) sstable repairedAt is now properly
set in all sstables as of 2.2.x for standard repairs and therefore m
Hi,
assuming I have a new, empty Cassandra cluster, how should I start using
incremental repairs? Is incremental repair the default now (as I don't see the -inc
option in nodetool) and nothing is needed to use it, or should we perform
the migration procedure anyway? And what happens to new column fam
ation steps are not done, the first incremental
repair could take a very long time.
Can anyone clarify this point please?
Did anyone try incremental repairs without the migration procedure with
a sizable amount of data to migrate?
How much longer did it take?
Thank you very much for your hel
As far as I know, the docs are quite inconsistent on the matter.
Based on some research here and on IRC, recent versions of Cassandra do not
require anything specific when migrating to incremental repairs but the
-inc switch, even on LCS.
Any confirmation on the matter is more than welcome.
Regards
,
We currently have a 3-node Cassandra cluster with RF = 3.
We are using Cassandra 2.1.7.
We would like to start using incremental repairs.
We have some tables using LCS compaction strategy and some others
using STCS.
Here is the procedure written in the documentation:
To migrate to incremental r
Hi,
We currently have a 3-node Cassandra cluster with RF = 3.
We are using Cassandra 2.1.7.
We would like to start using incremental repairs.
We have some tables using LCS compaction strategy and some others using
STCS.
Here is the procedure written in the documentation:
To migrate to
On Thu, Jan 8, 2015 at 12:28 AM, Marcus Eriksson wrote:
> But, if you are running 2.1 in production, I would recommend that you wait
> until 2.1.3 is out, https://issues.apache.org/jira/browse/CASSANDRA-8316
> fixes a bunch of issues with incremental repairs
>
There are other se
Hi Marcus,
thanks a lot for those pointers. Now further testing can begin - and
I'll wait for 2.1.3. Right now, repair times in production are really
painful; maybe that will get better. At least I hope so :-)
eed "Repaired at" entries on some sstables already. So if I got this
> right, in 2.1.2+ there is nothing to do to switch to incremental repairs
> (apart from running the repairs themself).
>
> But one thing I see during testing is that there are many sstables, with
> smal
with your hint I took a look at sstablemetadata from a non-"migrated" node and
there are indeed "Repaired at" entries on some sstables already. So if I
got this right, in 2.1.2+ there is nothing to do to switch to
incremental repairs (apart from running the repairs themsel
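For reference, this is roughly how the repaired state can be checked per sstable (the data path is a placeholder; "Repaired at: 0" means the sstable is still unrepaired):

    # print the repaired timestamp for each sstable of one table
    for f in /var/lib/cassandra/data/my_ks/my_table-*/*-Data.db; do
        echo "$f: $(sstablemetadata "$f" | grep 'Repaired at')"
    done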
t you wait
until 2.1.3 is out, https://issues.apache.org/jira/browse/CASSANDRA-8316
fixes a bunch of issues with incremental repairs
-pr is sufficient, same rules apply as before, if you run -pr you need to
repair every node
/Marcus
On Thu, Jan 8, 2015 at 9:16 AM, Roland Etzenhammer <
r.etzen
Hi,
I am currently trying to migrate my test cluster to incremental repairs.
These are the steps I'm doing on every node (sketched in shell below):
- touch marker
- nodetool disableautocompaction
- nodetool repair
- cassandra stop
- find all *Data*.db files older than marker
- invoke sstablerepairedset on
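In shell form, roughly (paths are placeholders, and this is only a sketch of the steps above, not a tested recipe):

    touch /tmp/inc-repair-marker
    nodetool disableautocompaction
    nodetool repair
    nodetool drain && sudo service cassandra stop   # stop the node
    # mark only the sstables that existed before the repair as repaired
    # (you may want to restrict the find to the keyspaces you actually repair)
    find /var/lib/cassandra/data -name '*Data*.db' ! -newer /tmp/inc-repair-marker \
        -print0 | xargs -0 sstablerepairedset --really-set --is-repaired
    sudo service cassandra start
    nodetool enableautocompaction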
On Wed, Oct 22, 2014 at 5:47 AM, Marcus Eriksson wrote:
>
> no, if you get a corrupt sstable for example, you will need to run an old
> style repair on that node (without -inc).
>
As a general statement, if you get a corrupt SSTable, restoring it from a
backup (with the node down) should be done
On Wed, Oct 22, 2014 at 2:39 PM, Juho Mäkinen
wrote:
> I'm having problems understanding how incremental repairs are supposed to
> be run.
>
> If I try to do "nodetool repair -inc" cassandra will complain that "It is
> not possible to mix sequential repair a
I'm having problems understanding how incremental repairs are supposed to
be run.
If I try to do "nodetool repair -inc", Cassandra will complain that "It is
not possible to mix sequential repair and incremental repairs". However it
seems that running "nodetool repair
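The usual answer to that error on 2.1, if I remember right, is that incremental repair cannot be combined with the default sequential mode, so it has to be run in parallel (keyspace name below is a placeholder):

    # -par makes the repair parallel, which is required together with -inc on 2.1
    nodetool repair -par -inc my_ks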