Re: Incremental repairs getting stuck a lot

2021-11-26 Thread James Brown
I filed this as CASSANDRA-17172. On Fri, Nov 26, 2021 at 5:33 PM Dinesh Joshi wrote: > Could you file a jira with the details? > Dinesh > On Nov 26, 2021, at 2:40 PM, James Brown wrote: > We're on 4.0.1 and switched to incremental

Re: Incremental repairs getting stuck a lot

2021-11-26 Thread Dinesh Joshi
Could you file a jira with the details? Dinesh > On Nov 26, 2021, at 2:40 PM, James Brown wrote: > We're on 4.0.1 and switched to incremental repairs a couple of months ago. They work fine about 95% of the time, but once in a while a session will get stuck and will have to be cancel
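
For anyone hitting the same thing on 4.0.x, stuck sessions can usually be inspected and cancelled with nodetool repair_admin. A minimal sketch (the session id is made up, and exact flags may vary across 4.0 minor releases):

  # List coordinated incremental repair sessions known to this node
  nodetool repair_admin list
  # Cancel a stuck session by id (hypothetical id shown)
  nodetool repair_admin cancel --session 62f1a120-4ee5-11ec-8000-000000000000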

Re: Incremental repairs after a migration?

2017-11-11 Thread kurt greaves
You can get away with loading from only one node if you're positive all data is consistent. A repair prior to loading should be enough, but if that doesn't work just load from all nodes. On 11 Nov. 2017 23:15, "Brice Figureau" wrote: > On 10/11/17 21:18, kurt greaves wrote: > If everything goe

Re: Incremental repairs after a migration?

2017-11-11 Thread Brice Figureau
On 10/11/17 21:18, kurt greaves wrote: > If everything goes smoothly the next incremental should cut it, but a full repair post load is probably a good idea anyway. Make sure you sstableload every sstable from every node if you want to keep consistency. If the previous cluster had 3 nodes with

Re: Incremental repairs after a migration?

2017-11-10 Thread kurt greaves
If everything goes smoothly the next incremental should cut it, but a full repair post load is probably a good idea anyway. Make sure you sstableload every sstable from every node if you want to keep consistency.

Re: incremental repairs with -pr flag?

2017-01-13 Thread Bruno Lavoie
Another point: I've done another test on my 5 node cluster. Created a keyspace with replication factor of 5 and inserted some data in it. Ran a full repair on each node to make sstables appear on disk. Then ran, multiple times on each node: 1 - nodetool repair 2 - nodetool repair -pr Due to incr

Re: incremental repairs with -pr flag?

2017-01-13 Thread Bruno Lavoie
Thanks for your reply, but I can't figure out why it's not recommended by DataStax to run primary-range with incremental repair... It's just doing less work on each repair call on the repaired node. In the end, when all the nodes are repaired using either method, is all data equally consistent?

Re: incremental repairs with -pr flag?

2017-01-11 Thread Paulo Motta
The objective of non-incremental primary-range repair is to avoid redoing work, but with incremental repair anticompaction will segregate repaired data, so no extra work is done on the next repair. You should run nodetool repair [ks] [table] on all nodes sequentially. The more often you run, the sm
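
In practice that sequential schedule is just the following (keyspace/table names are placeholders; on pre-2.2 clusters add -inc as in the other threads):

  # Run on node1, wait for it to finish, then node2, and so on
  nodetool repair my_keyspace my_table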

Re: incremental repairs with -pr flag?

2017-01-10 Thread Bruno Lavoie
On 2016-10-24 13:39 (-0500), Alexander Dejanovski wrote: > Hi Sean, > In order to mitigate its impact, anticompaction is not fully executed when incremental repair is run with -pr. What you'll observe is that running repair on all nodes with -pr will leave sstables marked as unrepaired

Re: Incremental repairs leading to unrepaired data

2016-11-01 Thread kurt Greaves
Can't say I have too many ideas. If load is low during the repair it shouldn't be happening. Your disks aren't overutilised, correct? No other processes writing loads of data to them?

Re: Incremental repairs leading to unrepaired data

2016-11-01 Thread Stefano Ortolani
That is not happening anymore since I am repairing a keyspace with much less data (the other one is still there in write-only mode). The command I am using is the most boring (I even shed the -pr option so as to keep anticompactions to a minimum): nodetool -h localhost repair. It's executed sequentially

Re: Incremental repairs leading to unrepaired data

2016-10-31 Thread kurt Greaves
Blowing out to 1k SSTables seems a bit full on. What args are you passing to repair? Kurt Greaves k...@instaclustr.com www.instaclustr.com On 31 October 2016 at 09:49, Stefano Ortolani wrote: > I've collected some more data-points, and I still see dropped mutations with compaction_throughput_

Re: Incremental repairs leading to unrepaired data

2016-10-31 Thread Stefano Ortolani
I've collected some more data-points, and I still see dropped mutations with compaction_throughput_mb_per_sec set to 8. The only notable thing regarding the current setup is that I have another keyspace (not being repaired though) with really wide rows (100MB per partition), but that shouldn't have
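
A quick way to watch for this while the repair is running (standard nodetool commands; nothing here is specific to the setup in this thread):

  # Dropped MUTATION messages show up in the thread-pool stats
  nodetool tpstats
  # Pending compactions/anticompactions driven by the repair
  nodetool compactionstats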

RE: incremental repairs with -pr flag?

2016-10-25 Thread Sean Bridges
Thanks. Sean From: Alexander Dejanovski [a...@thelastpickle.com] Sent: Monday, October 24, 2016 10:39 AM To: user@cassandra.apache.org Subject: Re: incremental repairs with -pr flag? Hi Sean, In order to mitigate its impact, anticompaction is not fully executed

Re: incremental repairs with -pr flag?

2016-10-24 Thread Alexander Dejanovski
Hi Sean, In order to mitigate its impact, anticompaction is not fully executed when incremental repair is run with -pr. What you'll observe is that running repair on all nodes with -pr will leave sstables marked as unrepaired on all of them. Then, if you think about it, you realize it's no big dea
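
You can check that repaired/unrepaired state directly on disk with sstablemetadata (the path below is hypothetical; adjust to your data directory layout):

  # "Repaired at: 0" means the sstable is still in the unrepaired set
  sstablemetadata /var/lib/cassandra/data/ks/table/ks-table-ka-1-Data.db | grep "Repaired at"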

Re: Incremental repairs in 3.0

2016-09-13 Thread Jean Carlo
Hi Paulo! Sorry, there was something I was doing wrong. Now I can see that the value of Repaired At changes even if there is no streaming. I am using cassandra 2.1.14 and the command was nodetool repair -inc -par. Anyway good to know this: > If you're using subrange repair, please note that this

Re: Incremental repairs in 3.0

2016-09-12 Thread Paulo Motta
> I truncated a table (LCS), then I inserted one line and used nodetool flush to have all the sstables. Using a RF of 3, I ran a repair -inc directly and observed that the value of Repaired At was equal to 0. Were you able to troubleshoot this? The value of repairedAt should be mutated even when there is

Re: Incremental repairs in 3.0

2016-09-07 Thread Jean Carlo
Well, I did a small test on my cluster and I didn't get the results I was expecting. I truncated a table (LCS), then I inserted one line and used nodetool flush to have all the sstables. Using a RF of 3, I ran a repair -inc directly and observed that the value of Repaired At was equal to 0. So I start t
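
The test as described boils down to roughly this sequence (keyspace/table/column names are placeholders):

  cqlsh -e "TRUNCATE ks.lcs_table;"
  cqlsh -e "INSERT INTO ks.lcs_table (id, val) VALUES (1, 'x');"
  nodetool flush ks lcs_table
  # 2.1-style incremental, parallel repair
  nodetool repair -inc -par ks lcs_table
  # then inspect "Repaired at" with sstablemetadata on the resulting sstable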

Re: Incremental repairs in 3.0

2016-09-06 Thread Bryan Cheng
Hi Jean, This blog post is a pretty good resource: http://www.datastax.com/dev/blog/anticompaction-in-cassandra-2-1 I believe in 2.1.x you don't need to do the manual migration procedure, but if you run regular repairs and the data set under LCS is fairly large (what this means will probably depe
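
For reference, the manual migration the blog post describes is, per node, roughly this (a sketch only; treat the post itself as authoritative, and keyspace/table/paths here are placeholders):

  # 1. Disable autocompaction for the table being migrated
  nodetool disableautocompaction ks table
  # 2. Run a normal full repair
  nodetool repair ks table
  # 3. Stop the node, then mark its existing sstables as repaired
  sstablerepairedset --really-set --is-repaired /var/lib/cassandra/data/ks/table/ks-table-ka-1-Data.db
  # 4. Restart the node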

Re: Incremental repairs in 3.0

2016-09-06 Thread Jean Carlo
Hi @Bryan When you said "sizable amount of data", you meant a huge amount of data, right? Our big table is in LCS and if we use the migration process we will need to run a sequential repair over this table for a long time. We are planning to go to incremental repairs using version 2.1.14 Saludos Jean Carlo

Re: Incremental repairs leading to unrepaired data

2016-08-10 Thread Stefano Ortolani
That's what I was thinking. Maybe GC pressure? Some more details: during anticompaction I have some CFs exploding to 1K SSTables (back to ~200 upon completion). HW specs should be quite good (12 cores/32 GB RAM) but, I admit, still relying on spinning disks, with ~150GB per node. Current vers

Re: Incremental repairs leading to unrepaired data

2016-08-10 Thread Paulo Motta
That's pretty low already, but perhaps you should lower it to see if that improves the dropped mutations during anticompaction (even if it increases repair time); otherwise the problem might be somewhere else. Generally dropped mutations are a signal of cluster overload, so if there's nothing else w

Re: Incremental repairs leading to unrepaired data

2016-08-10 Thread Stefano Ortolani
Not yet. Right now I have it set at 16. Would halving it more or less double the repair time? On Tue, Aug 9, 2016 at 7:58 PM, Paulo Motta wrote: > Anticompaction throttling can be done by setting the usual compaction_throughput_mb_per_sec knob in cassandra.yaml or via nodetool setcompactiont

Re: Incremental repairs leading to unrepaired data

2016-08-09 Thread Paulo Motta
Anticompaction throttling can be done by setting the usual compaction_throughput_mb_per_sec knob in cassandra.yaml or via nodetool setcompactionthroughput. Did you try lowering that and checking if that improves the dropped mutations? 2016-08-09 13:32 GMT-03:00 Stefano Ortolani: > Hi all, I
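
Concretely, the two ways to set it (the value 8 is just the one tried later in this thread):

  # In cassandra.yaml (takes effect on restart):
  #   compaction_throughput_mb_per_sec: 8
  # Or at runtime, no restart needed:
  nodetool setcompactionthroughput 8
  # Verify the current value:
  nodetool getcompactionthroughput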

Re: Incremental repairs in 3.0

2016-06-21 Thread Vlad
Thanks for the answer! > It may still be a good idea to manually migrate if you have a sizable amount of data No, it would be a brand new ;-) 3.0 cluster On Tuesday, June 21, 2016 1:21 AM, Bryan Cheng wrote: Sorry, meant to say "therefore manual migration procedure should be UNnecessary"

Re: Incremental repairs in 3.0

2016-06-20 Thread Bryan Cheng
Sorry, meant to say "therefore manual migration procedure should be UNnecessary" On Mon, Jun 20, 2016 at 3:21 PM, Bryan Cheng wrote: > I don't use 3.x so hopefully someone with operational experience can chime in, however my understanding is: 1) Incremental repairs should be the default in t

Re: Incremental repairs in 3.0

2016-06-20 Thread Bryan Cheng
I don't use 3.x, so hopefully someone with operational experience can chime in; however, my understanding is: 1) Incremental repairs should be the default in the 3.x release branch and 2) sstable repairedAt is now properly set in all sstables as of 2.2.x for standard repairs and therefore manual migr

Re: incremental repairs

2015-01-08 Thread Robert Coli
On Thu, Jan 8, 2015 at 12:28 AM, Marcus Eriksson wrote: > But, if you are running 2.1 in production, I would recommend that you wait until 2.1.3 is out; https://issues.apache.org/jira/browse/CASSANDRA-8316 fixes a bunch of issues with incremental repairs. There are other serious issues with

Re: incremental repairs

2015-01-08 Thread Roland Etzenhammer
Hi Marcus, thanks a lot for those pointers. Now further testing can begin - and I'll wait for 2.1.3. Right now, repair times in production are really painful; maybe that will get better. At least I hope so :-)

Re: incremental repairs

2015-01-08 Thread Marcus Eriksson
Yes, you should reenable autocompaction /Marcus On Thu, Jan 8, 2015 at 10:33 AM, Roland Etzenhammer <r.etzenham...@t-online.de> wrote: > Hi Marcus, thanks for that quick reply. I did also look at: http://www.datastax.com/documentation/cassandra/2.1/cassandra/operations/ops_repair_nod
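
That reenabling step is a one-liner once the post-repair work is done (keyspace/table names are placeholders):

  nodetool enableautocompaction ks table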

Re: incremental repairs

2015-01-08 Thread Roland Etzenhammer
Hi Marcus, thanks for that quick reply. I did also look at: http://www.datastax.com/documentation/cassandra/2.1/cassandra/operations/ops_repair_nodes_c.html which describes the same process; it's for 2.1.x, so I see that 2.1.2+ is not covered there. I did upgrade my test cluster to 2.1.2 and with y

Re: incremental repairs

2015-01-08 Thread Marcus Eriksson
If you are on 2.1.2+ (or using STCS) you don't need those steps (we should probably update the blog post). Now we keep separate levelings for the repaired/unrepaired data and move the sstables over after the first incremental repair. But, if you are running 2.1 in production, I would recommend that you wa