We (The Last Pickle) forked Reaper a while ago and added support for Cassandra 3.0.

https://github.com/thelastpickle/cassandra-reaper

We set up a mailing list here for Reaper-specific questions:
https://groups.google.com/forum/#!forum/tlp-apache-cassandra-reaper-users

Jon

> On Apr 21, 2017, at 1:11 PM, eugene miretsky <eugene.miret...@gmail.com> 
> wrote:
> 
> The Spotify repo (https://github.com/spotify/cassandra-reaper) no longer
> seems to be maintained. I'm not sure it even supports Cassandra 3.0
> (https://github.com/spotify/cassandra-reaper/issues/140).
> 
> Regardless, in Cassandra 3.0 repairs are, by default:
> 1) Incremental, which means SSTables that have already been repaired are
> not repaired again.
> 2) Parallel, which means all replicas build their Merkle trees and repair
> at the same time, rather than one replica at a time (see the example
> invocations below).
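> 
> For illustration, my understanding of the relevant nodetool invocations in
> 3.0 (flags worth double-checking against nodetool help repair on your
> version):
> 
>     nodetool repair              # 3.0 default: incremental, parallel
>     nodetool repair -full -seq   # full, sequential (pre-2.2 style) repair
>     nodetool repair -pr          # repair only this node's primary ranges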
> 
> I suppose that in the worst case, calling repair from X nodes could trigger
> X repair processes (each of which would trigger Merkle tree building on
> every node). But I would assume that Cassandra prevents this by making sure
> there is only one repair process running per node.
> 
> 
> 
> On Fri, Apr 21, 2017 at 2:43 AM, Oskar Kjellin <oskar.kjel...@gmail.com> wrote:
> It will create more overhead on your cluster. Consider using something like
> Reaper to manage repairs.
> 
> > On 21 Apr 2017, at 00:57, eugene miretsky <eugene.miret...@gmail.com> wrote:
> >
> > In Cassandra 3.0 the default nodetool repair behaviour is incremental and 
> > parallel.
> > Is there a downside to triggering repair from multiple nodes at the same 
> > time?
> >
> > Basically, instead of scheduling a cron job on one node to run repair, I
> > want to schedule the job on every node (this way, I don't have to worry
> > about repair if that one node goes down); a sketch of such a crontab
> > entry is below. Alternatively, I could build a smarter solution for HA
> > repair jobs, but that seems like overkill.
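> >
> > Something like the following crontab entry on each node, with the day and
> > hour staggered per node (the log path is just a placeholder, and cron may
> > need the full path to nodetool):
> >
> >     # e.g. node1: repair this node's primary ranges every Sunday at 02:00
> >     0 2 * * 0  nodetool repair -pr >> /var/log/cassandra/repair.log 2>&1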
> 
