There were a few streaming bugs fixed between 2.1.13 and 2.1.15 (see
CHANGES.txt for more details), so I'd recommend upgrading to 2.1.15 to
avoid hitting them.

2016-09-28 9:08 GMT-03:00 Alain RODRIGUEZ <arodr...@gmail.com>:

> Hi Anubhav,
>
>
>> I’m considering doing subrange repairs (https://github.com/BrianGallew/cassandra_range_repair/blob/master/src/range_repair.py)
>>
>
> I used this script a lot, and quite successfully.
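
For reference, the idea behind that script is simple: split the full token
ring into many small slices and run "nodetool repair -st <start> -et <end>"
for each slice, so each repair session handles only a tiny amount of data.
A minimal sketch of that splitting logic (the step count and keyspace name
here are illustrative; the real script also discovers the ranges owned by
each node rather than walking the whole ring):

```python
# Sketch of subrange repair: divide the Murmur3Partitioner token ring
# into equal slices and emit one nodetool repair command per slice.
# The keyspace name "my_ks" and step count are assumptions for the example.

MIN_TOKEN = -2**63      # Murmur3Partitioner lower bound
MAX_TOKEN = 2**63 - 1   # Murmur3Partitioner upper bound

def subranges(steps):
    """Yield (start, end) token pairs covering the whole ring contiguously."""
    width = (MAX_TOKEN - MIN_TOKEN) // steps
    start = MIN_TOKEN
    for i in range(steps):
        # Last slice absorbs any rounding remainder so the ring is fully covered.
        end = MAX_TOKEN if i == steps - 1 else start + width
        yield (start, end)
        start = end

def repair_commands(keyspace, steps):
    """Build the nodetool invocations, one per token slice."""
    return [
        "nodetool repair -st {} -et {} {}".format(st, et, keyspace)
        for st, et in subranges(steps)
    ]

# Example: 8 slices for a hypothetical keyspace "my_ks"
for cmd in repair_commands("my_ks", 8):
    print(cmd)
```

In practice the script uses many more, much smaller steps per node, which
keeps each Merkle-tree calculation and streaming session short and easy to
retry if it fails.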
>
> Another working option that people are using is:
>
> https://github.com/spotify/cassandra-reaper
>
> Alexander, a coworker integrated an existing UI and made it compatible
> with incremental repairs:
>
> Incremental repairs on Reaper: https://github.com/adejanovski/cassandra-reaper/tree/inc-repair-that-works
> UI integration with incremental repairs on Reaper: https://github.com/adejanovski/cassandra-reaper/tree/inc-repair-support-with-ui
>
> as I’ve heard from folks that incremental repairs simply don’t work even
>> in 3.x (Yeah, that’s a strong statement but I heard that from multiple
>> folks at the Summit).
>>
>
> Alexander also gave a talk about repairs at the Summit (including
> incremental repairs), and someone from Netflix did a good one too, not
> mentioning incremental repairs but with some benchmarks and tips for
> running repairs. You might want to check one of those (or both):
>
> https://www.youtube.com/playlist?list=PLm-EPIkBI3YoiA-02vufoEj4CgYvIQgIk
>
> I believe they haven't been released by DataStax yet, but they probably
> will be sometime soon.
>
> Repair is something all companies with large setups are struggling with:
> Spotify built Reaper, and Netflix gave a talk about repairs presenting
> the range_repair.py script and much more. But I know there is some work
> going on to improve things.
>
> Meanwhile, given the load per node (600 GB is big, but not that huge) and
> the number of nodes (400 is quite high), I would say the hardest part for
> you will be the scheduling: avoiding harm to the cluster while making
> sure all the nodes get repaired. I believe Reaper might be a better match
> in your case, as it does that quite well from what I've heard, though I
> am not really sure.
>
> C*heers,
> -----------------------
> Alain Rodriguez - @arodream - al...@thelastpickle.com
> France
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> 2016-09-26 23:51 GMT+02:00 Anubhav Kale <anubhav.k...@microsoft.com>:
>
>> Hello,
>>
>>
>>
>> We run Cassandra 2.1.13 (we don’t have plans to upgrade yet). What is
>> the best way to run repairs at scale (400 nodes, each holding ~600 GB)
>> that actually works?
>>
>>
>>
>> I’m considering doing subrange repairs (https://github.com/BrianGallew/cassandra_range_repair/blob/master/src/range_repair.py) as I’ve heard
>> from folks that incremental repairs simply don’t work even in 3.x (Yeah,
>> that’s a strong statement but I heard that from multiple folks at the
>> Summit).
>>
>>
>>
>> Any guidance would be greatly appreciated!
>>
>>
>>
>> Thanks,
>>
>> Anubhav
>>
>
>
