Thanks! We retrieved all the ranges and started running repair on them. We
ran through all of them, but a single range brought the ENTIRE cluster
down. All of the other ranges repaired quickly and smoothly; this one
problematic range reliably takes the cluster down every time we try to run
repair on it. Any thoughts on why one specific range would be a
troublemaker?


On Tue, Jul 1, 2014 at 11:44 AM, Ken Hancock <ken.hanc...@schange.com>
wrote:

> I also expanded on a script originally written by Matt Stump at DataStax.
> The README explains the reasoning behind requiring sub-range repairs.
>
> https://github.com/hancockks/cassandra_range_repair
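>
> For anyone scripting this by hand, the underlying operation is just a
> nodetool repair restricted to one token sub-range via -st/-et; roughly
> (host, keyspace and column family below are placeholders):
>
>     nodetool -h 10.0.0.1 repair -st <start_token> -et <end_token> \
>         my_keyspace my_cf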
>
>
>
>
> On Mon, Jun 30, 2014 at 10:20 PM, Phil Burress <philburress...@gmail.com>
> wrote:
>
>> @Paulo, this is very cool! Thanks very much for the link!
>>
>>
>> On Mon, Jun 30, 2014 at 9:37 PM, Paulo Ricardo Motta Gomes <
>> paulo.mo...@chaordicsystems.com> wrote:
>>
>>> In case you find it useful, I created a tool where you input the node IP,
>>> keyspace, column family, and optionally the number of partitions (default:
>>> 32K), and it outputs the list of subranges for that node, CF, and partition
>>> size: https://github.com/pauloricardomg/cassandra-list-subranges
>>>
>>> So you can basically iterate over the output of that and do subrange
>>> repair for each node and cf, maybe in parallel. :)
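>>>
>>> A minimal sketch of that iteration (Python), assuming the tool's output
>>> has been saved to a file with one "start_token end_token" pair per line
>>> -- check the actual output format first; host, keyspace and CF below
>>> are placeholders:
>>>
>>> import subprocess
>>>
>>> HOST = "10.0.0.1"
>>> KEYSPACE = "my_keyspace"
>>> CF = "my_cf"
>>>
>>> with open("subranges.txt") as f:
>>>     for line in f:
>>>         start, end = line.split()[:2]
>>>         # repair only this sub-range: smaller Merkle trees, less heap
>>>         subprocess.check_call([
>>>             "nodetool", "-h", HOST, "repair",
>>>             "-st", start, "-et", end,
>>>             KEYSPACE, CF,
>>>         ])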
>>>
>>>
>>> On Mon, Jun 30, 2014 at 10:26 PM, Phil Burress <philburress...@gmail.com
>>> > wrote:
>>>
>>>> One last question. Any tips on scripting a subrange repair?
>>>>
>>>>
>>>> On Mon, Jun 30, 2014 at 7:12 PM, Phil Burress <philburress...@gmail.com
>>>> > wrote:
>>>>
>>>>> We are running repair -pr. We've tried subrange manually and that
>>>>> seems to work ok. I guess we'll go with that going forward. Thanks for all
>>>>> the info!
>>>>>
>>>>>
>>>>> On Mon, Jun 30, 2014 at 6:52 PM, Jaydeep Chovatia <
>>>>> chovatia.jayd...@gmail.com> wrote:
>>>>>
>>>>>> Are you running a full repair or on a subset? If you are running a full
>>>>>> repair, then try running on a sub-set of ranges, which means less data to
>>>>>> worry about during each repair; that helps the Java heap in general. You
>>>>>> will have to do multiple iterations to cover the entire range, but at
>>>>>> least it will work.
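>>>>>>
>>>>>> A rough sketch of that iteration (Python), assuming Murmur3Partitioner
>>>>>> and a non-wrapping range taken from nodetool ring (tokens, keyspace and
>>>>>> CF are placeholders):
>>>>>>
>>>>>> def split_range(start, end, n):
>>>>>>     # split (start, end] into n roughly equal sub-ranges
>>>>>>     step = (end - start) // n
>>>>>>     bounds = [start + i * step for i in range(n)] + [end]
>>>>>>     return zip(bounds[:-1], bounds[1:])
>>>>>>
>>>>>> for st, et in split_range(-9223372036854775808, 0, 32):
>>>>>>     print("nodetool repair -st %d -et %d my_ks my_cf" % (st, et))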
>>>>>>
>>>>>> -jaydeep
>>>>>>
>>>>>>
>>>>>> On Mon, Jun 30, 2014 at 3:22 PM, Robert Coli <rc...@eventbrite.com>
>>>>>> wrote:
>>>>>>
>>>>>>> On Mon, Jun 30, 2014 at 3:08 PM, Yuki Morishita <mor.y...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Repair has used the snapshot option by default since 2.0.2 (see NEWS.txt).
>>>>>>>>
>>>>>>>
>>>>>>> As a general meta comment, the process by which operationally
>>>>>>> important defaults change in Cassandra seems ad-hoc and sub-optimal.
>>>>>>>
>>>>>>> For the record, my view was that this change, which makes repair even
>>>>>>> slower than it previously was, was probably overly optimistic.
>>>>>>>
>>>>>>> It's also weird in that it changes default behavior which has been
>>>>>>> unchanged since the start of Cassandra time and is therefore probably
>>>>>>> automated against. Why was it so critically important to switch to 
>>>>>>> snapshot
>>>>>>> repair that it needed to be shotgunned as a new default in 2.0.2?
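>>>>>>>
>>>>>>> That said, for anyone who wants the pre-2.0.2 behaviour back: if I'm
>>>>>>> reading NEWS.txt right, sequential (snapshot) repair is only the new
>>>>>>> default, and the old parallel mode is still available explicitly, e.g.
>>>>>>> (keyspace name is a placeholder):
>>>>>>>
>>>>>>>     nodetool repair -par my_keyspace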
>>>>>>>
>>>>>>> =Rob
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> Paulo Motta
>>>
>>> Chaordic | Platform
>>> www.chaordic.com.br
>>> +55 48 3232.3200
>>>
>>
>>
>
>
> --
> Ken Hancock | System Architect, Advanced Advertising
> SeaChange International
> 50 Nagog Park
> Acton, Massachusetts 01720
> ken.hanc...@schange.com | www.schange.com | NASDAQ:SEAC
> Office: +1 (978) 889-3329 | Google Talk: ken.hanc...@schange.com
> Skype: hancockks | Yahoo IM: hancockks
> LinkedIn: http://www.linkedin.com/in/kenhancock
>
