Hello Anuj,

Repairing a range with down replicas may still be valid if a QUORUM of replicas is up and writes are done at (at least) QUORUM. My understanding is that it was disabled as the default behavior in CASSANDRA-2290 to avoid misuse/confusion, and it's not advisable to stay with a down node for a long time without replacing it (at the risk of not being able to achieve consistency if another node goes down).
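For instance, you can check which replicas are actually up before deciding what to do (keyspace/table/key names below are just placeholders):

$ nodetool status my_keyspace                       # 'UN' = Up/Normal; with RF=3 you want at least 2 of 3 replicas UN
$ nodetool getendpoints my_keyspace my_table some_key   # lists the replica nodes for a given key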
Issue https://issues.apache.org/jira/browse/CASSANDRA-10446 was created to allow repairing ranges with down replicas via a special flag (--force). If you're interested, please add comments there and/or propose a patch.

Thanks,
Paulo

2016-01-17 1:33 GMT-03:00 Anuj Wadehra <anujw_2...@yahoo.co.in>:

> Hi
>
> We are on 2.0.14, RF=3 in a 3-node cluster. We use repair -pr. Recently,
> we observed that repair -pr fails on every node if one node is down. Then I
> found the JIRA
> https://issues.apache.org/jira/plugins/servlet/mobile#issue/CASSANDRA-2290,
> where an intentional decision was taken to abort the repair if a replica
> is down.
> I need to understand the reasoning behind aborting the repair instead of
> proceeding with the available replicas.
>
> I have the following concerns with this approach:
> We say that we have a fault-tolerant Cassandra system such that we can
> afford a single node failure, because RF=3 and we read/write at QUORUM. But
> when a node goes down and we are not sure how much time will be needed to
> restore it, the health of the entire system is in question, as the end of the
> gc grace period is approaching and we are not able to run repair -pr on any
> of the nodes.
>
> Then there is a dilemma:
>
> Whether to remove the faulty node well before the gc grace period expires, so
> that we have enough time to protect the data by repairing the other two nodes?
> This may cause massive streaming, which may turn out to be unnecessary if we
> are able to bring the faulty node back up before the gc grace period ends.
>
> OR
>
> Wait and hope that the issue will be resolved before the gc grace period
> expires, leaving us some buffer to run repair -pr on all nodes.
>
> OR
>
> Increase the gc grace period temporarily. Then we need capacity planning to
> accommodate the extra storage required for the extra gc grace that may be
> needed in node failure scenarios.
>
> Besides knowing the reasoning behind the decision taken in CASSANDRA-2290,
> I need to understand the recommended approach for maintaining a fault-tolerant
> system that can handle node failures such that repair can run smoothly
> and system health is maintained at all times.
>
> Thanks
> Anuj
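P.S. If you do go with temporarily increasing the gc grace period, it is a single statement per table, and you can lower it again once the node is back and all nodes have been repaired (keyspace/table names and the value below are just placeholders):

$ cqlsh -e "ALTER TABLE my_keyspace.my_table WITH gc_grace_seconds = 1728000;"   # e.g. 20 days
# ...bring the node back (or replace it), run 'nodetool repair -pr' on every node,
# then restore the original gc_grace_seconds the same way.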