Ah, that was not exactly what you were after. I do not know how long it takes the gossip / failure detector to mark a node down.
In your case, what CL (consistency level) are you using for reads, and what is your RF (replication factor)? The hope would be that taking one node down at a time leaves enough replicas running to serve the request. AFAIK the coordinator makes a full read request to the first replica responsible for the row and only asks the others for digests, so there may be a case where it has to time out reading from that first replica before requesting the full data from the others.
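If your RF allows it, reading at a lower CL means any single live replica can answer while one node is restarting. A minimal sketch, assuming a pycassa client (the keyspace, column family, host, and key names are placeholders, not from the thread):

    import pycassa
    from pycassa.pool import ConnectionPool
    from pycassa.columnfamily import ColumnFamily

    # placeholder names; list more servers than the node being restarted
    pool = ConnectionPool('MyKeyspace', server_list=['node1:9160', 'node2:9160'])
    cf = ColumnFamily(pool, 'MyColumnFamily')

    # read at CL.ONE so any single live replica can satisfy the request
    row = cf.get('some_row_key',
                 read_consistency_level=pycassa.ConsistencyLevel.ONE)

With RF >= 2, CL.ONE reads should keep working with one node down; the digest/timeout behaviour above is what you would still need to watch for.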
A hacky solution may be to reduce rpc_timeout_in_ms so clients fail faster and retry elsewhere.
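For example, in a 0.7-style cassandra.yaml (the value below is only illustrative; the shipped default is 10000, and on 0.6 the equivalent setting is RpcTimeoutInMillis in storage-conf.xml):

    # how long the coordinator waits on replicas before throwing TimedOutException
    rpc_timeout_in_ms: 2000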
May need some adult supervision to answer this one.
Aaron
On 30 Sep 2010, at 10:45 AM, Aaron Morton <aa...@thelastpickle.com> wrote:
Try nodetool drain:

"Flushes all memtables for a node and causes the node to stop accepting write operations. Read operations will continue to work. This is typically used before upgrading a node to a new version of Cassandra."
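For a rolling restart the sequence would be roughly the following (the hostname is a placeholder, and exact flags may differ slightly by version):

    nodetool -h <hostname> drain    # flush memtables, stop accepting writes
    # then restart the Cassandra process on that node

Note that per the description above, drain only stops writes; reads still hit the node until the process actually goes down.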
On 30 Sep 2010, at 10:15 AM, Justin Sanders <jus...@justinjas.com> wrote:

I looked through the documentation but couldn't find anything. I was wondering if there is a way to manually mark a node "down" in the cluster instead of killing the Cassandra process and letting the other nodes figure out the node is no longer up.

The reason I ask is because we are having an issue when we perform rolling restarts on the cluster. Basically, read requests that come in on other nodes will block while they are waiting on the node that was just killed to be marked down. Before they realize the node is offline they will throw a TimedOutException.

If I could mark the node as down ahead of time, this timeout period could be avoided. Any help is appreciated.

Justin