Re: Marking each node down before rolling restart

2010-09-29 Thread Aaron Morton
I just ran nodetool drain on a 3-node cluster that was not serving any requests; the other nodes picked up the change in about 10 seconds. On the node I drained:

 INFO [RMI TCP Connection(39)-192.168.34.31] 2010-09-30 15:18:03,281 StorageService.java (line 474) Starting drain process
 INFO [RMI TCP Co…

Re: Marking each node down before rolling restart

2010-09-29 Thread Justin Sanders
It takes about 15 seconds after killing a node before the other nodes report it as down. We are running a 9-node cluster with RF=3, all reads and writes at QUORUM. I was making the same assumption you are: that an operation would complete fine at quorum with only one node down, since the…
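The quorum arithmetic behind that assumption can be sketched as follows (a minimal illustration mirroring the RF=3 setup described above, not Cassandra's actual code). Note the caveat raised in this thread: during the window before the failure detector marks a node down, a coordinator may still route requests to the dead replica and time out, even though quorum is theoretically available.

```python
# Quorum math for Cassandra-style replication (illustrative sketch).
# With RF=3, QUORUM = floor(3/2) + 1 = 2, so reads and writes at QUORUM
# can tolerate exactly one replica of any given key being down.

def quorum(replication_factor: int) -> int:
    """Number of replicas that must respond for a QUORUM operation."""
    return replication_factor // 2 + 1

def max_replicas_down(replication_factor: int) -> int:
    """How many replicas of a key can be down while QUORUM still succeeds."""
    return replication_factor - quorum(replication_factor)

rf = 3
print(quorum(rf))             # replicas needed per operation -> 2
print(max_replicas_down(rf))  # replicas of one key that may be down -> 1
```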

Re: Marking each node down before rolling restart

2010-09-29 Thread Aaron Morton
Ah, that was not exactly what you were after. I do not know how long it takes the gossip / failure detector to detect a down node. In your case, what is the CF you're using for reads, and what is your RF? The hope would be that taking one node down at a time would leave enough servers running to serve the…
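For reference, Cassandra's failure detector is a phi-accrual detector: each node tracks heartbeat inter-arrival times from its peers and converts the silence since the last heartbeat into a suspicion value phi, convicting the peer once phi crosses a threshold (`phi_convict_threshold`, 8 by default). Below is a minimal sketch assuming an exponential model of heartbeat arrivals; the real implementation differs in its statistics, but it shows why detection takes on the order of seconds rather than being instant:

```python
import math

# Hedged sketch of a phi-accrual failure detector: suspicion grows with
# the time since the last heartbeat, relative to the mean heartbeat gap.
# Assumes exponentially distributed inter-arrival times for simplicity.

def phi(seconds_since_last_heartbeat: float, mean_heartbeat_interval: float) -> float:
    # P(a heartbeat arrives even later than now) under the exponential model:
    p_later = math.exp(-seconds_since_last_heartbeat / mean_heartbeat_interval)
    return -math.log10(p_later)

PHI_CONVICT_THRESHOLD = 8.0  # Cassandra's default phi_convict_threshold

# With 1s heartbeats, phi crosses the threshold after roughly 18s of
# silence under this model, the same order as the ~15s observed on the list.
for t in (5.0, 10.0, 20.0):
    print(t, phi(t, 1.0), phi(t, 1.0) > PHI_CONVICT_THRESHOLD)
```

Lowering `phi_convict_threshold` makes detection faster at the cost of more false convictions on a flaky network.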

Re: Marking each node down before rolling restart

2010-09-29 Thread Aaron Morton
Try nodetool drain: "Flushes all memtables for a node and causes the node to stop accepting write operations. Read operations will continue to work. This is typically used before upgrading a node to a new version of Cassandra."

http://www.riptano.com/docs/0.6.5/utils/nodetool

Aaron

On 30 Sep, 2010, at 10:…
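Put together, the per-node sequence discussed in this thread is: drain the node, restart the Cassandra process, wait for the cluster to see it back up, then move on. A hedged sketch as a dry-run helper — the JMX port (8080 was the 0.6-era default) and the init-script restart command are assumptions to adjust for your environment:

```python
import subprocess

def rolling_restart_commands(host: str) -> list:
    """Commands to run, in order, for one node during a rolling restart."""
    return [
        ["nodetool", "-h", host, "-p", "8080", "drain"],  # flush memtables, stop accepting writes
        ["sudo", "/etc/init.d/cassandra", "restart"],     # placeholder restart command
    ]

def restart_node(host: str, dry_run: bool = True) -> list:
    """Return the planned commands; execute them only when dry_run=False."""
    cmds = rolling_restart_commands(host)
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)  # stop on the first failure
    return cmds

# Dry run: inspect the plan without touching the cluster.
for cmd in restart_node("192.168.34.31"):
    print(" ".join(cmd))
```

Only one node should be cycled at a time, so that QUORUM operations keep at least two live replicas per key.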

Marking each node down before rolling restart

2010-09-29 Thread Justin Sanders
I looked through the documentation but couldn't find anything. I was wondering if there is a way to manually mark a node "down" in the cluster, instead of killing the Cassandra process and letting the other nodes figure out that the node is no longer up. The reason I ask is because we are having an iss…