Hi,

I took the following steps to get a node that refused to repair back under
control (a rough sketch of the corresponding commands follows the list).

WARNING: This resulted in some data loss for us; YMMV depending on your
replication factor.

* Turn off all row & key caches via cassandra-cli
* Set "disk_access_mode: standard" in cassandra.yaml
* Kill Cassandra on the problematic node
* Move the data, commitlog & saved caches directories on the problematic node
  out of the way
* Set "auto_bootstrap: true" in cassandra.yaml on the problematic node only
* Start the problematic node and wait for bootstrap to finish (watch the
  log / nodetool)
* Set "auto_bootstrap: false" again in cassandra.yaml on the problematic node only
* Run repair on the problematic node, then on all other nodes (rolling, watch
  the log)
* Run a major compaction on the problematic node, then on all other nodes (rolling)
* Revert all the config changes from above (if desired)
* Restart all nodes (rolling)
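
For anyone who wants the concrete commands, here is roughly what the steps
above translate to. MyKeyspace/MyCF, <host> and the /var/lib/cassandra paths
are placeholders for a default install, so adjust them to your own setup:

  # disable row & key caches (repeat for every column family)
  cassandra-cli -h <host>
    use MyKeyspace;
    update column family MyCF with rows_cached = 0 and keys_cached = 0;

  # on the problematic node, edit cassandra.yaml:
  #   disk_access_mode: standard
  #   auto_bootstrap: true

  # stop Cassandra on the problematic node, move the old data out of the way
  mv /var/lib/cassandra/data          /var/lib/cassandra/data.old
  mv /var/lib/cassandra/commitlog     /var/lib/cassandra/commitlog.old
  mv /var/lib/cassandra/saved_caches  /var/lib/cassandra/saved_caches.old

  # start the node again and wait for bootstrap to finish, then
  # (problematic node first, afterwards rolling across the other nodes):
  nodetool -h <host> repair
  nodetool -h <host> compact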

Cheers,

        T.




On 15/08/11 05:30, Philippe wrote:
No, it depends on the consistency level. It's different: for example, QUORUM =
2 for RF=3.

Anyway, does anyone have an answer to my real issue?

Thanks
2011/8/14 Stephen Connolly <stephen.alan.conno...@gmail.com>

    Oh, I know you can run RF 3 on a 3-node cluster. It's more that I thought
    if you have one node fail you have fewer nodes than the RF, so the cluster
    is below the RF and writes might be disabled or something like that, while
    at 4 nodes you still meet the RF...

    - Stephen

    ---
    Sent from my Android phone, so random spelling mistakes, random nonsense
    words and other nonsense are a direct result of using swype to type on the
    screen

    On 14 Aug 2011 16:08, "Peter Schuller" <peter.schul...@infidyne.com> wrote:


