OK, thanks to the excellent help of the DataStax folks, some of the more severe inconsistencies in my Cassandra cluster were fixed (after a node went down, compactions failed, etc.).

I'm still having the problems I reported in the "repairs 0.8.6" thread.

The thing is, why is it so easy for the repair process to break? I admit I'm not sure why nodes are reported as "dead" once in a while, but it's absolutely certain that they don't simply fall off the edge or get knocked out for 10 minutes or anything like that. Why is there no built-in tolerance/retry mechanism, so that a node that seems silent for a minute is contacted again later, or, better yet, a different node holding a relevant replica is contacted instead?
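To make the suggestion concrete, here is a minimal sketch of the kind of tolerance I mean: retry a silent node a few times with backoff, then fall back to another replica. This is illustrative pseudologic, not Cassandra's actual repair code; the `contact` callable and the replica list are hypothetical stand-ins.

```python
import time

def fetch_with_retry(replicas, contact, retries=3, backoff_s=1.0):
    """Try each replica in turn; retry transient failures with backoff.

    `replicas` is an ordered list of candidate nodes; `contact` is a
    hypothetical callable that talks to one node and raises
    ConnectionError when the node appears silent.
    """
    last_error = None
    for node in replicas:
        for attempt in range(retries):
            try:
                return contact(node)
            except ConnectionError as e:
                last_error = e
                time.sleep(backoff_s * (attempt + 1))  # linear backoff
        # This node stayed silent through all retries; instead of
        # failing the whole operation, move on to the next replica.
    raise RuntimeError(f"all replicas failed: {last_error}")
```

With something like this, a node that is merely slow for a minute would not abort the whole repair; only when every replica is genuinely unreachable would the operation fail.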

As was evident from some of the presentations at Cassandra-NYC yesterday, failed compactions and repairs are a major problem for a number of users; the cluster can quickly become unusable. I think it would be a good idea to build more robustness into these procedures.

Regards

Maxim
