Hi Yuji,
Thanks for your reply.
That's what I don't understand. Since the writes are done at LOCAL_QUORUM, even
if a node fails, there should be enough replicas to satisfy the request,
shouldn't there?
Otherwise, the whole idea behind no single point of failure is only
partially true. Or is there something I'm missing?
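(For what it's worth, assuming RF=3 in the local DC, which the thread doesn't
actually state: LOCAL_QUORUM needs floor(3/2) + 1 = 2 local replicas, so a
write still succeeds with one local node down, and the missed replica is
covered by a hint once that node comes back.)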
You don't have a viable solution because you are not taking a snapshot as a
starting point. After a while you will have a lot of backup data. Using
the backups to get your cluster to a given state will involve copying a
very large amount of backup data, possibly more than the capacity of
your cluster.
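In case it's useful, a minimal sketch of that approach (my_ks is just a
placeholder keyspace name):

    # take a full snapshot as the baseline to restore from
    nodetool snapshot -t baseline_20170115 my_ks
    # then enable incremental backups so only newly flushed SSTables
    # need to be copied off-node afterwards
    nodetool enablebackup

The snapshot ends up under each table's data directory in snapshots/<tag>,
and incremental files under backups/, so the off-node copy only has to pick
up what changed since the baseline.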
The setup is not on cloud. We have a few nodes in one DC (DC1) and the same
number of nodes in the other DC (DC2). We have a dedicated firewall in front
of the nodes.
Reads and writes happen with LOCAL_QUORUM, so those don't get affected, but
hints accumulate from one DC to the other for replication. Hints are
also getting
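(Not from the original thread, but a couple of ways we'd usually look at the
hint backlog on a 3.x node; the hints path below is the stock default, adjust
for your install:

    nodetool statushandoff            # is hinted handoff currently running?
    nodetool tpstats                  # pending/blocked counts for the hint pools
    du -sh /var/lib/cassandra/hints   # on-disk size of stored hints

nodetool truncatehints will drop the accumulated hints, but the remote
replicas then need a repair to catch up.)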
I've heard enough stories of firewall issues that I'm willing to bet it's
the problem, if it's sitting between the nodes.
On Sun, Jan 15, 2017 at 9:32 AM Anshu Vajpayee wrote:
> Setup is not on cloud. We have few nodes in one DC(1) and same number
> of nodes in other DC(2). We have dedicated firewall in-front on nodes.
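(Again not from the thread itself, but an easy way to sanity-check the
firewall theory; 7000/7001 are the default storage_port/ssl_storage_port,
adjust if yours differ:

    # from a DC1 node to a DC2 node
    nc -vz <dc2_node_ip> 7000   # inter-node traffic, non-SSL
    nc -vz <dc2_node_ip> 7001   # inter-node traffic, SSL if enabled

It's also worth checking whether the firewall silently drops idle TCP
connections; lowering net.ipv4.tcp_keepalive_time on the nodes is a common
workaround for that.)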
Hi Anubhav,
This happened to us as well, on all nodes in the DC. We found that after
running removenode, all the other nodes suddenly started doing a lot of
compactions, which increased CPU.
To mitigate that, we used nodetool disableautocompaction before removing
the node. Then, after removal, we slowly re-enabled autocompaction.
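Roughly the sequence described above, as commands (the host ID and keyspace
name are placeholders; get the host ID from nodetool status):

    nodetool disableautocompaction      # on the remaining nodes, before removal
    nodetool removenode <host_id>       # run from any live node
    # once removal completes, bring compaction back gradually, e.g. per keyspace
    nodetool enableautocompaction my_ks

nodetool setcompactionthroughput can also be used to cap the compaction I/O
while the backlog drains.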