I have two data centers, each with the same number of nodes, the same
hardware (CPUs, memory), the same Cassandra version (2.1.6), the same
replication factor, etc. The only difference is that one data center uses
vnodes and the other doesn't.
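For concreteness, the only configuration difference between the two DCs is
the token setup in cassandra.yaml; the values below are illustrative (256
is the stock default when vnodes are enabled, and the actual token values
are per-node):

    # vnode DC
    num_tokens: 256

    # non-vnode DC: a single, manually assigned token per node
    initial_token: <token for this node>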

The non-vnode DC works fine under production load, and has for a long
time: CPU load, I/O load, and garbage collection figures all look normal.
But the vnode DC, which was set up recently, is struggling badly under the
same load. Its CPU load is very high due to excessive garbage collection
(more than 50% of its time is spent collecting).
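For reference, that figure comes from the JVM's garbage collector
statistics. A minimal sketch of checking it over JMX looks roughly like
this (7199 is Cassandra's default JMX port; the hostname is a placeholder,
and this assumes JMX authentication is disabled):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.RuntimeMXBean;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class GcTimeFraction {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://cassandra-host:7199/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();

                // Sum accumulated collection time across all collectors
                // (e.g. ParNew + CMS on a stock 2.1 install).
                long gcMillis = 0;
                for (ObjectName name : mbsc.queryNames(
                        new ObjectName("java.lang:type=GarbageCollector,*"), null)) {
                    GarbageCollectorMXBean gc = ManagementFactory.newPlatformMXBeanProxy(
                            mbsc, name.getCanonicalName(), GarbageCollectorMXBean.class);
                    gcMillis += gc.getCollectionTime();
                }

                RuntimeMXBean runtime = ManagementFactory.newPlatformMXBeanProxy(
                        mbsc, ManagementFactory.RUNTIME_MXBEAN_NAME, RuntimeMXBean.class);
                long uptimeMillis = runtime.getUptime();

                // Note: these counters are cumulative since JVM start, so
                // sample twice and diff the values to get a current rate.
                System.out.printf("GC: %d ms of %d ms uptime (%.1f%%)%n",
                        gcMillis, uptimeMillis, 100.0 * gcMillis / uptimeMillis);
            }
        }
    }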

So it seems that Cassandra simply doesn't have enough memory, and I'm
trying to understand whether the use of vnodes could be the cause. Is
there a sensible reason why vnodes would consume more memory than regular
nodes? Have any of you had the same experience? If not, I may be barking
up the wrong tree here, and I'd love to know that before upgrading my
servers with more memory.

Thanks,
Tom
