This is a very antagonistic use case for Cassandra :P I assume you're
familiar with Cassandra and deletes? (e.g.
http://thelastpickle.com/blog/2016/07/27/about-deletes-and-tombstones.html,
http://docs.datastax.com/en/cassandra/2.1/cassandra/dml/dml_about_deletes_c.html)

That being said, are you giving enough time for your tables to flush to
disk? Deletes generate markers which can and will consume memory until they
have a chance to be flushed, after which they will impact query time and
performance (but should relieve memory pressure). If you're saturating the
capability of your nodes your tables will have difficulty flushing. See
http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_memtable_thruput_c.html
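To make the point concrete, here is a toy sketch in plain Python (NOT
Cassandra internals; the class and method names are purely illustrative):
each delete leaves a marker that stays resident in memory, so a sustained
delete loop with no flush grows the heap until a flush finally releases it.

```python
class ToyMemtable:
    """Toy model of delete markers (tombstones) accumulating in memory.

    This only illustrates the memory behaviour discussed above; real
    Cassandra tombstones are richer objects with timestamps and live in
    memtables/SSTables.
    """

    def __init__(self):
        self.tombstones = set()  # delete markers held on the heap

    def delete_rows(self, start, count):
        # Each delete adds a marker; nothing is freed at delete time.
        for key in range(start, start + count):
            self.tombstones.add(key)

    def memory_markers(self):
        return len(self.tombstones)

    def flush(self):
        # Flushing persists the markers and frees the in-memory copies.
        freed = len(self.tombstones)
        self.tombstones.clear()
        return freed


if __name__ == "__main__":
    mt = ToyMemtable()
    # Mirror the pattern from the thread: delete 500 rows per batch,
    # advancing the start range, until ~3.3M rows are deleted.
    start = 0
    while start < 3_300_000:
        mt.delete_rows(start, 500)
        start += 500

    print(mt.memory_markers())  # -> 3300000 markers still on the heap
    print(mt.flush())           # -> 3300000 freed by the flush
    print(mt.memory_markers())  # -> 0
```

The takeaway: if deletes arrive faster than the node can flush, the
marker count only ever goes up, which matches the old-gen exhaustion in
the GC logs below.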

This could also be a heap/memory configuration or GC tuning issue
(although that's unlikely if you've left those at default).
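If you do want to check those settings, in C* 2.1 heap sizing lives in
conf/cassandra-env.sh. The values below are just the commented-out
defaults shown as an example, not a recommendation; exact paths vary by
install:

```shell
# conf/cassandra-env.sh -- heap is auto-sized unless you override it.
# If you override, set both values together (example values only):
#MAX_HEAP_SIZE="4G"
#HEAP_NEWSIZE="800M"
```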

--Bryan


On Mon, Jul 3, 2017 at 7:51 AM, Karthick V <karthick...@zohocorp.com> wrote:

> Hi,
>
>       Recently, in my test cluster, I faced outrageous GC activity
> which made the node unreachable within the cluster itself.
>
> Scenario:
>       In a partition of 5 million rows, we read the first 500 (by giving
> the starting range) and then delete those same 500. The same is done
> recursively, changing only the start range. Initially I didn't see any
> difference in query performance (up to 50,000 rows), but later reads
> slowed significantly; at about 3.3 million, the read request failed and
> the node became unreachable. After analysing my GC logs it is clear that
> 99% of my old-gen space is occupied with no room left for allocation,
> which caused the machine to stall.
>       My doubt here is: will all 3.3 million deleted rows be loaded
> into on-heap memory? If not, what are the objects occupying that
> memory?
>
> PS: I am using C* 2.1.13 in the cluster.
>