Hi Joaquin,

We have inserts going into a tracking table. The tracking table is a simple table 
[PRIMARY KEY (comid, status_timestamp)] with a few tracking attributes, sorted by 
status_timestamp. From a volume perspective it is not a whole lot.
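
For reference, a minimal sketch of the kind of table I mean, created through the 
Java driver (the contact point, keyspace, table name, and the extra columns are 
placeholders for the example, not our real schema):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class CreateTrackingTable {
    public static void main(String[] args) {
        // "127.0.0.1" and the keyspace "tracking_ks" are placeholders; the
        // keyspace is assumed to already exist.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            // comid is the partition key; within each partition the rows are
            // kept sorted by status_timestamp, the clustering column.
            session.execute(
                "CREATE TABLE IF NOT EXISTS tracking_ks.status_tracking ("
              + "  comid            text,"
              + "  status_timestamp timestamp,"
              + "  status           text,"       // placeholder tracking attribute
              + "  updated_by       text,"       // placeholder tracking attribute
              + "  PRIMARY KEY (comid, status_timestamp)"
              + ") WITH CLUSTERING ORDER BY (status_timestamp ASC)");
        }
    }
}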


Thanks,
Shravan
________________________________
From: Joaquin Casares <joaq...@thelastpickle.com>
Sent: Friday, March 3, 2017 11:34:58 AM
To: user@cassandra.apache.org
Subject: Re: OOM on Apache Cassandra on 30 Plus node at the same time

Hello Shravan,

Typically, asynchronous requests are recommended over batch statements, since batch 
statements put more work on the coordinator node, while individual requests, when 
using a TokenAwarePolicy, are routed to a replica that owns the partition, which can 
perform a local disk seek and return the requested information.
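
As a rough illustration (the contact point, data center name, keyspace, table, and 
columns below are made up for the example, not taken from your schema), the same 
inserts issued as individual asynchronous prepared statements with a token-aware 
policy would look something like this with the Java driver:

import java.util.ArrayList;
import java.util.List;

import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;

public class AsyncInserts {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")                                  // placeholder contact point
                .withLoadBalancingPolicy(
                    new TokenAwarePolicy(new DCAwareRoundRobinPolicy("DC1")))  // "DC1" is a placeholder
                .build();
             Session session = cluster.connect("tracking_ks")) {               // placeholder keyspace

            PreparedStatement insert = session.prepare(
                "INSERT INTO status_tracking (comid, status_timestamp, status) VALUES (?, ?, ?)");

            // Fire each insert on its own; the token-aware policy routes every
            // statement to a node that owns that partition key.
            // (In real code you would also cap the number of in-flight requests.)
            List<ResultSetFuture> futures = new ArrayList<>();
            for (int i = 0; i < 100; i++) {
                BoundStatement bound = insert.bind("com-" + i, new java.util.Date(), "RECEIVED");
                futures.add(session.executeAsync(bound));
            }

            // Wait for all of the in-flight writes to complete.
            for (ResultSetFuture f : futures) {
                f.getUninterruptibly();
            }
        }
    }
}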

The only time batch statements are ideal is when writing to the same partition key, 
even if it's across multiple tables, since the partitioner (the hashing algorithm, 
like murmur3) maps the same partition key to the same token, and therefore the same 
replicas, regardless of table.
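
For comparison, a single-partition batch along those lines might look like the 
sketch below; the tables and names are again made up, and every statement shares 
the same partition key so the whole batch lands on the same replicas:

import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class SamePartitionBatch {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();  // placeholder contact point
             Session session = cluster.connect("tracking_ks")) {                        // placeholder keyspace

            PreparedStatement insertStatus = session.prepare(
                "INSERT INTO status_tracking (comid, status_timestamp, status) VALUES (?, ?, ?)");
            PreparedStatement insertAudit = session.prepare(
                "INSERT INTO status_audit (comid, status_timestamp, actor) VALUES (?, ?, ?)");

            // Every statement in the batch uses the same partition key ("com-42"),
            // so even though it spans two tables the writes go to the same replicas.
            // UNLOGGED skips the batchlog; a batch against a single partition is
            // applied as one mutation anyway.
            java.util.Date now = new java.util.Date();
            BatchStatement batch = new BatchStatement(BatchStatement.Type.UNLOGGED);
            batch.add(insertStatus.bind("com-42", now, "PROCESSED"));
            batch.add(insertAudit.bind("com-42", now, "batch-job"));
            session.execute(batch);
        }
    }
}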

Could you provide a bit of insight into what the batch statement was trying to 
accomplish and how many child statements were bundled up within that batch?

Cheers,

Joaquin

Joaquin Casares
Consultant
Austin, TX

Apache Cassandra Consulting
http://www.thelastpickle.com

On Fri, Mar 3, 2017 at 11:18 AM, Shravan Ch 
<chall...@outlook.com> wrote:
Hello,

More than 30 Cassandra servers in the primary DC went down with the OOM exception 
below. What puzzles me is the scale at which it happened (all within the same minute). 
I will share some more details below.

System Log: http://pastebin.com/iPeYrWVR
GC Log: http://pastebin.com/CzNNGs0r

During the OOM I saw a lot of WARNings like the one below (these had been there for 
quite some time, maybe weeks):
WARN  [SharedPool-Worker-81] 2017-03-01 19:55:41,209 BatchStatement.java:252 - 
Batch of prepared statements for [keyspace.table] is of size 225455, exceeding 
specified threshold of 65536 by 159919.

Environment:
We are using Apache Cassandra 2.1.9 on a multi-DC cluster: the primary DC (more C* 
nodes, on SSDs, and the apps run here) and a secondary DC (geographically remote, 
more like a DR for the primary) on SAS drives.
Cassandra config:

Java 1.8.0_65
Garbage Collector: G1GC
memtable_allocation_type: offheap_objects

Since this OOM I have been seeing a huge hints pile-up on the majority of the nodes, 
and the pending hints keep going up. I increased the hinted handoff core threads to 6 
but that did not help (I admit I only tried this on one node).

nodetool compactionstats -H
pending tasks: 3
   compaction type   keyspace   table   completed      total    unit   progress
        Compaction     system   hints     28.5 GB   92.38 GB   bytes     30.85%


Appreciate your inputs here.

Thanks,
Shravan
