OK so good news, I'm running with the patched jar file in my cluster and
haven't seen any issues. The bloom filter off-heap memory usage is between
1.5GB and 2GB per node, which is much more in line with what I'm expecting!
(thumbsup)
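(For anyone following along: a sketch of how per-table bloom filter memory can be inspected and tuned in Cassandra 3.x. `nodetool tablestats` reports a "Bloom filter off heap memory used" line per table, and the `bloom_filter_fp_chance` table option trades off-heap memory against extra disk reads. The keyspace/table name `abc.def` is taken from the schema quoted later in this thread; treat the exact value below as illustrative, not a recommendation.)

```sql
-- Inspect the current setting via the 3.x system schema:
SELECT bloom_filter_fp_chance
  FROM system_schema.tables
 WHERE keyspace_name = 'abc' AND table_name = 'def';

-- A higher false-positive chance means smaller bloom filters
-- (less off-heap memory) at the cost of more wasted disk seeks:
ALTER TABLE abc.def WITH bloom_filter_fp_chance = 0.1;
```

The new setting applies to newly written SSTables; existing filters are rebuilt as SSTables are rewritten (e.g. by compaction or `nodetool upgradesstables`).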
On Mon, Mar 14, 2016 at 9:42 AM, Ad
> affected nodes with the fixed jar.
>
> 2016-03-13 19:51 GMT-03:00 Adam Plumb :
>
>> So it's looking like the bloom filter off-heap memory usage is ramping up
>> and up until the OOM killer kills the java process. I relaunched on
>> instances with 60GB of memory a
node will start using more and more until it is also killed.
Is this the expected behavior? It doesn't seem ideal to me. Is there
anything obvious that I'm doing wrong?
On Fri, Mar 11, 2016 at 11:31 AM, Adam Plumb wrote:
> Here is the creation syntax for the entire schema. The xyz
> {'class': 'LZ4Compressor'};
> CREATE INDEX secondary_id_index_def ON abc.def (secondary_id);
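(The quoted schema above is truncated in the archive. Purely for illustration, a table of this general shape could look like the sketch below; the keyspace/table `abc.def`, the `secondary_id` column, the index name, and the `LZ4Compressor` option come from the thread, while every other column name is hypothetical.)

```sql
-- Illustrative only: the real schema is cut off above.
-- All columns other than secondary_id are hypothetical.
CREATE TABLE abc.def (
    id text PRIMARY KEY,
    secondary_id text
) WITH compression = {'class': 'LZ4Compressor'};

CREATE INDEX secondary_id_index_def ON abc.def (secondary_id);
```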
On Fri, Mar 11, 2016 at 11:24 AM, Jack Krupansky wrote:
> What is your schema and data like - in particular, how wide are your
> partitions (number of rows and typical row size)?
> is box?
>
> If so, does it have a memory leak?
>
> all the best,
>
> Sebastián
> On Mar 11, 2016 11:14 AM, "Adam Plumb" wrote:
>
>> I've got a new cluster of 18 nodes running Cassandra 3.4 that I just
>> launched and loaded data into yesterday
I've got a new cluster of 18 nodes running Cassandra 3.4 that I just
launched and loaded data into yesterday (roughly 2TB of total storage) and
am seeing runaway memory usage. These nodes are EC2 c3.4xlarges with 30GB
RAM and the heap size is set to 8G with a new heap size of 1.6G.
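(For reference, in Cassandra 3.x the heap figures mentioned above are typically set in `conf/jvm.options`, or via `MAX_HEAP_SIZE`/`HEAP_NEWSIZE` in `cassandra-env.sh`. A config sketch matching the 8G heap and 1.6G new-gen figures from this message is below. Note these flags only bound on-heap memory; the bloom filters discussed in this thread live off-heap and are not limited by `-Xmx`.)

```
# conf/jvm.options (Cassandra 3.x) -- matches the figures above
-Xms8G
-Xmx8G
-Xmn1600M
```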
Last night I f