To the poster, I am sorry to have taken this off topic. Looking forward to
your reply regarding your default heap size, frequency of hard garbage
collection, etc. In any case, I am not convinced that heap size/garbage
collection is a root cause of your issue, but it has so frequently been a
problem that I tend to ask that question early on.
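
In case it helps when you check, here is a minimal sketch (mine, not from
this thread) of reading the effective heap size of a running node over JMX.
It assumes JMX is reachable on Cassandra's default port 7199 with
authentication disabled, and the class name HeapCheck is just a placeholder:

import java.lang.management.MemoryUsage;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Prints the max and currently used heap of a running Cassandra node.
// Assumes JMX on the default port 7199 with no authentication.
public class HeapCheck {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "localhost";
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            CompositeData heap = (CompositeData) mbs.getAttribute(
                    new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");
            MemoryUsage usage = MemoryUsage.from(heap);
            System.out.printf("max heap: %d MB, used: %d MB%n",
                    usage.getMax() >> 20, usage.getUsed() >> 20);
        }
    }
}

Run it with the node's hostname as the only argument. If memory serves, with
no explicit -Xmx the startup script derives the heap from system RAM, so
"default" can mean quite different numbers on different boxes.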

Jon, thank you for pointing out, to those who are 100% convinced that large
heaps are an anti-pattern, that this is not necessarily so ... I am well
aware of that interesting thread, and find it gives clear guidance that in
most cases large heaps are an anti-pattern ... except in fairly rare use
cases, only after extensive analysis, and several iterations of tuning. FYI,
I have (in both Hadoop and Cassandra) created specialized clusters with
carefully monitored row sizes and schemas to leverage large heaps for
read-mostly workloads.

My experience may be a corner case, as I tend to work with clusters that
have been up for a while and have grown sideways from the original
expectations.

The analysis there is clear that, under certain specific conditions and with
extensive tuning, it is possible to run with very large heaps. Thanks for
pointing it out: there is a LOT of information in that thread that can help
us deal with the corner cases where it IS possible to run larger heaps
productively, as well as with the anti-patterns it implies.
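
Along the same lines, a rough way to tell whether a given heap (large or
small) is actually behaving is to compare GC counts and accumulated pause
time against uptime. A small sketch over the same kind of JMX connection as
above (again assuming the default port 7199 and no auth; GcCheck is a
made-up name):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.RuntimeMXBean;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Prints per-collector counts, accumulated pause time, and overall GC
// overhead as a share of JVM uptime for a running Cassandra node.
public class GcCheck {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "localhost";
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            long uptimeMs = ManagementFactory
                    .getPlatformMXBean(mbs, RuntimeMXBean.class).getUptime();
            long totalGcMs = 0;
            for (GarbageCollectorMXBean gc : ManagementFactory
                    .getPlatformMXBeans(mbs, GarbageCollectorMXBean.class)) {
                System.out.printf("%s: %d collections, %d ms total%n",
                        gc.getName(), gc.getCollectionCount(),
                        gc.getCollectionTime());
                totalGcMs += gc.getCollectionTime();
            }
            System.out.printf("GC overhead: %.2f%% of %d s uptime%n",
                    100.0 * totalGcMs / uptimeMs, uptimeMs / 1000);
        }
    }
}

As a rule of thumb, if the old-generation collector is running frequently or
GC overhead climbs past a few percent, heap size and collector settings
deserve attention before anything else.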

On Thu, Apr 2, 2015 at 10:16 AM, Jonathan Haddad <j...@jonhaddad.com> wrote:

> @Daemeon you may want to read through
> https://issues.apache.org/jira/browse/CASSANDRA-8150, there are perfectly
> valid cases for heap > 16gb.
>
> On Thu, Apr 2, 2015 at 10:07 AM daemeon reiydelle <daeme...@gmail.com>
> wrote:
>
>> May not be relevant, but what is the "default" heap size you have
>> deployed? It should be no more than 16gb (and be aware of the impact of
>> gc on a heap that large); I suggest not smaller than 8-12gb.
>>
>>
>>
>> On Wed, Apr 1, 2015 at 11:28 AM, Anuj Wadehra <anujw_2...@yahoo.co.in>
>> wrote:
>>
>>> Are you writing to multiple column families at the same time?
>>> Please run nodetool tpstats to make sure that FlushWriter etc. doesn't
>>> have high "All time blocked" counts. A blocked memtable FlushWriter may
>>> block/drop writes. If that's the case you may need to increase the
>>> memtable flush writers. If you have many secondary indexes on a column
>>> family, make sure that the memtable flush queue size is set at least
>>> equal to the number of indexes.
>>>
>>> Monitoring iostat and GC logs may help.
>>>
>>> Thanks
>>> Anuj Wadehra
>>> ------------------------------
>>> From: "Amlan Roy" <amlan....@cleartrip.com>
>>> Date: Wed, 1 Apr, 2015 at 9:27 pm
>>> Subject: Re: Frequent timeout issues
>>>
>>> Did not see any exception in cassandra.log and system.log. Monitored
>>> using JConsole. Did not see anything wrong. Do I need to see any specific
>>> info? Doing almost 1000 writes/sec.
>>>
>>> HBase and Cassandra are running on different clusters. For Cassandra I
>>> have 6 nodes with 64GB RAM (heap is at the default setting) and 32 cores.
>>>
>>> On 01-Apr-2015, at 8:43 pm, Eric R Medley <emed...@xylocore.com> wrote:
>>>
>>>
>>
