Did you upgrade your existing SSTables after lowering the value? The new
setting only applies to SSTables written after the change, so the old ones
have to be rebuilt before you see any benefit.
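If not, something like the following should rewrite them so they pick up
the new setting (a sketch; the keyspace and column family names are
placeholders):

    nodetool -h localhost upgradesstables MyKeyspace MyColumnFamily

On versions where upgradesstables isn't available, nodetool scrub has a
similar rewrite effect.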

BTW: if you have tried all the other avenues, then my suggestion is to
increase your heap to 12 GB and ParNew (the young generation) to 3 GB,
and test it out.
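In conf/cassandra-env.sh that would look roughly like this (a sketch;
adjust the values to your hardware, and note the script expects the two
to be set or unset together):

    MAX_HEAP_SIZE="12G"
    HEAP_NEWSIZE="3G"

cassandra-env.sh turns these into -Xms/-Xmx and -Xmn, overriding its
automatic heap calculation.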

On Wed, Oct 2, 2013 at 5:25 AM, srmore <comom...@gmail.com> wrote:

> We are on Cassandra 1.0.11, though we are migrating to 1.2.x. We have
> already tuned the bloom filters (bloom_filter_fp_chance = 0.1) and AFAIK
> making it any lower won't matter.
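> (For reference, we changed it per column family through cassandra-cli,
> roughly like this, with MyCF as a placeholder name:
>
>     update column family MyCF with bloom_filter_fp_chance = 0.1;
>
> and then rebuilt the SSTables so existing data picked up the change.)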
>
> Thanks!
>
>
> On Tue, Oct 1, 2013 at 11:54 PM, Mohit Anchlia <mohitanch...@gmail.com> wrote:
>
>> Which Cassandra version are you on? Essentially heap size is a function of
>> the number of keys and their metadata. In Cassandra 1.2 a lot of that
>> metadata, such as bloom filters, was moved off heap.
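>> To see how much memory the bloom filters are actually taking per column
>> family, something like:
>>
>>     nodetool -h localhost cfstats
>>
>> should show a "Bloom Filter Space Used" line for each column family.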
>>
>>
>> On Tue, Oct 1, 2013 at 9:34 PM, srmore <comom...@gmail.com> wrote:
>>
>>> Does anyone know roughly what the heap size should be for Cassandra with
>>> 1 TB of data? We started with about 200 GB, and one of the nodes is now at
>>> 1 TB. We were using an 8 GB heap, and that served us well until we reached
>>> 700 GB, where we started seeing failures and nodes flapping.
>>>
>>> With 1 TB of data the node refuses to come back up due to lack of memory,
>>> and needless to say repairs and compactions take a long time. We upped the
>>> heap from 8 GB to 12 GB and suddenly everything, i.e. the repair and
>>> compaction tasks, started moving rapidly. But soon (in about 9-10 hours)
>>> we started seeing the same symptoms as with 8 GB.
>>>
>>> So my question is: how do I determine the optimal heap size for around
>>> 1 TB of data?
>>>
>>> Following are some of my JVM settings:
>>>
>>> -Xms8G
>>> -Xmx8G
>>> -Xmn800m
>>> -XX:NewSize=1200M
>>> -XX:MaxTenuringThreshold=2
>>> -XX:SurvivorRatio=4
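>>>
>>> (On the flags above: -Xms/-Xmx set the initial/maximum heap and -Xmn the
>>> young generation size. Note that -Xmn pins both NewSize and MaxNewSize,
>>> so it overlaps with the explicit -XX:NewSize above; the two disagree and
>>> one of them should probably go.)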
>>>
>>> Thanks!
>>>
>>
>>
>