Sure. I was testing under high traffic, about 6K-7K req/sec reads and
writes combined. I added a node and ran repair; by that point the traffic
had been stopped and the heap was 8 G. I saw a lot of flushing and GC
activity, and the node finally died with an out-of-memory error. So I gave
it more memory, 12 G, and restarted the nodes. That sped up the compactions
and validations for around 12 hours, but now I am back to the flushing and
high GC activity, and at this point there has been no traffic for more than
24 hours.
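For what it's worth, here is a quick way to confirm whether it is GC churn
or flush backlog (the log path, and putting the flags in cassandra-env.sh,
are assumptions about the setup):

    # add to JVM_OPTS in cassandra-env.sh to get a GC timeline
    -Xloggc:/var/log/cassandra/gc.log
    -XX:+PrintGCDetails
    -XX:+PrintGCDateStamps

    # backed-up FlushWriter / MemtablePostFlusher stages mean memtable
    # flushes, not GC, are the bottleneck
    nodetool tpstats

Repeated full GCs in the log that reclaim almost nothing usually mean the
live set (index samples, bloom filters, caches) simply no longer fits in
the heap.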

Again, thanks for the help !


On Wed, Oct 2, 2013 at 10:19 AM, cem <cayiro...@gmail.com> wrote:

> I think 512 is fine. Could you tell us more about your traffic
> characteristics?
>
> Cem
>
>
> On Wed, Oct 2, 2013 at 4:32 PM, srmore <comom...@gmail.com> wrote:
>
>> I changed index_interval from 128 to 512. Does it make sense to increase
>> it beyond that?
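>> For reference, the knob lives in cassandra.yaml; the comment is my
>> summary of the trade-off, not from the docs:
>>
>>     # Sample every Nth primary-index entry into memory. Larger values
>>     # mean less heap spent on index samples but more disk scanning
>>     # per read. Default is 128.
>>     index_interval: 512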
>>
>>
>> On Wed, Oct 2, 2013 at 9:30 AM, cem <cayiro...@gmail.com> wrote:
>>
>>> Have a look at index_interval.
>>>
>>> Cem.
>>>
>>>
>>> On Wed, Oct 2, 2013 at 2:25 PM, srmore <comom...@gmail.com> wrote:
>>>
>>>> The version of Cassandra I am using is 1.0.11, though we are migrating
>>>> to 1.2.x. We had already tuned the bloom filters (0.1), and AFAIK going
>>>> lower than that won't matter.
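>>>> (For anyone following along, that is set per column family from
>>>> cassandra-cli, something like the line below; "MyCF" is a placeholder
>>>> and the exact syntax is from memory:
>>>>
>>>>     update column family MyCF with bloom_filter_fp_chance = 0.1;
>>>>
>>>> A higher false-positive chance shrinks the filters, at the cost of
>>>> extra disk reads for keys that are not in an sstable.)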
>>>>
>>>> Thanks !
>>>>
>>>>
>>>> On Tue, Oct 1, 2013 at 11:54 PM, Mohit Anchlia
>>>> <mohitanch...@gmail.com> wrote:
>>>>
>>>>> Which Cassandra version are you on? Essentially, heap size is a
>>>>> function of the number of keys and the per-key metadata. In
>>>>> Cassandra 1.2 a lot of that metadata, such as bloom filters, was
>>>>> moved off heap.
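>>>>> As a back-of-envelope estimate (my arithmetic, not a measurement): a
>>>>> bloom filter sized for a 0.1 false-positive chance costs roughly
>>>>> 5 bits per key, so 1 billion keys is on the order of 600 MB of
>>>>> filter data alone, and on 1.0.x all of it sits on the Java heap next
>>>>> to the index samples and caches. That is a big part of why heap
>>>>> needs grow with data size on 1.0.x and flatten out on 1.2.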
>>>>>
>>>>>
>>>>> On Tue, Oct 1, 2013 at 9:34 PM, srmore <comom...@gmail.com> wrote:
>>>>>
>>>>>> Does anyone know roughly what the heap size should be for Cassandra
>>>>>> with 1 TB of data? We started with about 200 G, and on one of the
>>>>>> nodes we are already at 1 TB. We were using an 8 G heap, and that
>>>>>> served us well until we reached 700 G, where we started seeing
>>>>>> failures and nodes flipping.
>>>>>>
>>>>>> With 1 TB of data the node refuses to come back up due to lack of
>>>>>> memory; needless to say, repairs and compactions take a lot of time.
>>>>>> We upped the heap from 8 G to 12 G and suddenly everything started
>>>>>> moving rapidly, i.e. the repair and compaction tasks. But soon (in
>>>>>> about 9-10 hrs) we started seeing the same symptoms we had seen with
>>>>>> 8 G.
>>>>>>
>>>>>> So my question is: how do I determine the optimal heap size for
>>>>>> around 1 TB of data?
>>>>>>
>>>>>> Following are some of my JVM settings
>>>>>>
>>>>>> -Xms8G
>>>>>> -Xmx8G
>>>>>> -Xmn800m
>>>>>> -XX:NewSize=1200M
>>>>>> -XX:MaxTenuringThreshold=2
>>>>>> -XX:SurvivorRatio=4
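>>>>>> (Note that -Xmn and -XX:NewSize both size the young generation, so
>>>>>> one of those two lines overrides the other.) For the max heap, the
>>>>>> stock cassandra-env.sh heuristic is roughly this (my paraphrase of
>>>>>> the script, not the exact shell):
>>>>>>
>>>>>>     max_heap = max( min(ram/2, 1G), min(ram/4, 8G) )
>>>>>>     new_size = min(100M * cores, max_heap/4)
>>>>>>
>>>>>> i.e. it deliberately tops out at 8 G; going past that usually means
>>>>>> GC tuning rather than just a bigger -Xmx.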
>>>>>>
>>>>>> Thanks !
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
