Total: 100 GB of data per node.
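
(Rough arithmetic on why the SSTable size matters here, assuming the data
is spread evenly across level-compacted SSTables:

    100 GB / 10 MB per SSTable ~= 10,000 SSTables per node
    100 GB / 32 MB per SSTable ~=  3,200 SSTables per node

Each open SSTableReader carries its own bloom filter and index summary,
so the reader count largely drives that memory overhead.)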

On Fri, Jun 28, 2013 at 2:14 PM, sulong <sulong1...@gmail.com> wrote:

> Aaron, thanks for your reply. Yes, I do use the Leveled compaction
> strategy, and the SSTable size is 10 MB. If it happens again, I will try
> to enlarge the SSTable size.
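>
> (A minimal sketch of that change via cqlsh, assuming CQL3; the keyspace
> and table names are placeholders:
>
>     cqlsh> ALTER TABLE my_keyspace.my_table WITH compaction =
>        ... {'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 32};
>
> Existing SSTables are only rewritten at the new size as they are
> compacted again.)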
>
> I just wonder why Cassandra doesn't limit the SSTableReaders' total
> memory usage when compacting. Lots of memory is consumed by the
> SSTableReaders' caches. Why not clear these caches first at the
> beginning of compaction?
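>
> (For what it's worth, a large share of that per-reader memory is
> typically bloom-filter and index-summary data rather than a cache that
> can simply be dropped. A hedged aside: if bloom filters dominate, they
> can be shrunk at the cost of extra disk seeks on reads; the table name
> is a placeholder:
>
>     cqlsh> ALTER TABLE my_keyspace.my_table WITH bloom_filter_fp_chance = 0.1;
>
> New filters only take effect as compaction rewrites each SSTable.)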
>
>
> On Fri, Jun 28, 2013 at 1:14 PM, aaron morton <aa...@thelastpickle.com> wrote:
>
>> Are you running the Leveled compaction strategy?
>> If so, what is the max SSTable size, and what is the total data per node?
>>
>> If you are running it, try using a larger SSTable size, like 32 MB.
>>
>> Cheers
>>
>>    -----------------
>> Aaron Morton
>> Freelance Cassandra Consultant
>> New Zealand
>>
>> @aaronmorton
>> http://www.thelastpickle.com
>>
>> On 27/06/2013, at 2:02 PM, sulong <sulong1...@gmail.com> wrote:
>>
>> According to the OpsCenter records, yes, the compaction was running
>> then, at 8.5 MB/s.
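>>
>> (As an aside, that rate can also be capped from a shell if compaction
>> I/O competes with reads; a sketch, with the host a placeholder:
>>
>>     $ nodetool -h 127.0.0.1 setcompactionthroughput 8
>>
>> This throttles total compaction throughput to roughly 8 MB/s; setting
>> it to 0 removes the throttle.)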
>>
>>
>> On Thu, Jun 27, 2013 at 9:54 AM, sulong <sulong1...@gmail.com> wrote:
>>
>>> version: 1.2.2
>>> cluster read requests: 800/s, write requests: 22/s
>>> Sorry, I don't know whether the compaction was running then.
>>>
>>>
>>> On Thu, Jun 27, 2013 at 1:02 AM, Robert Coli <rc...@eventbrite.com> wrote:
>>>
>>>> On Tue, Jun 25, 2013 at 10:13 PM, sulong <sulong1...@gmail.com> wrote:
>>>> > I have a 4-node Cassandra cluster. Every node has 32 GB of memory,
>>>> > and the Cassandra JVM uses 8 GB. The cluster is suffering from GC.
>>>> > It looks like the CompactionExecutor thread holds too many
>>>> > SSTableReaders. See the attachment.
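>>>>
>>>> (For context, the heap sizing described above typically lives in
>>>> conf/cassandra-env.sh; a sketch matching those numbers, where
>>>> HEAP_NEWSIZE is an assumed value:
>>>>
>>>>     MAX_HEAP_SIZE="8G"
>>>>     HEAP_NEWSIZE="800M"    # commonly sized at ~100 MB per CPU core
>>>>
>>>> With 32 GB of RAM, the rest is left to the OS page cache.)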
>>>>
>>>> What version of Cassandra?
>>>> What workload?
>>>> Is compaction actually running?
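>>>>
>>>> (One way to answer the last question from a shell on the node; the
>>>> host is a placeholder:
>>>>
>>>>     $ nodetool -h 127.0.0.1 compactionstats
>>>>
>>>> It prints pending compaction tasks and any compactions in progress.)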
>>>>
>>>> =Rob
>>>>
>>>
>>>
>>
>>
>
