Sorry, I didn't mean to hijack the thread. I have seen similar issues and
always ignored them because they weren't really causing any problems, but
I am really curious how to track these down.
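For anyone hitting the same message, the two cassandra.yaml knobs Martin mentions below look like this. The values here are illustrative, not a recommendation; check how much heap/off-heap headroom you actually have first. The log line itself is generally informational: it means the off-heap chunk cache is full and a new chunk could not be allocated from it.

```yaml
# cassandra.yaml -- chunk cache sizing (example values only)

# Off-heap memory used to cache SSTable chunks. The "Maximum memory
# usage reached (512.000MiB)" message means this pool is full; raising
# it only helps if you have spare memory on the node.
file_cache_size_in_mb: 1024

# If the buffer pool is exhausted, fall back to allocating on-heap
# instead of failing the allocation.
buffer_pool_use_heap_if_exhausted: true
```

To find which tables have large partitions, `nodetool tablehistograms <keyspace> <table>` prints the partition size percentiles Martin refers to below.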

On Mon, Jun 11, 2018 at 9:45 AM, Nitan Kainth <nitankai...@gmail.com> wrote:

> Thanks, Martin.
>
> The 99th percentile partition size is even across all tables. The max is
> always higher in every table.
>
> The question is: how do I identify which table is triggering this "Maximum
> memory usage reached (512.000MiB)" message?
>
> On Mon, Jun 11, 2018 at 5:59 AM, Martin Mačura <m.mac...@gmail.com> wrote:
>
>> Hi,
>> we've had this issue with large partitions (100 MB and larger).  Use
>> nodetool tablehistograms to find partition sizes for each table.
>>
>> If you have enough heap space to spare, try increasing this parameter:
>> file_cache_size_in_mb: 512
>>
>> There's also the following parameter, but I have not tested its impact yet:
>> buffer_pool_use_heap_if_exhausted: true
>>
>>
>> Regards,
>>
>> Martin
>>
>>
>> On Tue, Jun 5, 2018 at 3:54 PM, learner dba
>> <cassandra...@yahoo.com.invalid> wrote:
>> > Hi,
>> >
>> > We see this message often, cluster has multiple keyspaces and column
>> > families;
>> > How do I know which CF is causing this?
>> > Or it could be something else?
>> > Do we need to worry about this message?
>> >
>> > INFO  [CounterMutationStage-1] 2018-06-05 13:36:35,983
>> > NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB),
>> > cannot allocate chunk of 1.000MiB
>> >
>> >
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>> For additional commands, e-mail: user-h...@cassandra.apache.org
>>
>>
>