Hello Rahul,

I would ask Hossein to correct me if I am wrong. Below is how it works:

How does an application/database read something from the disk?
A read request comes in ----> the application code internally invokes
system calls ----> these kernel-level system calls schedule a job with the
I/O scheduler ----> the data is then read and returned by the device
drivers ----> the data fetched from disk is accumulated in a memory
location (a file buffer) until the entire read operation is complete ---->
then, I guess, the data is uncompressed ----> processed inside the JVM as
Java objects ----> handed over to the application logic to transmit over
the network interface.
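To make the flow above a bit more concrete, here is a minimal Java sketch
of the "read a chunk into a file buffer" step. It is only an illustration,
not Cassandra's actual read path; the chunk size and the class/method names
are assumptions of mine.

    // Illustrative sketch only -- not Cassandra's actual read path. The 64 KB
    // chunk size and the class/method names are assumptions for this example.
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class ChunkReadSketch {
        static final int CHUNK_SIZE = 64 * 1024; // hypothetical chunk size

        // Read one chunk at the given offset into an in-memory "file buffer".
        static ByteBuffer readChunk(Path file, long offset) throws IOException {
            try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
                ByteBuffer buffer = ByteBuffer.allocate(CHUNK_SIZE);
                ch.read(buffer, offset);  // kernel + device driver do the disk I/O
                buffer.flip();
                return buffer; // caller would decompress and turn bytes into objects
            }
        }
    }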

This is my understanding of file_cache_size_in_mb: it basically caps how
much memory Cassandra uses to cache chunks of sstable data read from disk.
The alert you are getting is an INFO-level log message.
I would recommend first trying to understand why this cache is filling up
so fast. Increasing the cache size is a solution, but as I remember,
increasing it has some impact (the extra cache memory has to come from
somewhere). I faced a similar issue and increased the cache size;
eventually the increased size started falling short as well.
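For reference, if you do decide to raise it, the setting lives in
cassandra.yaml. A hedged example is below; the value 1024 is purely
illustrative, not a recommendation, and the parameter name is as in
Cassandra 3.x, so check your version.

    # cassandra.yaml -- illustrative value only, not a recommendation
    # Caps memory used to cache sstable chunks read from disk (default 512 MB)
    file_cache_size_in_mb: 1024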

You are asking the right question about how the cache gets recycled. If
you find an answer, do post it here. But that is something Cassandra
doesn't have much control over (that is my understanding).
Investigating your reads, to check whether a lot of data is being read to
satisfy only a few queries, might be another way to start troubleshooting.
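On the recycling/LRU question: I don't know Cassandra's exact eviction
policy, but if it is LRU-like as Rahul suspects, the general idea is the
one in this small Java sketch. It is an illustration of LRU eviction only,
not Cassandra's code; the class name and capacity are made up.

    // Minimal LRU illustration only -- not Cassandra's implementation. Once the
    // cache holds more than maxChunks entries, the least recently used one is
    // evicted to make room for the new chunk.
    import java.nio.ByteBuffer;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class LruChunkCache extends LinkedHashMap<Long, ByteBuffer> {
        private final int maxChunks;

        public LruChunkCache(int maxChunks) {
            super(16, 0.75f, true); // accessOrder = true gives LRU iteration order
            this.maxChunks = maxChunks;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<Long, ByteBuffer> eldest) {
            return size() > maxChunks; // true -> evict the least recently used entry
        }
    }

Usage would be something like new LruChunkCache(8192): every get() refreshes
an entry's recency, and inserting past the limit drops the coldest chunk.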

Thanks,
Rajsekhar

On Mon, 2 Dec, 2019, 8:18 PM Rahul Reddy, <rahulreddy1...@gmail.com> wrote:

> Thanks Hossein,
>
> How are the chunks moved out of memory (LRU?) when it wants to make room
> for new requests to get chunks? If it has a mechanism to clear chunks from
> the cache, what causes the "cannot allocate chunk" message? Can you point
> me to any documentation?
>
> On Sun, Dec 1, 2019, 12:03 PM Hossein Ghiyasi Mehr <ghiyasim...@gmail.com>
> wrote:
>
>> Chunks are part of sstables. When there is enough space in memory to
>> cache them, read performance will increase if the application requests
>> them again.
>>
>> Your real answer is application dependent. For example, write-heavy
>> applications are different from read-heavy or read-write-heavy ones.
>> Real-time applications are different from time-series data environments,
>> and so on.
>>
>>
>>
>> On Sun, Dec 1, 2019 at 7:09 PM Rahul Reddy <rahulreddy1...@gmail.com>
>> wrote:
>>
>>> Hello,
>>>
>>> We are seeing "memory usage reached 512 MB and cannot allocate 1 MB". I
>>> see this because file_cache_size_mb is set to 512 MB by default.
>>>
>>> The DataStax documentation recommends increasing the file_cache_size.
>>>
>>> We have 32 GB of overall memory and have allocated 16 GB to Cassandra.
>>> What is the recommended value in my case? Also, this memory gets filled
>>> up frequently; does nodetool flush help in avoiding these INFO messages?
>>>
>>
