From: Shishir Kumar
Reply-To: "user@cassandra.apache.org"
Date: Wednesday, December 4, 2019 at 8:04 AM
To: "user@cassandra.apache.org"
Subject: Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"
Correct.
>>> One thing you’ll find out pretty quickly. There are a lot of knobs you
>>> can turn with C*, too many to allow for easy answers on what you should
>>> do. Figure out what your throughput and latency SLAs are, and you’ll know
>>> when to stop tuning. Otherwise you’ll discover that it’s a rabbit hole you
>>> can dive into and not come out of for weeks.
makes it hard to
control the variables enough for my satisfaction. It can feel like a game of
empirical whack-a-mole.
From: Shishir Kumar
Reply-To: "user@cassandra.apache.org"
Date: Tuesday, December 3, 2019 at 9:23 AM
To: "user@cassandra.apache.org"
Subject: Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"
From: Hossein Ghiyasi Mehr
Reply-To: "user@cassandra.apache.org"
Date: Monday, December 2, 2019 at 10:35 AM
To: "user@cassandra.apache.org"
Subject: Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"
It may be helpful:
https://thelastpickle.com/blog/2018/08/08/compression_performance.html
It's complex. A simple explanation: Cassandra keeps sstable data in memory in
chunks, based on the chunk size and the sstable parts, and it manages loading
new sstable chunks into memory based on the requests hitting different
sstables. You shou
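As a rough illustration of the chunk-based reads described above (a simplified sketch, not Cassandra's actual implementation; the 64 KB chunk size is an assumption matching a common `chunk_length_in_kb` setting):

```python
# Illustrative sketch: reads are served in fixed-size chunks; a chunk already
# in the cache is returned without going back to the underlying file layer.
CHUNK_SIZE = 64 * 1024  # assumed 64 KB chunk, per chunk_length_in_kb

class ChunkReader:
    def __init__(self, read_chunk):
        self.read_chunk = read_chunk      # callable: chunk_index -> bytes
        self.cache = {}                   # chunk_index -> cached chunk bytes

    def read(self, offset, length):
        out = bytearray()
        while length > 0:
            idx = offset // CHUNK_SIZE
            if idx not in self.cache:
                self.cache[idx] = self.read_chunk(idx)  # cache miss: load chunk
            chunk = self.cache[idx]
            start = offset % CHUNK_SIZE
            take = min(length, CHUNK_SIZE - start)
            out += chunk[start:start + take]
            offset += take
            length -= take
        return bytes(out)
```

A request for a byte range that straddles a chunk boundary simply pulls (and caches) both chunks, which is why repeated reads of hot sstable regions get cheaper.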
Hello Rahul,
I would request Hossein to correct me if I am wrong. Below is how it works.
How will an application/database read something from the disk?
A request comes in for a read -> the application code internally ends up
invoking system calls -> these kernel-level system calls will sche
Thanks Hossein,
How are the chunks moved out of memory (LRU?) when it wants to make room
for new requests' chunks? If it has a mechanism to clear chunks from the
cache, what causes "cannot allocate chunk"? Can you point me to any
documentation?
On Sun, Dec 1, 2019, 12:03 PM Hossein Ghiyasi Mehr wrote:
Chunks are part of sstables. When there is enough space in memory to cache
them, read performance will increase if the application requests them again.
The real answer is application dependent. For example, write-heavy
applications are different from read-heavy or read-write-heavy ones. Real-time
application
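On the eviction question raised above: Cassandra's own buffer management differs across versions, but a generic least-recently-used chunk cache of the kind being discussed can be sketched like this (illustrative only; `LRUChunkCache` is a made-up name, not a Cassandra class):

```python
from collections import OrderedDict

# Generic LRU cache sketch (not Cassandra's actual eviction code): when the
# configured capacity is reached, the least recently used chunk is dropped
# to make room for the newly requested one.
class LRUChunkCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.chunks = OrderedDict()   # chunk_id -> bytes, oldest first

    def get(self, chunk_id):
        if chunk_id in self.chunks:
            self.chunks.move_to_end(chunk_id)   # mark as recently used
            return self.chunks[chunk_id]
        return None                             # cache miss

    def put(self, chunk_id, data):
        if chunk_id in self.chunks:
            self.chunks.move_to_end(chunk_id)
        self.chunks[chunk_id] = data
        if len(self.chunks) > self.capacity:
            self.chunks.popitem(last=False)     # evict least recently used
```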
Date: Wednesday, March 6, 2019 at 22:19
To: "user@cassandra.apache.org"
Subject: Re: Maximum memory usage reached
Also, that particular logger is for the internal chunk / page cache. If it
can’t allocate from within that pool, it’ll just use a normal bytebuffer.
It’s not really a problem, but if you see performance suffer, upgrade to latest
3.11.4, there was a bit of a perf improvement in the case where th
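The pool-then-fallback behavior described here can be sketched as follows (an assumed structure for illustration; `BoundedPool` is not Cassandra's actual `BufferPool` API, but it mirrors why the log line is informational rather than an error):

```python
# Sketch of the behavior described above: allocations come from a bounded
# pool; once the pool is exhausted, a message like "Maximum memory usage
# reached" is logged and a plain buffer is allocated outside the pool.
# Reads still succeed either way.
class BoundedPool:
    def __init__(self, limit_bytes):
        self.limit = limit_bytes
        self.used = 0
        self.warned = False

    def allocate(self, size):
        if self.used + size <= self.limit:
            self.used += size
            return bytearray(size)            # served from the pool
        if not self.warned:                   # log once, INFO-level in spirit
            print(f"Maximum memory usage reached ({self.limit} bytes), "
                  f"cannot allocate chunk of {size} bytes")
            self.warned = True
        return bytearray(size)                # fallback: ordinary buffer
```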
That’s not an error. To the left of the log message is the severity, level
INFO.
Generally, I don’t recommend running Cassandra with only 2GB of RAM, or for
small datasets that can easily fit in memory. Is there a reason why you’re
picking Cassandra for this dataset?
On Thu, Mar 7, 2019 at 8:04 AM Kyry
Can we see the “nodetool tablestats” output for the biggest table as well?
From: Kenneth Brotman [mailto:kenbrot...@yahoo.com.INVALID]
Sent: Sunday, February 10, 2019 7:21 AM
To: user@cassandra.apache.org
Subject: RE: Maximum memory usage
Okay, that’s at the moment it was calculated. Still need to see histograms.
From: Rahul Reddy [mailto:rahulreddy1...@gmail.com]
Sent: Sunday, February 10, 2019 7:09 AM
To: user@cassandra.apache.org
Subject: Re: Maximum memory usage
Thanks Kenneth,
110mb is the biggest partition in
On one of the other DBs, with a 100mb partition*, out of memory happens very
frequently.
```
Percentile  SSTables  Write Latency  Read Latency  Partition Size  Cell Count
                      (micros)       (micros)      (bytes)
50%         0.00      0.00
```
Rahul,
Those partitions are tiny. Could you give us the table histograms for the
biggest tables?
Thanks,
Kenneth Brotman
From: Rahul Reddy [mailto:rahulreddy1...@gmail.com]
Sent: Sunday, February 10, 2019 6:43 AM
To: user@cassandra.apache.org
Subject: Re: Maximum memory usage
No, not running any nodetool commands. It happens 2 to 3 times a day.
On Thu, Feb 7, 2019, 2:29 AM dinesh.jo...@yahoo.com.INVALID
wrote:
> Are you running any nodetool commands during that period? IIRC, this is a
> log entry emitted by the BufferPool. It may be harmless unless it's happening
> very often or logging an OOM.
```
Percentile  SSTables  Write Latency  Read Latency  Partition Size  Cell Count
                      (micros)       (micros)      (bytes)
50%         1.00      24.60          219.34        258             4
75%         1.00      24
```
Are you running any nodetool commands during that period? IIRC, this is a log
entry emitted by the BufferPool. It may be harmless unless it's happening very
often or logging an OOM.
Dinesh
On Wednesday, February 6, 2019, 6:19:42 AM PST, Rahul Reddy
wrote:
Hello,
I see maximum memory usage
Can you give us the “nodetool tablehistograms” output?
Kenneth Brotman
From: Rahul Reddy [mailto:rahulreddy1...@gmail.com]
Sent: Wednesday, February 06, 2019 6:19 AM
To: user@cassandra.apache.org
Subject: Maximum memory usage
Hello,
I see maximum memory usage alerts in my system.log couple
You may have better luck switching to G1GC and using a much larger
heap (16 to 30GB). 4GB is likely too small for your amount of data,
especially if you have a lot of sstables. Then try increasing
file_cache_size_in_mb further.
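For reference, the settings mentioned above would typically be applied along these lines (the exact flags, values, and file locations vary by Cassandra and JVM version, so verify against your version's defaults before use):

```
# jvm.options -- switch to G1GC with a larger fixed heap
-XX:+UseG1GC
-Xms16G
-Xmx16G

# cassandra.yaml -- raise the chunk cache above the 512 MiB default
file_cache_size_in_mb: 1024
```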
Cheers,
Mark
On Tue, Mar 28, 2017 at 3:01 AM, Mokkapati, Bhargav (Nok