Re: Optimal backup strategy

2019-12-02 Thread Adarsh Kumar
Thanks Hossein. Just one more question: is there any special SOP or consideration we have to take for multi-site backup? Please share any helpful link, blog, or documented steps. Regards, Adarsh Kumar On Sun, Dec 1, 2019 at 10:40 PM Hossein Ghiyasi Mehr wrote: > 1. It's recommended to use commi
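For reference, a minimal per-node snapshot-and-ship sketch (the keyspace name, data path, and backup host below are hypothetical; for a full multi-site copy this has to run on every node):

    # Take a tagged snapshot of one keyspace on this node
    nodetool snapshot -t backup_$(date +%Y%m%d) my_keyspace

    # Ship the snapshot directories to the remote site
    rsync -a /var/lib/cassandra/data/my_keyspace/*/snapshots/backup_$(date +%Y%m%d)/ \
        backup-host:/backups/$(hostname)/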

Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-02 Thread Jeff Jirsa
This would be true except that the pretty print for the log message is done before the logging rate limiter is applied, so if you see MiB instead of a raw byte count, you're PROBABLY spending a ton of time in string formatting within the read path. This is fixed in 3.11.3 ( https://issues.apache.o

Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-02 Thread Reid Pinchback
Rahul, if my memory of this is correct, that particular logging message is noisy; the cache is pretty much always used to its limit (and why not, it’s a cache, no point in using less than you have). No matter what value you set, you’ll just change the “reached (….)” part of it. I think what wo
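If my reading is right, the limit behind this message is the chunk-cache cap, file_cache_size_in_mb in cassandra.yaml (the 512 MiB in the message matches its common default). A quick check of the configured value (config path is illustrative):

    grep -n 'file_cache_size_in_mb' /etc/cassandra/cassandra.yaml
    # e.g. file_cache_size_in_mb: 512

Raising it only changes the number printed in “reached (…)”; being a cache, it will still fill to whatever limit you set.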

Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-02 Thread Hossein Ghiyasi Mehr
It may be helpful: https://thelastpickle.com/blog/2018/08/08/compression_performance.html It's complex. Simple explanation: Cassandra keeps sstables in memory based on chunk size and sstable parts. It manages loading new sstables into memory based on requests on different sstables correctly. You shou
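A sketch of tuning the compression chunk size that post discusses (keyspace and table names are hypothetical; 16 KiB versus the 64 KiB default trades compression ratio for smaller reads):

    cqlsh -e "ALTER TABLE my_ks.my_table
              WITH compression = {'class': 'LZ4Compressor', 'chunk_length_in_kb': 16};"
    # Rewrite existing sstables so the new chunk size takes effect
    nodetool upgradesstables -a my_ks my_table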

RE: [EXTERNAL] Migration a Keyspace from 3.0.X to 3.11.2 Cluster which already have keyspaces

2019-12-02 Thread Durity, Sean R
The size of the data matters here. COPY TO/FROM is OK if the data is a few million rows per table, but not billions. It is also relatively slow (but with small data or a decent outage window, it could be fine). If the data is large and the outage time matters, you may need custom code to read fr
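For scale, the cqlsh round-trip being discussed looks like this (keyspace, table, and file names are hypothetical):

    cqlsh -e "COPY old_ks.my_table TO '/tmp/my_table.csv' WITH HEADER = TRUE;"
    cqlsh -e "COPY new_ks.my_table FROM '/tmp/my_table.csv' WITH HEADER = TRUE;"

Fine for a few million rows per table; beyond that, custom read/write code or a bulk-loading path is the usual route.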

Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-02 Thread Rajsekhar Mallick
Hello Rahul, I would request Hossein to correct me if I am wrong. Below is how it works. How does an application/database read something from the disk? A request comes in for a read -> the application code internally invokes system calls -> these kernel-level system calls will sche
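One way to actually watch those kernel-level read calls on a node (observational only; the pgrep pattern assumes the standard CassandraDaemon process name):

    sudo strace -f -e trace=pread64 -p "$(pgrep -f CassandraDaemon)" 2>&1 | head -20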

RE: [EXTERNAL] Re: Upgrade strategy for high number of nodes

2019-12-02 Thread Durity, Sean R
All my upgrades are without downtime for the application. Yes, do the binary upgrade one node at a time. Then run upgradesstables on as many nodes as your app load can handle (maybe you can point the app to a different DC while another DC is doing upgradesstables). Upgradesstables doesn’t cause
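A per-node sketch of that sequence (service name and paths are illustrative for a systemd-managed install):

    nodetool drain                    # flush memtables before stopping
    sudo systemctl stop cassandra
    # ...install the new binary version, then:
    sudo systemctl start cassandra
    # later, one node at a time as application load allows:
    nodetool upgradesstables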

RE: [EXTERNAL] performance

2019-12-02 Thread Durity, Sean R
I’m not sure this is fully the correct question to ask. The size of the data will matter. The importance of high availability matters. Performance can be tuned by taking advantage of Cassandra’s design strengths. In general, you should not be doing queries with a WHERE clause on non-key columns.
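To illustrate the key-versus-non-key point (the table and columns are hypothetical):

    # Single-partition read on the partition key: the fast path
    cqlsh -e "SELECT * FROM my_ks.users WHERE user_id = 42;"

    # Filtering on a non-key column forces a scan; cqlsh refuses it
    # unless you append ALLOW FILTERING -- avoid this in production
    cqlsh -e "SELECT * FROM my_ks.users WHERE email = 'a@b.c' ALLOW FILTERING;"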

Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-02 Thread Rahul Reddy
Thanks Hossein. How are chunks moved out of memory (LRU?) when it wants to make room for new requests to get chunks? If it has a mechanism to clear chunks from the cache, what causes "cannot allocate chunk"? Can you point me to any documentation? On Sun, Dec 1, 2019, 12:03 PM Hossein Ghiyasi Mehr w

Re: Uneven token distribution with allocate_tokens_for_keyspace

2019-12-02 Thread Enrico Cavallin
Hi Anthony, thank you for your hints; now the new DC is well balanced within 2%. I did read your article, but I thought it was needed only for new "clusters", not also for new "DCs"; but RF is per DC, so it makes sense. You TLP guys are doing a great job for the Cassandra community. Thank you, Enrico
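For anyone following the thread, these are the relevant cassandra.yaml lines on the new DC's nodes, set before they bootstrap (the keyspace name is hypothetical; its replication must already cover the new DC):

    grep -nE 'num_tokens|allocate_tokens_for_keyspace' /etc/cassandra/cassandra.yaml
    # num_tokens: 16
    # allocate_tokens_for_keyspace: my_keyspace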