No progress in compactionstats

2019-12-01 Thread Dipan Shah
Hello, I am running a 5-node Cassandra cluster on v3.7 and do not understand why the following is happening. I altered the compaction strategy of a table from SizeTiered to Leveled, and while running "nodetool compactionstats" I found that the SSTables were stuck and not getting compacted.
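
A minimal sketch of that strategy change and the usual follow-up checks, assuming a hypothetical keyspace/table demo.events (the names and the 160 MB target size are placeholders, not values from the thread):

    # Switch the table to LeveledCompactionStrategy
    cqlsh -e "ALTER TABLE demo.events
              WITH compaction = {'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 160};"

    # Watch pending/active compactions
    nodetool compactionstats

    # See how SSTables are spread across levels once the rewrite starts
    nodetool tablestats demo.events

    # If nothing progresses, check that compaction throughput is not throttled
    nodetool getcompactionthroughput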

Re: Optimal backup strategy

2019-12-01 Thread Hossein Ghiyasi Mehr
1. It's recommended to use the commit log after a single node failure. Cassandra has many options, such as replication factor, as a substitute solution. 2. Yes, right.
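
As a hedged illustration of the replication-factor point, a keyspace with RF 3 (hypothetical keyspace and datacenter names below) keeps data available through a single node failure without restoring from backup:

    CREATE KEYSPACE demo
      WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};

    -- after the failed node is replaced, a repair re-syncs its replicas:
    --   nodetool repair demo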

Re: Optimal backup strategy

2019-12-01 Thread Hossein Ghiyasi Mehr
If you need backups for this environment, you should use snapshots and incremental backups. The commit log backup solution depends on your environment and application. For example, you can use RAID 1 on the commit log disk to be safe against hardware failure.
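
A minimal sketch of the snapshot and incremental-backup pieces mentioned above; the snapshot tag and keyspace name are assumptions:

    # Full snapshot of a keyspace (hard links under each table's snapshots/<tag>/)
    nodetool snapshot -t nightly demo

    # cassandra.yaml: hard-link every flushed SSTable into the table's backups/ dir
    incremental_backups: true

    # Check whether incremental backups are enabled on a running node
    nodetool statusbackup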

Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-01 Thread Hossein Ghiyasi Mehr
Chunks are parts of SSTables. When there is enough space in memory to cache them, read performance will increase if the application requests them again. The real answer is application dependent; for example, write-heavy applications are different from read-heavy or read-write-heavy ones. Real-time application
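
For context, the message in the subject comes from the off-heap chunk cache (file_cache_size_mb, 512 MiB by default) filling up, and it is informational rather than an error. A sketch of how to inspect it, assuming a default log location:

    # The message is logged by the buffer pool when the chunk cache is full
    grep -i "maximum memory usage reached" /var/log/cassandra/system.log

    # Recent 3.x versions report chunk cache size and hit rate here
    nodetool info | grep -i "chunk cache"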

"Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-01 Thread Rahul Reddy
Hello, we are seeing "maximum memory usage reached (512 MiB), cannot allocate chunk of 1 MiB". I see this because file_cache_size_mb is set to 512 MB by default. The DataStax documentation recommends increasing file_cache_size_mb. We have 32 GB of memory overall and have allocated 16 GB to Cassandra. What is the recommended value in my case
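
A hedged sketch of the usual adjustment, assuming the 16 GB heap above leaves off-heap headroom on a 32 GB host; the 1024 figure is only an illustration, not a recommendation from the thread:

    # cassandra.yaml -- ceiling for the off-heap chunk cache (default 512 MiB)
    file_cache_size_mb: 1024

    # Restart the node, then watch whether the "cannot allocate chunk" messages
    # stop and the chunk cache hit rate (nodetool info) improves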

Migrating a Keyspace from a 3.0.X to a 3.11.2 Cluster which already has keyspaces

2019-12-01 Thread slmnjobs -
Hi everyone, I have a question about migrating a keyspace to another cluster. The main problem for us is that our new cluster already has 2 keyspaces and we are using them in production. Because we are not sure how the token ranges will change, we would like to share our migration plan here and get your
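
One common way to move a keyspace between clusters whose token ranges differ, sketched under the assumption that the schema is created on the target first; the keyspace, table, host names, and paths are placeholders:

    # 1. Copy the schema to the new cluster (adjust replication settings if needed)
    cqlsh old-node -e "DESCRIBE KEYSPACE legacy" > legacy_schema.cql
    cqlsh new-node -f legacy_schema.cql

    # 2. Snapshot the keyspace on every source node
    nodetool snapshot -t migrate legacy

    # 3. Lay the snapshot files out as keyspace/table and stream them in;
    #    sstableloader redistributes rows to the target cluster's token ranges
    mkdir -p /tmp/load/legacy/events
    cp /var/lib/cassandra/data/legacy/events-*/snapshots/migrate/* /tmp/load/legacy/events/
    sstableloader -d new-node1 /tmp/load/legacy/events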