Hello,
I am running a 5-node Cassandra cluster on version 3.7 and do not understand why
the following is happening. I altered the compaction strategy of a table
from SizeTieredCompactionStrategy to LeveledCompactionStrategy, and while running
"nodetool compactionstats" I found that the SSTables were stuck and not getting compacted.
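For reference, a strategy change of this shape is usually done with an ALTER TABLE; the keyspace and table names below are placeholders, and existing SSTables are only rewritten as compaction catches up:

```
ALTER TABLE my_ks.my_table
  WITH compaction = {'class': 'LeveledCompactionStrategy'};
```

Progress can then be watched with "nodetool compactionstats"; if nothing moves, checking "nodetool getcompactionthroughput" or forcing a compaction with "nodetool compact my_ks my_table" can help narrow down whether compaction is throttled or genuinely stuck.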
1. Relying on the commit log alone after a node failure is not the usual approach;
Cassandra has other options, such as the replication factor, that serve as the
standard safeguard.
2. Yes, that's right.
*VafaTech.com - A Total Solution for Data Gathering & Analysis*
On Fri, Nov 29, 2019 at 9:33 AM Adarsh Kumar wrote:
> Thanks Ahu and Hussei
If you need backups for this environment, you should use snapshots and
incremental backups.
A commit log backup solution depends on your environment and application.
For example, you can use RAID 1 on the commit log disk to protect against
hardware failure.
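As a sketch, the two backup mechanisms mentioned above look like this; the keyspace name and snapshot tag are placeholders:

```
# cassandra.yaml: keep hard links to every newly flushed SSTable
incremental_backups: true
```

```
# take a point-in-time snapshot of one keyspace
nodetool snapshot -t nightly_backup my_ks
```

Note that snapshots and incremental backups only cover data already flushed to SSTables, which is why the commit log is a separate concern.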
Chunks are parts of SSTables. When there is enough memory to cache them,
read performance improves when the application requests the same data again.
The real answer is application-dependent: write-heavy applications behave
differently from read-heavy or mixed read-write ones, and real-time
applications differ again.
Hello,
We are seeing memory usage reach 512 MB, after which Cassandra cannot allocate
another 1 MB chunk. I believe this is because file_cache_size_mb defaults to 512 MB.
The DataStax documentation recommends increasing file_cache_size_mb in this situation.
We have 32 GB of memory overall and have allocated 16 GB to the Cassandra heap.
What is the recommended value in my case?
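As a rough sketch of the sizing arithmetic, using the numbers from your post; the OS and off-heap reserves below are assumptions for illustration, not an official recommendation:

```python
# Rough sizing sketch for file_cache_size_mb (the chunk cache), assuming
# 32 GB total RAM with a 16 GB Cassandra heap, as described in the post.
total_ram_mb = 32 * 1024
heap_mb = 16 * 1024
os_reserve_mb = 4 * 1024      # assumed headroom for the OS and page cache
other_offheap_mb = 4 * 1024   # assumed bloom filters, index summaries, memtables

available_mb = total_ram_mb - heap_mb - os_reserve_mb - other_offheap_mb

# Cap at a conservative value; this ceiling is an assumption, not a
# documented Cassandra recommendation.
file_cache_size_mb = min(available_mb, 2048)
print(file_cache_size_mb)  # → 2048
```

The point of the cap is that the chunk cache competes with the OS page cache for the same memory, so giving it everything that is "left over" is usually counterproductive.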
Hi everyone,
I have a question about migrating a keyspace to another cluster. The main problem
for us is that the new cluster already has 2 keyspaces which we are using in production.
Because we are not sure how the token ranges will change, we would like to
share our migration plan here and get your feedback.
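If it helps the discussion: one common approach that sidesteps guessing about token ranges is to take a snapshot on the old cluster and stream it into the new one with sstableloader, which resolves the target token ranges itself. The hosts and path here are placeholders; the directory sstableloader is pointed at must end in keyspace/table:

```
sstableloader -d 10.0.0.1,10.0.0.2 /tmp/restore/my_ks/my_table/
```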