I want to add a couple of questions regarding incremental backups:
1. If I already have a Cassandra cluster running, would changing the
incremental_backups parameter in the cassandra.yaml of each node and then
restarting it do the trick?
2. Assuming I am creating a daily snapshot, what is the gain
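For what it's worth, on the clusters I've seen this is a one-line edit in cassandra.yaml followed by a rolling restart (a sketch; verify the flag name against your Cassandra version's default cassandra.yaml):

```yaml
# cassandra.yaml — when true, Cassandra hard-links each newly flushed
# sstable into a backups/ directory under the keyspace data directory
incremental_backups: true
```

Restart one node at a time so the cluster stays available while the setting takes effect.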
You should run repair. If disk space is the problem, try running cleanup and
a major compaction before the repair.
You can limit the amount of streamed data by running repair on each column
family separately.
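For example, a per-column-family run might look like this (a sketch for a 1.x-era cluster; the keyspace and column family names are placeholders, and the -pr flag restricts repair to the node's primary range):

```shell
# Repair one column family at a time so each run streams less data;
# repeat for every column family in the keyspace, on every node.
nodetool -h localhost repair -pr my_keyspace my_columnfamily
```
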
maki
On 2012/04/28, at 23:47, Raj N wrote:
> I have a 6 node cassandra cluster DC1=3, DC2=3 with 60 G
Apparently IntegerType is based on Java's BigInteger.
http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=src/java/org/apache/cassandra/db/marshal/IntegerType.java;hb=HEAD
Given the message, I suspect that you got some values between -2^15 and
2^15-1 (the range of a short int) t
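IntegerType composes and decomposes values via BigInteger, whose toByteArray() returns the minimal two's-complement byte array, so a value in the short range serializes to just two bytes. A small sketch of that length behavior (the class and helper names here are mine, not Cassandra's):

```java
import java.math.BigInteger;

public class VarintLengths {
    // IntegerType stores BigInteger's minimal two's-complement byte
    // array, so the encoded length grows with the value's magnitude.
    static int encodedLength(long value) {
        return BigInteger.valueOf(value).toByteArray().length;
    }

    public static void main(String[] args) {
        System.out.println(encodedLength(127));     // 1 byte
        System.out.println(encodedLength(32767));   // 2 bytes (max short)
        System.out.println(encodedLength(-32768));  // 2 bytes (min short)
        System.out.println(encodedLength(32768));   // 3 bytes
    }
}
```

So a two-byte encoded value is exactly what you would see for integers between -2^15 and 2^15-1.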
I have a 6-node Cassandra cluster (DC1=3, DC2=3) with 60 GB of data on each
node. I was bulk loading data over the weekend, but we forgot to turn off
the weekly nodetool repair job. As a result, repair was interfering while we
were bulk loading data. I canceled repair by restarting the nodes. But
unfortu
Hi
Currently I am taking a daily snapshot of my keyspace in production and have
already enabled incremental backups as well.
According to the documentation, the incremental backup option will create a
hard link in the backups folder when a new sstable is flushed. A snapshot
will copy all the data/index/e