Hi,

I have a few questions and am looking for answers.
I have a test cluster of 7 Cassandra 0.6.5 nodes with RF=2. The original
data size is about 100 GB, so with RF=2 the total load reported for the
cluster is about 200 GB, which is as expected.

1. I want to increase the RF to 3. This process entails changing the config
and then calling repair, right?
So I proceeded one node at a time: I changed the replication factor for the
keyspace in the config file on the first node, restarted that node, and then
ran a nodetool repair on it. I followed the same steps on every node after
that, since I read somewhere that repair should be invoked on one node at a
time (roughly the commands I used are sketched after this question).
(a) What is the best way to ascertain whether the repair has completed on a node?
(b) After the repair finished, I expected the total data load to be 300 GB.
However, the ring command shows the total load to be 370 GB. I double-checked,
and the config on all machines says RF=3. I am running a cleanup on each node
right now. Is a cleanup required after calling a repair? Am I missing
something?
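For reference, this is roughly what I did on each node in turn (host names
are placeholders, and I may have the exact nodetool option spelling slightly
off for 0.6.5):

    # edit ReplicationFactor from 2 to 3 for the keyspace in storage-conf.xml,
    # then restart Cassandra on that node
    nodetool -host <node-ip> repair     # anti-entropy repair on this node
    # ...wait for the repair to finish before moving to the next node...
    nodetool -host <node-ip> cleanup    # what I am now running, node by node
    nodetool -host <node-ip> ring       # to check the load reported per node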


2. This question is about multi-datacenter support. I plan to have a cluster
of 6 machines across 2 datacenters, with machines from the two datacenters
alternating on the ring, and RF=3. I already have the test setup described
above, which holds most of the data, but it is still configured with the
default RackUnawareStrategy. I am looking for the right steps to move it to
RackAwareStrategy with the PropertyFileEndPointSnitch that I read about
somewhere (I'm not sure whether that is supported in 0.6.5, but the
CustomEndPointSnitch is the same thing, right?), all without having to
repopulate any data (a config sketch of what I have in mind follows this
paragraph).
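To make sure I'm describing the change correctly, this is roughly the
per-keyspace section of storage-conf.xml I think I would need (the keyspace
name is a placeholder, and I'm guessing at the exact class name of the
contrib snitch):

    <Keyspace Name="MyKeyspace">
        <!-- existing ColumnFamily definitions stay as they are -->
        <ReplicaPlacementStrategy>org.apache.cassandra.locator.RackAwareStrategy</ReplicaPlacementStrategy>
        <ReplicationFactor>3</ReplicationFactor>
        <!-- currently: org.apache.cassandra.locator.EndPointSnitch -->
        <EndPointSnitch>org.apache.cassandra.locator.PropertyFileEndPointSnitch</EndPointSnitch>
    </Keyspace>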
Currently there is only one datacenter, but I am planning to set the cluster
up as it would be for multi-datacenter support and run it that way in the
single datacenter; when the second datacenter comes up, I would just copy all
the files across to the new nodes in the second datacenter and bring the
whole cluster up. Will this work? I have tried copying files to a new node,
shutting down all nodes, and bringing everything back up, and it recognized
the new IPs.
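This is the kind of datacenter/rack mapping I have in mind for the snitch's
property file, with the six nodes alternating between the two datacenters
(the IPs are placeholders, and the file name and key=value format are only my
guess from what I've read about the contrib snitch):

    # e.g. rack.properties (name/format may differ in the 0.6 contrib snitch)
    # tokens would be assigned so DC1 and DC2 nodes alternate around the ring
    10.0.0.1=DC1:RAC1
    10.0.0.2=DC2:RAC1
    10.0.0.3=DC1:RAC1
    10.0.0.4=DC2:RAC1
    10.0.0.5=DC1:RAC1
    10.0.0.6=DC2:RAC1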


Thanks
Gurpreet
