You're right, Christopher, I missed the fact that with RF=3 NTS will always
place a replica on us-east-1d, so in this case a repair on this node would be
sufficient. Thanks for clarifying!
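A rough sketch of the two repair approaches discussed in this thread, for this 5-node topology. The host names below are hypothetical, and `-h` just points nodetool at each node:

```shell
# Option 1: since with RF=3 across 3 racks the lone us-east-1d node
# holds a replica of every range, a single full (non -pr) repair on
# it covers the whole keyspace:
nodetool -h node-us-east-1d repair keyspace cf

# Option 2: primary-range repair, which must instead be run on
# *every* node so that all primary ranges get covered:
for host in node-1b-a node-1b-b node-1c-a node-1c-b node-1d; do
  nodetool -h "$host" repair -pr keyspace cf
done
```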
2016-09-05 11:28 GMT-03:00 Christopher Bradford :
If each AZ has a different rack identifier and the keyspace uses
NetworkTopologyStrategy with a replication factor of 3, then the single host
in us-east-1d *will receive 100% of the data*. This is due
to NetworkTopologyStrategy's preference for placing replicas across
different racks before placing multiple replicas in the same rack.
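That rack-first behavior can be sketched with a toy simulation. This is not Cassandra's actual code (real NTS iterates per DC and works from the token ring metadata); it is a minimal model of the thread's topology, with made-up token values, showing that the lone us-east-1d node ends up in every replica set:

```python
# Toy model: one DC, RF=3, racks us-east-1b (2 nodes),
# us-east-1c (2 nodes), us-east-1d (1 node). Tokens are invented.
RING = sorted([
    (0,  "us-east-1b"), (20, "us-east-1c"),
    (40, "us-east-1d"), (60, "us-east-1b"),
    (80, "us-east-1c"),
])

def replicas(key_token, rf=3):
    """Walk the ring clockwise from key_token, preferring nodes in
    racks not yet holding a replica (NTS-style rack diversity)."""
    n = len(RING)
    start = next((i for i, (t, _) in enumerate(RING) if t >= key_token), 0)
    chosen, racks, skipped = [], set(), []
    for step in range(n):
        node = RING[(start + step) % n]
        if node[1] not in racks:
            chosen.append(node)
            racks.add(node[1])
        else:
            skipped.append(node)  # only used if racks run out
        if len(chosen) == rf:
            return chosen
    # Fewer racks than RF: fill remaining slots from skipped nodes.
    return chosen + skipped[: rf - len(chosen)]

# With 3 racks and RF=3, every replica set has one node per rack,
# so the single us-east-1d node appears in all of them:
for t in (5, 25, 45, 65, 85):
    assert any(rack == "us-east-1d" for _, rack in replicas(t))
```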
If I understand the way replication is done, the node in us-east-1d has
all the (data) replicas, right?
No, for this to be correct, you'd need to have one DC per AZ, which is not
the case here since you have a single DC encompassing multiple AZs. Right now,
replicas will be spread across 3 distinct AZs.
Thanks for the info, Paulo.
My cluster is in AWS, the keyspace has replication factor 3 with
NetworkTopologyStrategy in one DC which has 5 nodes: 2 in us-east-1b, 2 in
us-east-1c and 1 in us-east-1d. If I understand the way replication is
done, the node in us-east-1d has all the (data) replicas, right?
https://issues.apache.org/jira/browse/CASSANDRA-7450
2016-09-01 13:11 GMT-03:00 Li, Guangxing :
Hi,
I have a cluster running 2.0.9 with 2 data centers. I noticed that
'nodetool repair -pr keyspace cf' runs very slow (OpsCenter shows that the
node's data size is 39 GB and the largest SSTable size is like 7 GB so the
column family is not huge, SizeTieredCompactionStrategy is used). Repairing
a