Hello Owen,

It seems you did not configure the tokens for all of your nodes correctly. See the section "Calculating Tokens for multiple data centers" here: http://www.datastax.com/docs/0.8/install/cluster_init
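For reference, below is a rough sketch of the per-data-center calculation that page describes, assuming RandomPartitioner (a 0..2^127 token space) and an arbitrary offset of 100 for the second data center. The node counts match your cluster, and the file name is only illustrative:

calculate_tokens.py
-------------------------------
# Sketch of the "calculate tokens per data center, then offset each
# additional DC" approach from the DataStax cluster_init page.
# Each DC is treated as its own evenly spaced ring over 0..2**127
# (RandomPartitioner), and every later DC is shifted by a small
# offset so no two nodes end up with the same token.

RING_RANGE = 2 ** 127  # RandomPartitioner token space

def tokens_for_dc(node_count, offset):
    # Evenly space node_count tokens around the ring, then apply the offset.
    return [(i * RING_RANGE // node_count + offset) % RING_RANGE
            for i in range(node_count)]

if __name__ == "__main__":
    # dc1 and dc2 each have 3 nodes, as in your topology file.
    for dc_index, (dc, nodes) in enumerate([("dc1", 3), ("dc2", 3)]):
        for node, token in enumerate(tokens_for_dc(nodes, dc_index * 100)):
            print("%s node %d: initial_token: %d" % (dc, node + 1, token))

You would then put each calculated value into initial_token in that node's cassandra.yaml, or use "nodetool move <token>" on a node that is already running.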
Best regards,
Shamim

--- On Mon, Dec 3, 2012 at 4:42 PM, Owen Davies <cassan...@obduk.com> wrote:

We have a two data center test Cassandra setup running, and are writing to it using LOCAL_QUORUM. When reading, sometimes the data is there and sometimes it is not, which we think is a replication issue, even though we have left plenty of time after the writes.

We have the following setup:

cassandra -v: 1.1.6

cassandra.yaml
-----------------------
cluster_name: something
endpoint_snitch: PropertyFileSnitch

cassandra-topology.properties
--------------------------------------------
192.168.1.1=dc1:rack1
192.168.1.2=dc1:rack1
192.168.1.3=dc1:rack1
192.168.2.1=dc2:rack1
192.168.2.2=dc2:rack1
192.168.2.3=dc3:rack1
default=nodc:norack

cassandra-cli
--------------------
Keyspace: example:
  Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
  Durable Writes: true
    Options: [dc1:3, dc2:3]

nodetool ring
-------------------
Address      DC   Rack   Status  State   Load       Effective-Ownership  Token
                                                                          159447687142037741049740936276011715300
192.168.1.2  dc1  rack1  Up      Normal  111.17 GB  100.00%              67165620003619490909052924699950283577
192.168.1.1  dc1  rack1  Up      Normal  204.57 GB  100.00%              71045951808949151217931264995073558408
192.168.2.1  dc2  rack1  Up      Normal  209.92 GB  100.00%              107165019770579893816561717940612111506
192.168.1.3  dc1  rack1  Up      Normal  209.92 GB  100.00%              114165363957966360026729000065495595953
192.168.2.3  dc2  rack1  Up      Normal  198.22 GB  100.00%              147717787092318068320268200174271353451
192.168.2.2  dc2  rack1  Up      Normal  179.31 GB  100.00%              159447687142037741049740936276011715300

Does anyone have any ideas why every server does not have the same amount of data on it?

Thanks,

Owen