I don't think that's correct for a multi-DC ring, but you'll want to hear a
final answer from someone more authoritative; I could easily be wrong. Try
using the built-in token generating tool (token-generator) - I don't seem to
have it on my hosts (also 1.1.6), so I can't confirm. I used the tokentoolv2.py
tool (from here: http://www.datastax.com/docs/1.0/initialize/token_generation)
and got the following, which looks evenly spaced to me rather than offset:
tstafford@tycen-linux:Cassandra$ ./tokentoolv2.py 3 3
{
"0": {
"0": 0,
"1": 56713727820156410577229101238628035242,
"2": 113427455640312821154458202477256070485
},
"1": {
"0": 28356863910078205288614550619314017621,
"1": 85070591730234615865843651857942052863,
"2": 141784319550391026443072753096570088106
}
}
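For what it's worth, the spacing above can be reproduced with a short sketch (my own reconstruction of what the tool appears to compute, not the actual tool): evenly spaced RandomPartitioner tokens per DC, with each DC's set shifted by a fraction of one node's slice so the per-DC rings interleave.

```python
# Sketch (assumption, not the real tokentoolv2.py): evenly spaced
# RandomPartitioner tokens for each DC, with each DC offset so its
# tokens interleave with the other DCs' tokens instead of colliding.

RING_RANGE = 2 ** 127  # RandomPartitioner token space: 0 .. 2**127


def even_tokens(nodes_per_dc, num_dcs):
    tokens = {}
    for dc in range(num_dcs):
        # Shift each DC by dc / (nodes_per_dc * num_dcs) of the ring,
        # i.e. an even fraction of one node's slice.
        offset = dc * RING_RANGE // (nodes_per_dc * num_dcs)
        tokens[dc] = [
            (i * RING_RANGE // nodes_per_dc + offset) % RING_RANGE
            for i in range(nodes_per_dc)
        ]
    return tokens


print(even_tokens(3, 2))
```

Called with 3 nodes and 2 DCs, this reproduces the token values pasted above (0, 56713..., 113427... for the first DC; 28356..., 85070..., 141784... for the second).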
-Tycen
From: Dwight Smith [mailto:[email protected]]
Sent: Wednesday, March 20, 2013 11:37 AM
To: Dwight Smith; '[email protected]'
Subject: RE: Question regarding multi datacenter and LOCAL_QUORUM
Hmm - the ring output follows; the tokens in AZ2 are offset by 100:
Address       DC   Rack  Status  State   Load       Effective-Ownership  Token
                                                                         113427455640312821154458202477256070585
xx.yy.zz.143  AZ1  RAC1  Up      Normal  626.21 KB  100.00%              0
xx.yy.zz.145  AZ1  RAC1  Up      Normal  622.73 KB  100.00%              56713727820156410577229101238628035242
xx.yy.zz.146  AZ1  RAC1  Up      Normal  622.49 KB  100.00%              113427455640312821154458202477256070485
xx.yy.zz.147  AZ2  RAC2  Up      Normal  550.31 KB  100.00%              100
xx.yy.zz.148  AZ2  RAC2  Up      Normal  622.05 KB  100.00%              56713727820156410577229101238628035342
xx.yy.zz.149  AZ2  RAC2  Up      Normal  483.18 KB  100.00%              113427455640312821154458202477256070585
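A quick sanity check of that offset (my own arithmetic, using the token values from the ring output above):

```python
# Each AZ2 token in the ring output is the corresponding AZ1 token
# plus a constant offset of 100.
az1 = [0,
       56713727820156410577229101238628035242,
       113427455640312821154458202477256070485]
az2 = [100,
       56713727820156410577229101238628035342,
       113427455640312821154458202477256070585]
print([b - a for a, b in zip(az1, az2)])  # -> [100, 100, 100]
```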
From: Dwight Smith
Sent: Wednesday, March 20, 2013 11:29 AM
To: '[email protected]'
Subject: RE: Question regarding multi datacenter and LOCAL_QUORUM
Actually the tokens in AZ2 are not correct.
I'll get those corrected - thanks for the pointer.
From: Tycen Stafford [mailto:[email protected]]
Sent: Wednesday, March 20, 2013 11:25 AM
To: [email protected]<mailto:[email protected]>
Subject: RE: Question regarding multi datacenter and LOCAL_QUORUM
Okay - that looks alternated to me. I'm assuming 147, 148 and 149 are these,
then:
28356863910078205288614550619314017621
85070591730234615865843651857942052864
141784319550391026443072753096570088106
I'm out of ideas - sorry I couldn't help more.
-Tycen
From: Dwight Smith [mailto:[email protected]]
Sent: Wednesday, March 20, 2013 11:10 AM
To: '[email protected]'
Subject: RE: Question regarding multi datacenter and LOCAL_QUORUM
From the yamls:
.143
initial_token: 0
.145
initial_token: 56713727820156410577229101238628035242
.146
initial_token: 113427455640312821154458202477256070485
From: Tycen Stafford [mailto:[email protected]]
Sent: Wednesday, March 20, 2013 10:43 AM
To: [email protected]<mailto:[email protected]>
Subject: RE: Question regarding multi datacenter and LOCAL_QUORUM
Did you alternate your tokens? I may be off base, but if not, that could be why
you're seeing cross-DC requests.
-Tycen
From: Dwight Smith [mailto:[email protected]]
Sent: Wednesday, March 20, 2013 10:30 AM
To: [email protected]<mailto:[email protected]>
Subject: Question regarding multi datacenter and LOCAL_QUORUM
Hi
I have 2 data centers, with 3 nodes in each DC, version 1.1.6, replication
factor 2. Topology properties:
# Cassandra Node IP=Data Center:Rack
xx.yy.zz.143=AZ1:RAC1
xx.yy.zz.145=AZ1:RAC1
xx.yy.zz.146=AZ1:RAC1
xx.yy.zz.147=AZ2:RAC2
xx.yy.zz.148=AZ2:RAC2
xx.yy.zz.149=AZ2:RAC2
Using LOCAL_QUORUM, my understanding was that reads/writes would be processed
locally (by the coordinator) and requests sent to the remaining nodes in the
DC, but in the system log for 146 I observe that this is not the case. Extract
from the log:
DEBUG [Thrift:1] 2013-03-19 00:00:53,312 CassandraServer.java (line 306)
get_slice
DEBUG [Thrift:1] 2013-03-19 00:00:53,313 ReadCallback.java (line 79) Blockfor
is 2; setting up requests to /xx.yy.zz.146,/xx.yy.zz.143,/xx.yy.zz.145
DEBUG [Thrift:1] 2013-03-19 00:00:53,334 CassandraServer.java (line 306)
get_slice
DEBUG [Thrift:1] 2013-03-19 00:00:53,334 ReadCallback.java (line 79) Blockfor
is 2; setting up requests to /xx.yy.zz.146,/xx.yy.zz.143
DEBUG [Thrift:1] 2013-03-19 00:00:53,366 CassandraServer.java (line 306)
get_slice
DEBUG [Thrift:1] 2013-03-19 00:00:53,367 ReadCallback.java (line 79) Blockfor
is 2; setting up requests to /xx.yy.zz.146,/xx.yy.zz.143,/xx.yy.zz.145
DEBUG [Thrift:1] 2013-03-19 00:00:53,391 CassandraServer.java (line 589)
batch_mutate
DEBUG [Thrift:1] 2013-03-19 00:00:53,418 CassandraServer.java (line 589)
batch_mutate
DEBUG [Thrift:1] 2013-03-19 00:00:53,429 CassandraServer.java (line 306)
get_slice
DEBUG [Thrift:1] 2013-03-19 00:00:53,429 ReadCallback.java (line 79) Blockfor
is 2; setting up requests to /xx.yy.zz.146,/xx.yy.zz.145
DEBUG [Thrift:1] 2013-03-19 00:00:53,441 CassandraServer.java (line 306)
get_slice
DEBUG [Thrift:1] 2013-03-19 00:00:53,441 ReadCallback.java (line 79) Blockfor
is 2; setting up requests to /xx.yy.zz.146,/xx.yy.zz.143
The batch mutates are as expected - handled locally, two replicas, and hints to
DC AZ2 - but why the unexpected behavior for the get_slice requests? This is
observed throughout the log.
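As an aside, the "Blockfor is 2" in those lines is consistent with the standard quorum formula applied to a replication factor of 2 in the local DC (my reading of the log, not an authoritative answer):

```python
# Quorum count as Cassandra computes it: floor(rf / 2) + 1.
# With RF 2 in the local DC, LOCAL_QUORUM blocks for 2 responses,
# matching the "Blockfor is 2" lines in the log above.
def quorum(rf):
    return rf // 2 + 1


print(quorum(2))  # -> 2
```

Note that all the endpoints listed in those log lines (143, 145, 146) are in AZ1, so the requests are local even when three nodes appear; one possible explanation for requests being set up to more endpoints than the block count is read repair selecting all replicas as targets, though someone more familiar with 1.1 internals should confirm.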
Thanks much