See the bug report; the ownership calculation lives in the implementations of IPartitioner.describeOwnership().
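
Roughly, that calculation walks the sorted token list and credits each node with the span of the ring back to the previous token, with no notion of datacenters. A simplified sketch of the idea (not the actual Cassandra source; it assumes the RandomPartitioner's 0..2^127 token space):

    import java.math.BigInteger;
    import java.util.*;

    public class OwnershipSketch {
        // RandomPartitioner token space: 0 .. 2^127
        static final BigInteger RANGE = BigInteger.ONE.shiftLeft(127);

        // Credit each token with the ring span back to its predecessor,
        // as if all nodes sat on a single ring.
        static Map<BigInteger, Double> describeOwnership(List<BigInteger> sortedTokens) {
            Map<BigInteger, Double> ownership = new LinkedHashMap<BigInteger, Double>();
            BigInteger prev = sortedTokens.get(sortedTokens.size() - 1); // wrap around
            for (BigInteger token : sortedTokens) {
                BigInteger gap = token.subtract(prev).mod(RANGE);
                ownership.put(token, gap.doubleValue() / RANGE.doubleValue());
                prev = token;
            }
            return ownership;
        }

        public static void main(String[] args) {
            BigInteger half = RANGE.shiftRight(1); // 85070591730234615865843651857942052864
            List<BigInteger> tokens = Arrays.asList(
                    BigInteger.ZERO,           // 10.4.64.63  (DC1)
                    BigInteger.ONE,            // 10.4.65.55  (DC2)
                    half,                      // 10.4.65.73  (DC1)
                    half.add(BigInteger.ONE)); // 10.4.64.166 (DC2)
            for (Map.Entry<BigInteger, Double> e : describeOwnership(tokens).entrySet())
                System.out.printf("%-40s %6.2f%%%n", e.getKey(), e.getValue() * 100);
        }
    }

The gap in front of each DC2 token is exactly 1, so nodetool rounds their ownership to 0.00%, even though data is still being replicated to them (note the 36.21 MB load on 10.4.65.55).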

Cheers

-----------------
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 18/08/2011, at 2:58 AM, Oleg Tsvinev wrote:

> Aaron,
> 
> Can you point to a line in the Cassandra sources where you believe it
> does not understand the "multi ring" approach?
> I'm not sure about the Cassandra team, but the Hector team likes pull
> requests with patches.
> Anyway, I believe I should run a test to see whether data is indeed
> replicated between the datacenters.
> 
> And I voted on the issue.
> 
> On Wed, Aug 17, 2011 at 2:20 AM, aaron morton <aa...@thelastpickle.com> wrote:
>> 
>> The calculation for ownership does not understand the "multi ring" approach 
>> to assigning tokens. I've created 
>> https://issues.apache.org/jira/browse/CASSANDRA-3047 for you.
>> Otherwise your tokens look good to me.
>> Cheers
>> -----------------
>> Aaron Morton
>> Freelance Cassandra Developer
>> @aaronmorton
>> http://www.thelastpickle.com
>> On 17/08/2011, at 9:19 AM, Oleg Tsvinev wrote:
>> 
>> Hi all,
>> 
>> I followed the instructions at
>> http://wiki.apache.org/cassandra/Operations#Token_selection
>> to create a Cassandra cluster spanning two datacenters. Now I see that
>> the nodes belonging to datacenter DC2 own 0% of the ring; I would
>> expect them to own 50%.
>> 
>> Does anyone have an idea what's going on here?
>> 
>> root@casper02:~# nodetool -h 10.4.64.63 ring
>> Address         DC    Rack  Status State   Load       Owns    Token
>>                                                                85070591730234615865843651857942052865
>> 10.4.64.63      DC1   RAC1  Up     Normal  36.19 MB   50.00%  0
>> 10.4.65.55      DC2   RAC1  Up     Normal  36.21 MB   0.00%   1
>> 10.4.65.73      DC1   RAC1  Up     Normal  530.12 KB  50.00%  85070591730234615865843651857942052864
>> 10.4.64.166     DC2   RAC1  Up     Normal  525.68 KB  0.00%   85070591730234615865843651857942052865
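>> 
>> For reference, these tokens follow the wiki's "multi ring" scheme:
>> each datacenter's nodes split the ring evenly, and each subsequent
>> datacenter's tokens are offset by 1. A rough sketch of the arithmetic
>> (illustrative only, not from the wiki or the Cassandra source, and
>> assuming the RandomPartitioner's 0..2^127 space):
>> 
>>     import java.math.BigInteger;
>> 
>>     public class MultiRingTokens {
>>         // RandomPartitioner token space: 0 .. 2^127
>>         static final BigInteger RANGE = BigInteger.ONE.shiftLeft(127);
>> 
>>         public static void main(String[] args) {
>>             int nodesPerDc = 2, dcCount = 2;
>>             for (int dc = 0; dc < dcCount; dc++) {
>>                 for (int i = 0; i < nodesPerDc; i++) {
>>                     // even spacing within the DC, then offset by the DC index
>>                     BigInteger token = RANGE.multiply(BigInteger.valueOf(i))
>>                             .divide(BigInteger.valueOf(nodesPerDc))
>>                             .add(BigInteger.valueOf(dc));
>>                     System.out.printf("DC%d node %d: %s%n", dc + 1, i + 1, token);
>>                 }
>>             }
>>         }
>>     }
>> 
>> This prints 0 and 85070591730234615865843651857942052864 for DC1, and
>> 1 and 85070591730234615865843651857942052865 for DC2, matching the
>> ring output above.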
>> 
>> Thank you,
>>  Oleg
>> 
>> -------------
>> 
>> On Mon, Aug 15, 2011 at 1:39 PM, Oleg Tsvinev <oleg.tsvi...@gmail.com> wrote:
>> 
>> Hi all,
>> 
>> I have a question that the documentation doesn't clearly answer. I have
>> the following requirements:
>> 
>> 1. Synchronously store data in datacenter DC1 on 2+ nodes
>> 2. Asynchronously replicate the same data to DC2 and store it on 2+
>>    nodes, to act as a hot standby
>> 
>> Now, I have configured keyspaces with o.a.c.l.NetworkTopologyStrategy
>> with strategy_options=[{DC1:2, DC2:2}] and use the LOCAL_QUORUM
>> consistency level, following the documentation here:
>> http://www.datastax.com/docs/0.8/operations/datacenter
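>> 
>> For concreteness, the keyspace was created along these lines
>> (cassandra-cli, 0.8-style syntax; the keyspace name is just a
>> placeholder):
>> 
>>     create keyspace MyKeyspace
>>       with placement_strategy = 'org.apache.cassandra.locator.NetworkTopologyStrategy'
>>       and strategy_options = [{DC1:2, DC2:2}];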
>> 
>> Now, how do I assign initial tokens? Say I have 6 nodes total, 3 in
>> DC1 and 3 in DC2, and I create a ring as if all 6 nodes shared the
>> total 2^127 token space equally (i.e. token_i = i * 2^127 / 6).
>> 
>> Now say node N1 in DC2 owns key K and is in the remote datacenter
>> (from the point of view of an app in DC1). Wouldn't Cassandra always
>> forward K to the DC2 node N1, thus turning asynchronous writes into
>> synchronous ones? The performance impact would be huge, as the latency
>> between DC1 and DC2 is significant.
>> 
>> I hope there's an answer and I'm just missing something. My case falls
>> under Disaster Recovery in
>> http://www.datastax.com/docs/0.8/operations/datacenter but I don't see
>> how Cassandra will support my use case.
>> 
>> I appreciate any help on this.
>> 
>> Thank you,
>>  Oleg
>> 
>> 
