In a multi-DC deployment I always explicitly specify the local DC for the
DC-aware round-robin policy in my apps (the localDc argument on the constructor).

http://docs.datastax.com/en/drivers/nodejs/3.0/module-policies_loadBalancing-DCAwareRoundRobinPolicy.html

If you want to control which DC the driver treats as local, just do that.
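
A minimal sketch of what that looks like with the DataStax node.js driver (the
DC name and contact points below are placeholders, not values from this thread):

const cassandra = require('cassandra-driver');

// Name the local DC explicitly so the driver never has to infer it from the
// seed nodes ('DC1' and the addresses are placeholder values).
const dcAware = new cassandra.policies.loadBalancing.DCAwareRoundRobinPolicy('DC1');

const client = new cassandra.Client({
  contactPoints: ['10.0.0.1', '10.0.0.2'],
  policies: { loadBalancing: dcAware }
});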


> On 29 Mar 2016, at 23:04, Eric Stevens <migh...@gmail.com> wrote:
> 
> How this works is probably documented in greater detail at the link I 
> provided before than I can do justice to here. 
> 
> TokenAware uses its configured child strategy to determine node locality.
> DCAwareRoundRobin uses a configuration property, or, if all of its seed nodes
> are in the same DC, it assumes nodes in that DC to be local.  LatencyAware
> uses latency metrics to determine locality.
> 
> LOCAL_XXX consistency, as the name implies, is considered satisfied if _XXX
> replicas in the coordinator node's local datacenter have acknowledged the
> write (or answered for the read).  If your load balancer considers nodes from
> multiple datacenters local (i.e. it's shipping queries to nodes that belong
> to several DCs), local consistency is still evaluated only against the local
> datacenter of the node which is coordinating the query - that is to say,
> consistency is not a driver-level property, but a coordinator-level property
> that is supplied by the driver.
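> 
> A quick illustrative sketch of that from the node.js driver side (the
> keyspace, table, id and address here are placeholder assumptions); the
> consistency level is only shipped with the query, and the coordinator that
> receives it decides whether LOCAL_QUORUM is met:
> 
> const cassandra = require('cassandra-driver');
> const client = new cassandra.Client({ contactPoints: ['10.0.0.1'] });
> 
> // Request LOCAL_QUORUM for this query; the coordinator checks
> // acknowledgements only against replicas in its own datacenter.
> client.execute(
>   'SELECT * FROM my_ks.my_table WHERE id = ?',
>   ['some-id'],
>   { prepare: true, consistency: cassandra.types.consistencies.localQuorum },
>   function (err, result) {
>     if (err) console.error(err);
>   }
> );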
> 
>> On Tue, Mar 29, 2016 at 8:01 AM X. F. Li <lixf...@gmail.com> wrote:
>> Thanks for the explanation. My questions are:
>> * How the client driver determines which Cassandra node is considered
>> "local". Is it auto-discovered (if so, how?) or manually specified somewhere?
>> * Whether local_xxx consistencies always fail when a partition is not
>> replicated in the local DC, as specified in its replication strategy.
>> 
>>  Perhaps I should ask the node.js client authors about this.
>> 
>> 
>>> On Monday, March 28, 2016 07:47 PM, Eric Stevens wrote:
>>> > Local quorum works in the same data center as the coordinator node, but
>>> > when an app server executes the write query, how is the coordinator node
>>> > chosen?
>>> 
>>> It typically depends on the driver, and decent drivers offer you several
>>> options for this, usually called a load balancing strategy.  You indicate
>>> that you're using the node.js driver (presumably the DataStax version), 
>>> which is documented here: 
>>> http://docs.datastax.com/en/developer/nodejs-driver/3.0/common/drivers/reference/tuningPolicies.html
>>> 
>>> I'm not familiar with the node.js driver, but I am familiar with the Java 
>>> driver, and since they use the same terminology RE load balancing, I'll 
>>> assume they work the same.
>>> 
>>> A typical way to set that up is to use the TokenAware policy with
>>> DCAwareRoundRobinPolicy as its child policy.  This will prefer to route
>>> queries to the primary replica (or a secondary replica if the primary is
>>> offline) in the local datacenter for that query, if the replica can be
>>> discovered automatically by the driver, such as with prepared statements.
>>> 
>>> Where the replica discovery can't be accomplished, TokenAware defers to the 
>>> child policy to choose the host.  In the case of DCAwareRoundRobinPolicy 
>>> that means it iterates through the hosts of the configured local datacenter 
>>> (defaulted to the DC of the seed nodes if they're all in the same DC) for 
>>> each subsequent execution.
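>>> 
>>> Assuming the node.js driver mirrors the Java one here, a minimal sketch of
>>> that pairing might look like this (DC name and contact point are
>>> placeholders):
>>> 
>>> const cassandra = require('cassandra-driver');
>>> const lb = cassandra.policies.loadBalancing;
>>> 
>>> // TokenAware routes to a replica for the statement's partition key when it
>>> // can compute the key; otherwise it falls back to the child policy, which
>>> // round-robins over hosts in the named local DC.
>>> const policy = new lb.TokenAwarePolicy(new lb.DCAwareRoundRobinPolicy('DC1'));
>>> 
>>> const client = new cassandra.Client({
>>>   contactPoints: ['10.0.0.1'],
>>>   policies: { loadBalancing: policy }
>>> });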
>>> 
>>>> On Fri, Mar 25, 2016 at 2:04 PM X. F. Li <lixf...@gmail.com> wrote:
>>>> Hello,
>>>> 
>>>> Local quorum works in the same data center as the coordinator node, but
>>>> when an app server executes the write query, how is the coordinator node
>>>> chosen?
>>>> 
>>>> I use the node.js driver. How does the driver client determine which
>>>> Cassandra nodes are in the same DC as the client node? Does it use the
>>>> private network IP [192.168.x.x etc.] to auto-detect, or must I manually
>>>> provide a loadBalancing policy via `new DCAwareRoundRobinPolicy(localDcName)`?
>>>> 
>>>> If a partition is not available in the local DC, i.e. if the local
>>>> replica node fails or all replica nodes are in a remote DC, will local
>>>> quorum fail? If it doesn't fail, there is no guarantee that all queries
>>>> on a partition will be directed to the same data center, so does that
>>>> mean strong consistency cannot be expected?
>>>> 
>>>> Another question:
>>>> 
>>>> Suppose I have replication factor 3. If one of the nodes fails, will
>>>> queries with ALL consistency fail if the queried partition is on the
>>>> failed node? Or would they continue to work with 2 replicas during the
>>>> time while Cassandra is replicating the partitions on the failed node to
>>>> re-establish 3 replicas?
>>>> 
>>>> Thank you.
>>>> Regards,
>>>> 
>>>> X. F. Li
