What snitch do you have configured? We typically see data spread evenly 
across all of our nodes.
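
For example, if you run PropertyFileSnitch (just a sketch; substitute 
whatever snitch you actually use), cassandra.yaml would carry:

    endpoint_snitch: PropertyFileSnitch

and conf/cassandra-topology.properties would map each node to its DC and 
rack (addresses taken from your ring output below):

    # ip=DC:rack
    192.168.2.1=us-east:1b
    192.168.2.2=us-east:1b
    192.168.2.6=us-west:1c
    192.168.2.7=us-west:1c

If the snitch doesn't know about your datacenters, the cluster can't tell 
which replicas are local, and LOCAL_QUORUM won't behave as you expect.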

Anthony


On 17/09/2011, at 10:06 AM, Chris Marino wrote:

> Hi, I have a question about what to expect when running a cluster across 
> datacenters with Local Quorum consistency.
> 
> My simplistic assumption was that an 8 node cluster split across 2 data 
> centers and running with local quorum would perform roughly the same as a 
> 4 node cluster in a single data center.
> 
> I'm 95% certain we've set up the keyspace so that the entire range is in one 
> datacenter and the client is local. I see the keyspace split across all the 
> local nodes, with the remote nodes owning 0%. Yet when I run the stress tests 
> against this configuration with local quorum, I see dramatically different 
> results than when I ran the same tests against a 4 node cluster. I'm only 
> 95% certain because the documentation on how to configure this is pretty 
> thin.
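> 
> (For reference, in current CQL syntax the keyspace definition would look 
> something like this; a sketch only, with the keyspace name and replica 
> count made up:
> 
>     CREATE KEYSPACE stress_ks
>       WITH replication = {'class': 'NetworkTopologyStrategy',
>                           'us-east': 2};
> 
> adding a 'us-west': N entry here is what would place replicas in the 
> remote DC as well.)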
> 
> My understanding of Local Quorum was that once the data was written to a 
> quorum of replicas in the local datacenter, the write would be acknowledged. 
> I also believed that this would keep the WAN latency of replicating to the 
> other DC out of the request path.
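> 
> (To make that concrete: in pycassa, for example, a LOCAL_QUORUM write 
> would be issued like this; an illustrative sketch only, not our actual 
> stress client, with keyspace and column family names made up:
> 
>     import pycassa
>     from pycassa import ConsistencyLevel
> 
>     # connect to a node in the local DC
>     pool = pycassa.ConnectionPool('stress_ks',
>                                   server_list=['192.168.2.1:9160'])
>     # writes return once a quorum of replicas in the local DC ack
>     cf = pycassa.ColumnFamily(pool, 'data',
>         write_consistency_level=ConsistencyLevel.LOCAL_QUORUM)
>     cf.insert('row1', {'col': 'val'})
> 
> The coordinator still ships the write to the remote replicas, but the 
> client shouldn't have to wait on them.)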
> 
> It's not just that the split cluster runs slower; it's also that there is 
> enormous variability across identical tests, sometimes by a factor of 2 or 
> more. It seems as though the WAN latency is not only hurting performance, 
> but also introducing wide variation in overall performance.
> 
> Should WAN latency be completely hidden with local quorum? Or are there 
> second-order issues involved that will impact performance?
> 
> I'm running in EC2 across the us-east/west regions. I already know how 
> unpredictable EC2 performance can be, but what I'm seeing here is far 
> beyond normal performance variability for EC2.
> 
> Is there something obvious that I'm missing that would explain why the 
> results are so different?
> 
> Here's the nodetool ring output when we run a 2x2 cluster:
> 
> Address        DC       Rack  Status  State   Load      Owns     Token
>                                                                  85070591730234615865843651857942052865
> 192.168.2.1    us-east  1b    Up      Normal  25.26 MB  50.00%   0
> 192.168.2.6    us-west  1c    Up      Normal  12.68 MB  0.00%    1
> 192.168.2.2    us-east  1b    Up      Normal  12.56 MB  50.00%   85070591730234615865843651857942052864
> 192.168.2.7    us-west  1c    Up      Normal  25.48 MB  0.00%    85070591730234615865843651857942052865
> 
> Thanks in advance.
> CM
