I would check the logs for dropped message warnings, and run a repair if you have 
not already. 
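
For example, something along these lines should show whether messages are being 
dropped (the log path assumes a default packaged install, adjust to your setup):

    grep -i "dropped" /var/log/cassandra/system.log
    nodetool -h <host> tpstats      # dropped message counts per verb are listed at the end of the output
    nodetool -h <host> repair -pr   # primary-range repair; run on each node in turn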

I would also look at the nodetool cfstats output on each node to check the row sizes. 
It may be the case that you have some very wide rows stored on nodes 
10.56.92.196, 10.28.91.8, 10.56.92.198 and 10.28.91.2. 
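
Something like this (run per node) will pull out the row size stats to compare; the 
"Compacted row maximum size" and "Compacted row mean size" lines are the ones to look 
at (exact labels may differ slightly between versions):

    nodetool -h <host> cfstats | grep -E "Column Family|Compacted row (maximum|mean) size"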

Hope that helps. 


-----------------
Aaron Morton
Freelance Cassandra Developer
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 11/12/2012, at 4:29 PM, Keith Wright <kwri...@nanigans.com> wrote:

> Hello,
> 
> I have base Cassandra 1.1.7 installed in two data centers with 3 nodes each 
> using a PropertyFileSnitch as outlined below. When I run a nodetool ring, I 
> see a very uneven load. Any idea what could be going on? I have not 
> added/removed any nodes or changed the replication scheme or counts.
> 
> Thanks!
> 
> Address       DC   Rack  Status  State   Load       Effective-Ownership  Token
>                                                                           113427455640312821154458202477256070485
> 10.56.92.194  WDC  RAC1  Up      Normal  53.65 GB   66.67%               0
> 10.28.91.10   SEA  RC1   Up      Normal  3.96 GB    66.67%               1
> 10.56.92.196  WDC  RAC1  Up      Normal  673.78 MB  66.67%               56713727820156410577229101238628035242
> 10.28.91.8    SEA  RC1   Up      Normal  670 MB     66.67%               56713727820156410577229101238628035243
> 10.56.92.198  WDC  RAC1  Up      Normal  746.25 MB  66.67%               113427455640312821154458202477256070484
> 10.28.91.2    SEA  RC1   Up      Normal  799.51 MB  66.67%               113427455640312821154458202477256070485
> 
> Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
> Durable Writes: true
> Options: [WDC:2, SEA:2]
> 
> Cluster Information:
> Snitch: org.apache.cassandra.locator.PropertyFileSnitch
> Partitioner: org.apache.cassandra.dht.RandomPartitioner
> 
> ##### WDC
> 10.56.92.194=WDC:RAC1
> 10.56.92.196=WDC:RAC1
> 10.56.92.198=WDC:RAC1
> 
> #### SEATTLE
> 10.28.91.10=SEA:RC1
> 10.28.91.8=SEA:RC1
> 10.28.91.2=SEA:RC1
> 
> # default for unknown nodes
> default=DALLAS:RAC1
> 
> 
