I am running repairs now. I checked CF stats, and the max and average row
sizes are very similar between the lowest and highest loaded nodes. One
thing I did notice is that nodetool netstats shows files for a column
family I dropped, named payload, and they never seem to go away:

   fbx: /cassandra/data/raw/fbx/payload/fbx-payload-hf-274-Data.db sections=1 progress=0/1407643274 - 0%
   fbx: /cassandra/data/raw/fbx/payload/fbx-payload-hf-189-Data.db sections=1 progress=0/1670618463 - 0%
   fbx: /cassandra/data/raw/fbx/payload/fbx-payload-hf-61-Data.db sections=1 progress=0/639618002 - 0%
   fbx: /cassandra/data/raw/fbx/payload/fbx-payload-hf-358-Data.db sections=1 progress=0/72483168 - 0%
   fbx: /cassandra/data/raw/fbx/payload/fbx-payload-hf-349-Data.db sections=1 progress=0/383034719 - 0%
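
In case it helps, here is roughly how I am checking what is actually left
on disk for the dropped CF on each node. The path is taken from the
netstats output above; adjust it to your data directory layout:

   # List any SSTable files still on disk for the dropped payload CF
   # (path from the netstats output above).
   ls -lh /cassandra/data/raw/fbx/payload/

   # See how much of the node's reported load these leftovers account for.
   du -sh /cassandra/data/raw/fbx/payload/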


Could those leftover files be causing the uneven load?
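
For reference, this is roughly how I compared row sizes across nodes. The
host list is just the six nodes from the ring output quoted below, and the
grep labels match the cfstats output on my 1.1 nodes:

   # Compare compacted row sizes per CF on every node.
   for h in 10.56.92.194 10.28.91.10 10.56.92.196 10.28.91.8 10.56.92.198 10.28.91.2; do
     echo "== $h =="
     nodetool -h $h cfstats | grep -E 'Column Family:|Compacted row (mean|maximum) size'
   done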

Thanks!


On 12/11/12 3:10 PM, "aaron morton" <aa...@thelastpickle.com> wrote:

>I would check the logs for Dropped Message alerts, and run repair if you
>have not. 
>
>I would also look at the nodetool CF stats on each node to check the row
>size. It may be the case that you have some very wide rows stored on
>nodes 10.56.92.196, 10.28.91.8, 10.56.92.198 and 10.28.91.2
>
>Hope that helps. 
>
>
>-----------------
>Aaron Morton
>Freelance Cassandra Developer
>New Zealand
>
>@aaronmorton
>http://www.thelastpickle.com
>
>On 11/12/2012, at 4:29 PM, Keith Wright <kwri...@nanigans.com> wrote:
>
>> Hello,
>> 
>> I have base Cassandra 1.1.7 installed in two data centers with 3 nodes
>>each using a PropertyFileSnitch as outlined below. When I run a nodetool
>>ring, I see a very uneven load. Any idea what could be going on? I
>>have not added/removed any nodes or changed the replication scheme or
>>counts.
>> 
>> Thanks!
>> 
>> Address       DC   Rack  Status  State   Load       Effective-Ownership  Token
>>                                                                          113427455640312821154458202477256070485
>> 10.56.92.194  WDC  RAC1  Up      Normal  53.65 GB   66.67%               0
>> 10.28.91.10   SEA  RC1   Up      Normal  3.96 GB    66.67%               1
>> 10.56.92.196  WDC  RAC1  Up      Normal  673.78 MB  66.67%               56713727820156410577229101238628035242
>> 10.28.91.8    SEA  RC1   Up      Normal  670 MB     66.67%               56713727820156410577229101238628035243
>> 10.56.92.198  WDC  RAC1  Up      Normal  746.25 MB  66.67%               113427455640312821154458202477256070484
>> 10.28.91.2    SEA  RC1   Up      Normal  799.51 MB  66.67%               113427455640312821154458202477256070485
>> 
>> Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
>> Durable Writes: true
>> Options: [WDC:2, SEA:2]
>> 
>> Cluster Information:
>> Snitch: org.apache.cassandra.locator.PropertyFileSnitch
>> Partitioner: org.apache.cassandra.dht.RandomPartitioner
>> 
>> ##### WDC
>> 10.56.92.194=WDC:RAC1
>> 10.56.92.196=WDC:RAC1
>> 10.56.92.198=WDC:RAC1
>> 
>> #### SEATTLE
>> 10.28.91.10=SEA:RC1
>> 10.28.91.8=SEA:RC1
>> 10.28.91.2=SEA:RC1
>> 
>> # default for unknown nodes
>> default=DALLAS:RAC1
>> 
>> 
>
