Turns out that this is due to a larger proportion of the wide rows in the
system being located on that node. I moved its token over a little to
compensate for it, but it doesn't seem to have helped at this point.

What's confusing about this is that RF=3 and no other node's load is growing
as quickly as that one.

- James

On Tue, Jun 22, 2010 at 1:31 PM, James Golick <jamesgol...@gmail.com> wrote:

> RackUnaware, currently
>
>
> On Tue, Jun 22, 2010 at 1:26 PM, Robert Coli <rc...@digg.com> wrote:
>
>> On 6/22/10 10:07 AM, James Golick wrote:
>>
>>> This node's load is now growing at a ridiculous rate. It is at 105GB,
>>> with the next most loaded node at 70.63GB.
>>>
>>> Given that RF=3, I would assume that the replicas' nodes would grow
>>> relatively quickly too?
>>>
>> What Replica Placement Strategy are you using (RackUnaware, RackAware,
>> etc.)? The current implementation of RackAware is pretty simple and relies
>> on careful placement of nodes in multiple DCs along the ring to avoid
>> hotspots.
>>
>> http://wiki.apache.org/cassandra/Operations#Replication
>> "
>> RackAwareStrategy: replica 2 is placed in the first node along the ring
>> that belongs in another data center than the first; the remaining N-2
>> replicas, if any, are placed on the first nodes along the ring in the same
>> rack as the first
>>
>> Note that with RackAwareStrategy, succeeding nodes along the ring should
>> alternate data centers to avoid hot spots. For instance, if you have nodes
>> A, B, C, and D in increasing Token order, and instead of alternating you
>> place A and B in DC1, and C and D in DC2, then nodes C and A will have
>> disproportionately more data on them because they will be the replica
>> destination for every Token range in the other data center.
>> "
>>
>> https://issues.apache.org/jira/browse/CASSANDRA-785
>>
>> is also related, and is marked Fix Version 0.8.
>>
>> =Rob
>>
>>
>

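To make the quoted hotspot example concrete, here is a minimal sketch (a simplified model, not Cassandra's actual code) of the RackAware second-replica rule: replica 2 goes to the first node along the ring that belongs to a different data center than the primary. With A and B in DC1 and C and D in DC2, C and A absorb all second replicas:

```python
# Simplified model (assumption: not Cassandra's implementation) of the
# RackAware second-replica rule: replica 2 is placed on the first node
# along the ring in a different data center than the primary.

RING = ["A", "B", "C", "D"]          # nodes in increasing Token order
DC = {"A": "DC1", "B": "DC1",        # non-alternating placement from the example
      "C": "DC2", "D": "DC2"}

def second_replica(primary):
    """First node after `primary` on the ring in another data center."""
    i = RING.index(primary)
    for step in range(1, len(RING)):
        candidate = RING[(i + step) % len(RING)]
        if DC[candidate] != DC[primary]:
            return candidate
    raise ValueError("no node in another data center")

# Count how many token ranges each node receives as replica 2
# (one token range per primary node).
counts = {n: 0 for n in RING}
for primary in RING:
    counts[second_replica(primary)] += 1

print(counts)  # {'A': 2, 'B': 0, 'C': 2, 'D': 0} -- C and A are hotspots
```

Alternating the data centers along the ring (A in DC1, B in DC2, C in DC1, D in DC2) spreads the second replicas evenly instead.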