The reported load includes duplicate data left over from compaction.
Run the 'cleanup' command with nodetool on those big nodes and you
should see the load drop to the actual usage.
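For reference, a minimal sketch of running cleanup against the two
largest nodes from the ring output below (host addresses taken from
that output; adjust the JMX port if yours differs from the default):

```shell
# Trigger cleanup on each oversized node; cleanup discards data the
# node is no longer responsible for, so the reported load shrinks.
nodetool -h 10.11.40.239 cleanup
nodetool -h 10.11.40.161 cleanup

# Afterwards, re-check the reported load per node:
nodetool -h 10.11.40.239 ring
```

Note that cleanup is I/O-intensive, so it is usually best run on one
node at a time rather than across the whole cluster at once.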

 - Garo

On Thu, Nov 4, 2010 at 11:08 AM, Mark Zitnik <mark.zit...@gmail.com> wrote:
> Hi All,
>
> I'm having a problem in spreading data across the cluster.
>
> My replication factor is 3. Please advise why there is such a big
> difference between 10.11.40.239 and 10.11.40.161.
>
> Thanks
>
> Address       Status     Load       Range                                      Ring
> 10.11.40.173  Up         220.58 MB  5488375922431206513329053672302981681      |<--|
> 10.11.40.248  Up         79.03 MB   21834979508285328193838231992126539188     |   ^
> 10.11.40.157  Up         79.48 MB   26827287656215870496305624629770425714     v   |
> 10.11.40.159  Up         70.28 MB   47864683020405054074376652399555662273     |   ^
> 10.11.40.160  Up         71.45 MB   52127237413510596835396555869135415868     v   |
> 10.11.40.171  Up         70.59 MB   58686864619218705531024294466626882735     |   ^
> 10.11.40.241  Up         126.81 MB  97920742222079324486086191292998090436     v   |
> 10.11.40.239  Up         434.14 MB  143147210934849238702354354634298482182    |   ^
> 10.11.40.161  Up         288.68 MB  155148009923170935299284525080785995290    |-->|
>
>
