I've got the same problem, and other people on the mailing list are
reporting the same issue.

I don't know what is happening here.

RF 2, 2 nodes:

10.59.21.241    eu-west     1b          Up     Normal  137.53 GB    50.00%    0
10.58.83.109    eu-west     1b          Up     Normal  102.46 GB    50.00%    85070591730234615865843651857942052864
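
For reference, the token placement itself looks correct: assuming RandomPartitioner (which the token range suggests), a balanced N-node ring puts node i at i * 2^127 / N, which for two nodes gives exactly 0 and 85070591730234615865843651857942052864, matching the ring above. So the 50.00% ownership split looks fine, and the difference in data size has to come from somewhere else (client distribution, compaction state, etc.). A minimal, purely illustrative sketch of that token calculation:

import java.math.BigInteger;

// Balanced RandomPartitioner tokens: token_i = i * 2^127 / N.
// Purely illustrative; it just reproduces the two tokens shown in the ring output above.
public class BalancedTokens {
    public static void main(String[] args) {
        int nodeCount = 2;
        BigInteger ringSize = BigInteger.valueOf(2).pow(127); // RandomPartitioner token space
        for (int i = 0; i < nodeCount; i++) {
            BigInteger token = ringSize.multiply(BigInteger.valueOf(i))
                                       .divide(BigInteger.valueOf(nodeCount));
            System.out.println("node " + i + " token: " + token);
        }
    }
}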

I have no idea how to fix it.

Alain

2012/10/17 Ben Kaehne <ben.kae...@sirca.org.au>

> Nothing unusual.
>
> All servers are exactly the same. Nothing unusual in the log files. Is
> there any level of logging that I should be turning on?
>
> Regards,
>
>
> On Wed, Oct 17, 2012 at 9:51 AM, Andrey Ilinykh <ailin...@gmail.com> wrote:
>
>> With your environment (3 nodes, RF=3) it is very difficult to get
>> uneven load: each node receives the same number of read/write
>> requests. Probably something is wrong at a lower level, in the OS or
>> the VM. Do you see anything unusual in the log files?
>>
>> Andrey
>>
>> On Tue, Oct 16, 2012 at 3:40 PM, Ben Kaehne <ben.kae...@sirca.org.au> wrote:
>> > Not connecting to the same node every time. Using Hector to ensure an
>> > even distribution of connections across the cluster.
>> >
>> > Regards,
>> >
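
For context, spreading connections with Hector usually goes through the host configurator's load-balancing policy. The sketch below is only an assumption of how such a setup might look; the host list, cluster and keyspace names are placeholders, and the Hector class names are from memory and should be checked against the version actually in use.

// Hypothetical Hector setup, not taken from this thread.
// Class and package names are assumptions; verify against your Hector version.
import me.prettyprint.cassandra.connection.RoundRobinBalancingPolicy;
import me.prettyprint.cassandra.service.CassandraHostConfigurator;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;

public class HectorConnectionSketch {
    public static void main(String[] args) {
        // List every node so the client pool can reach all of them (placeholder hosts).
        CassandraHostConfigurator hosts =
                new CassandraHostConfigurator("host1:9160,host2:9160,host3:9160");
        // Rotate requests across hosts instead of pinning them to a single node.
        hosts.setLoadBalancingPolicy(new RoundRobinBalancingPolicy());

        Cluster cluster = HFactory.getOrCreateCluster("TestCluster", hosts);
        Keyspace keyspace = HFactory.createKeyspace("MyKeyspace", cluster);
        // Reads and writes issued through this keyspace are then spread
        // across the configured hosts by the connection pool.
    }
}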
>> > On Sat, Oct 13, 2012 at 4:15 AM, B. Todd Burruss <bto...@gmail.com> wrote:
>> >>
>> >> are you connecting to the same node every time?  if so, spread out
>> >> your connections across the ring
>> >>
>> >> On Fri, Oct 12, 2012 at 1:22 AM, Alexey Zotov <azo...@griddynamics.com> wrote:
>> >> > Hi Ben,
>> >> >
>> >> > I suggest you compare the number of queries each node receives; maybe
>> >> > the problem is on the client side.
>> >> > You can do that using JMX:
>> >> > "org.apache.cassandra.db:type=ColumnFamilies,keyspace=<YOUR KEYSPACE>,columnfamily=<YOUR CF>","ReadCount"
>> >> > "org.apache.cassandra.db:type=ColumnFamilies,keyspace=<YOUR KEYSPACE>,columnfamily=<YOUR CF>","WriteCount"
>> >> >
>> >> > I also suggest checking the output of "nodetool compactionstats".
>> >> >
>> >> > --
>> >> > Alexey
>> >> >
>> >> >
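
A minimal sketch of pulling those two counters remotely with the standard javax.management client API follows; the node address, the keyspace and column family names are placeholders to adapt, 7199 is just Cassandra's usual default JMX port, and the idea is simply to run it against each node and compare the numbers.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Reads the per-CF ReadCount/WriteCount attributes suggested above.
// Host, keyspace and column family are placeholders; 7199 is the usual Cassandra JMX port.
public class CfRequestCounts {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "127.0.0.1";
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName cf = new ObjectName(
                    "org.apache.cassandra.db:type=ColumnFamilies,keyspace=MyKeyspace,columnfamily=MyCf");
            System.out.println(host + " ReadCount:  " + mbs.getAttribute(cf, "ReadCount"));
            System.out.println(host + " WriteCount: " + mbs.getAttribute(cf, "WriteCount"));
        } finally {
            connector.close();
        }
    }
}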
>> >
>> >
>> >
>> >
>> > --
>> > -Ben
>>
>
>
>
> --
> -Ben
>
