If you're using the Java driver with LOCAL_ONE and the default load
balancing strategy (TokenAware wrapped around DCAwareRoundRobin), the
driver will always select the primary replica. To change this behavior and
introduce some randomness so that non-primary replicas get a chance to
serve a read:

new TokenAwarePolicy(new DCAwareRoundRobinPolicy("local_DC"), true)

The second parameter (true) asks the TokenAware policy to shuffle replicas
on each request, to avoid always returning the primary replica.
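
For reference, a minimal sketch of wiring that policy into the driver,
assuming the 2.x DataStax Java driver API ("local_DC" and the contact point
are placeholders for your own DC name and node address):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.QueryOptions;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
    import com.datastax.driver.core.policies.TokenAwarePolicy;

    public class ShuffledReplicaReads {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder()
                    .addContactPoint("10.0.0.1")
                    // read/write at LOCAL_ONE, as in this thread
                    .withQueryOptions(new QueryOptions()
                            .setConsistencyLevel(ConsistencyLevel.LOCAL_ONE))
                    // shuffleReplicas = true: pick a random replica per request
                    // instead of always routing to the primary replica
                    .withLoadBalancingPolicy(new TokenAwarePolicy(
                            new DCAwareRoundRobinPolicy("local_DC"), true))
                    .build();
            Session session = cluster.connect();
            // ... issue reads as usual; they should now spread across replicas ...
            session.close();
            cluster.close();
        }
    }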

On Wed, Dec 2, 2015 at 6:44 PM, Walsh, Stephen <stephen.wa...@aspect.com>
wrote:

> Very good questions.
>
>
>
> We have reads and writes at LOCAL_ONE.
>
> There are 2 applications (1 for each DC) that read and write at the same
> rate to their local DC
>
> (All reads / writes started all perfectly even and degraded over time)
>
>
>
> We use DCAwareRoundRobin policy
>
>
>
> An update on the nodetool cleanup – it has helped but hasn’t balanced all
> the nodes. Node 1 on DC2 is still quite high
>
>
>
> Node 1 (DC1)  =  1.35k    (seeder)
>
> Node 2 (DC1)  =  1.54k
>
> Node 3 (DC1)  =  1.45k
>
>
>
> Node 1 (DC2)  =  2.06k   (seeder)
>
> Node 2 (DC2)  =  1.38k
>
> Node 3 (DC2)  =  1.43k
>
>
>
>
>
> *From:* DuyHai Doan [mailto:doanduy...@gmail.com]
> *Sent:* 02 December 2015 14:22
> *To:* user@cassandra.apache.org
> *Subject:* Re: cassandra reads are unbalanced
>
>
>
> Which consistency level do you use for reads? ONE? Are you reading from
> only DC1 or from both DCs?
>
> What is the LoadBalancingStrategy you have configured for your driver?
> TokenAware wrapped around DCAwareRoundRobin?
>
> On Wed, Dec 2, 2015 at 3:36 PM, Walsh, Stephen <stephen.wa...@aspect.com>
> wrote:
>
> Hey all,
>
>
>
> Thanks for taking the time to help.
>
>
>
> So we have 6 Cassandra nodes in 2 Data Centers.
>
> Both Data Centers have a replication factor of 3 – so all nodes have all the data.
>
>
>
> Over the last 2 days we’ve noticed that data reads / writes have shifted
> from balanced to unbalanced
>
> (Nodetool status still shows 100% ownership on every node, with similar
> sizes)
>
>
>
>
>
> For Example
>
>
>
> We monitor the number of reads / writes of every table via the Cassandra
> JMX metrics (cassandra.db.read_count).
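>
> (A rough sketch of pulling that per-table read count over JMX, in case it
> helps anyone reproduce the numbers. The MBean coordinates assume Cassandra
> 2.x, and the host, keyspace and table names are placeholders.)
>
>     import javax.management.MBeanServerConnection;
>     import javax.management.ObjectName;
>     import javax.management.remote.JMXConnector;
>     import javax.management.remote.JMXConnectorFactory;
>     import javax.management.remote.JMXServiceURL;
>
>     public class TableReadCount {
>         public static void main(String[] args) throws Exception {
>             // Cassandra's default JMX port is 7199; "node1" is a placeholder host
>             JMXServiceURL url = new JMXServiceURL(
>                     "service:jmx:rmi:///jndi/rmi://node1:7199/jmxrmi");
>             JMXConnector connector = JMXConnectorFactory.connect(url);
>             try {
>                 MBeanServerConnection mbs = connector.getMBeanServerConnection();
>                 // The per-table ReadLatency timer; its Count attribute is the
>                 // number of local reads this node has served for the table
>                 ObjectName mbean = new ObjectName(
>                         "org.apache.cassandra.metrics:type=ColumnFamily,"
>                                 + "keyspace=my_ks,scope=my_table,name=ReadLatency");
>                 System.out.println("Read count: " + mbs.getAttribute(mbean, "Count"));
>             } finally {
>                 connector.close();
>             }
>         }
>     }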
>
> Over the last hour of this run:
>
>
>
> Reads
>
> Node 1 (DC1)  =  1.79k    (seeder)
>
> Node 2 (DC1)  =  1.92k
>
> Node 3 (DC1)  =  1.97k
>
>
>
> Node 1 (DC2)  =  2.90k   (seeder)
>
> Node 2 (DC2)  =  1.76k
>
> Node 3 (DC2)  =  1.19k
>
>
>
> As you can see, on DC1 everything is pretty well balanced, but on DC2 the
> reads favour Node 1 over Node 3.
>
> I ran a nodetool repair yesterday – it ran for 6 hours and, when it
> completed, didn’t change the read balance.
>
>
>
> Write levels show a similar skew on DC2, but not as bad as the reads.
>
>
>
> Does anyone have any suggestions on how to rebalance? I’m thinking of maybe
> running a nodetool cleanup in case some of the keys have shifted?
>
>
>
> Regards
>
> Stephen Walsh
>
>
>
>
>
> This email (including any attachments) is proprietary to Aspect Software,
> Inc. and may contain information that is confidential. If you have received
> this message in error, please do not read, copy or forward this message.
> Please notify the sender immediately, delete it from your system and
> destroy any copies. You may not further disclose or distribute this email
> or its attachments.
>
