Hi Steve,

> As such, all keyspaces and tables were created on DC1.
> The effect of this is that all reads are now going to DC1 and ignoring DC2
>

I think this is not exactly true. Tables are created in a specific
keyspace, and no matter which node you send the schema change to, the
schema will propagate to all the datacenters the keyspace is replicated to.

So the question is: Is your keyspace using 'DC1: 3, DC2: 3' as replication
factors? Could you show us the schema and the output of 'nodetool status'
as well?
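For reference, a keyspace replicated to both DCs would look something like
this (the keyspace and DC names below are placeholders, adjust them to
yours):

```cql
-- NetworkTopologyStrategy with an RF per datacenter; 'DC1'/'DC2' must
-- match the DC names reported by 'nodetool status'.
ALTER KEYSPACE my_keyspace WITH replication = {
  'class': 'NetworkTopologyStrategy',
  'DC1': 3,
  'DC2': 3
};
```

If the keyspace was created replicating to DC1 only, then after altering it
this way the DC2 nodes still need the data streamed to them, for example
with 'nodetool rebuild DC1' run on each DC2 node, or a full repair.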

> We’ve tried doing nodetool repair / cleanup, but the reads always go to
> DC1


Trying to do random things is often not a good idea. For example, as each
node holds 100% of the data, cleanup is an expensive no-op :-).

> Anyone know how to rebalance the tokens over DC’s?


Yes, I can help on that, but I need to know your current status.

Basically, your keyspace(s) must be using an RF of 3 on the 2 DCs as
mentioned, and your clients must be configured to stick to the DC in their
zone (use a DCAware policy with the local DC name + LOCAL_ONE/LOCAL_QUORUM,
see Bhuvan's links), and things should be better.
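For example, with the Python driver (the one Bhuvan's first link documents),
the application in each DC would be configured along these lines. This is
only a sketch: the contact points, DC name, keyspace, and table are
placeholders for your own.

```python
# Sketch for the application running in DC2; the instance in DC1 would use
# local_dc='DC1' and DC1 contact points. All names/addresses are placeholders.
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.policies import DCAwareRoundRobinPolicy, TokenAwarePolicy
from cassandra.query import SimpleStatement

cluster = Cluster(
    contact_points=['10.0.2.1', '10.0.2.2', '10.0.2.3'],  # DC2 nodes
    # Pin the client to its local DC; TokenAwarePolicy routes each request
    # to a replica for the partition key when possible.
    load_balancing_policy=TokenAwarePolicy(
        DCAwareRoundRobinPolicy(local_dc='DC2')),
)
session = cluster.connect('my_keyspace')

# LOCAL_QUORUM (or LOCAL_ONE) keeps the read within the local DC instead
# of involving remote-DC replicas.
query = SimpleStatement(
    "SELECT * FROM my_table WHERE id = %s",
    consistency_level=ConsistencyLevel.LOCAL_QUORUM)
rows = session.execute(query, (42,))
```

With this in place, each application reads from and writes to its own DC,
and cross-DC traffic is limited to replication.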

If you need more detailed help, let us know what is unclear to you and
provide us with 'nodetool status' output and with your schema (at least
keyspaces config).

C*heers,
-----------------------
Alain Rodriguez - al...@thelastpickle.com
France

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com







2016-04-13 15:32 GMT+02:00 Bhuvan Rawal <bhu1ra...@gmail.com>:

> This could be because of the way you have configured the policy, have a
> look at the below links for configuring the policy
>
> https://datastax.github.io/python-driver/api/cassandra/policies.html
>
>
> http://stackoverflow.com/questions/22813045/ability-to-write-to-a-particular-cassandra-node
>
> Regards,
> Bhuvan
>
> On Wed, Apr 13, 2016 at 6:54 PM, Walsh, Stephen <stephen.wa...@aspect.com>
> wrote:
>
>> Hi there,
>>
>> So we have 2 datacenter with 3 nodes each.
>> Replication factor is 3 per DC (so each node has all data)
>>
>> We have an application in each DC that writes that Cassandra DC.
>>
>> Now, due to a misconfiguration in our application, we saw that our
>> applications in both DC’s were pointing to DC1.
>>
>> As such, all keyspaces and tables were created on DC1.
>> The effect of this is that all reads are now going to DC1 and ignoring DC2
>>
>> We’ve tried doing nodetool repair / cleanup, but the reads always go
>> to DC1.
>>
>> Anyone know how to rebalance the tokens over DC’s?
>>
>>
>> Regards
>> Steve
>>
>>
>> P.S I know about this article
>> http://www.datastax.com/dev/blog/balancing-your-cassandra-cluster
>> But it doesn’t answer my question regarding 2 DC’s token balancing
>>
>> This email (including any attachments) is proprietary to Aspect Software,
>> Inc. and may contain information that is confidential. If you have received
>> this message in error, please do not read, copy or forward this message.
>> Please notify the sender immediately, delete it from your system and
>> destroy any copies. You may not further disclose or distribute this email
>> or its attachments.
>>
>
>
