Hello,
I have a multi-region cluster with 3 nodes in each data center (EC2 us-east-1
and us-west-2). Prior to upgrading from 1.2.6 to 2.0.2, the owns % of each
node was 100%, which made sense because I have a replication factor of 3 in
each data center. After upgrading to 2.0.2, each node now claims to own only
about 17% of the data.
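
For reference, the keyspace replication is set up along these lines (the
keyspace name here is a placeholder; NetworkTopologyStrategy with the two EC2
data centers shown in the nodetool output below):

CREATE KEYSPACE my_keyspace
  WITH replication = {'class': 'NetworkTopologyStrategy',
                      'us-east-1': 3, 'us-west-2': 3};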


:~$ nodetool status
Datacenter: us-west-2
=====================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns   Host ID                               Rack
UN  10.198.20.51   958.16 KB  256     16.9%  6a40b500-cff4-4513-b26b-ea33048c1590  usw2c
UN  10.198.18.125  776 KB     256     17.0%  aa746ed1-288c-414f-8d97-65fc867a5bdd  usw2b
UN  10.198.16.92   1.39 MB    256     16.4%  01989d0b-0f81-411b-a70e-f22f01189542  usw2a
Datacenter: us-east-1
=====================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns   Host ID                               Rack
UN  10.198.0.249   1.11 MB    256     16.3%  22b30bea-5643-43b5-8d98-6e0eafe4af75  use1b
UN  10.198.4.80    1.22 MB    256     16.4%  e31aecd5-1eb1-4ddb-85ac-7a4135618b66  use1d
UN  10.198.2.20    137.27 MB  256     17.0%  3253080f-09b6-47a6-9b66-da3d174d1101  use1c

I checked some of the data in one column family on each of the nodes, and the
counts now differ across nodes. I'm running a nodetool repair, but it has been
going for about 6 hours.
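
This is roughly how I'm comparing counts and kicking off the repair; the
keyspace and column family names below are placeholders for the real ones
(cqlsh's default consistency is ONE, so each count reflects whichever replica
answers):

:~$ cqlsh 10.198.20.51
cqlsh> SELECT COUNT(*) FROM my_keyspace.my_cf LIMIT 10000000;
(repeated against each node address from the status output above)

:~$ nodetool repair my_keyspace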

So a couple of questions:
1. Any idea why the owns % would have changed from 100% to ~17% per node after the upgrade?
2. Is there anything else I can do to get the data back in sync across the nodes, other than nodetool repair?

thanks,
Rob
