This issue is resolved. I don't know the exact root cause, though. I re-imaged the server that was claiming less token ownership and redid its configuration through Chef.
Thanks,
Rameez

On Sat, May 17, 2014 at 1:06 AM, Rameez Thonnakkal <[email protected]> wrote:
> Hello,
>
> I have a 4-node cluster where 2 nodes are in one data center and
> another 2 in a different one.
>
> In the first data center the token ownership is not equally
> distributed. I am using the vnode feature.
>
> num_tokens is set to 256 on all nodes.
> initial_token is left blank.
>
> Datacenter: DC1
> ================
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address        Load       Tokens  Owns   Host ID                               Rack
> UN  10.145.84.167  84.58 MB   256     0.4%   ce5ddceb-b1d4-47ac-8d85-249aa7c5e971  RAC1
> UN  10.145.84.166  692.69 MB  255     44.2%  e6b5a0fd-20b7-4bf9-9a8e-715cfc823be6  RAC1
>
> Datacenter: DC2
> ================
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address        Load       Tokens  Owns   Host ID                               Rack
> UN  10.168.67.43   476 MB     256     27.8%  05dc7ea6-0328-43b8-8b70-bcea856ba41e  RAC1
> UN  10.168.67.42   413.15 MB  256     27.7%  677025f0-780c-45dc-bb3b-17ad260fba7d  RAC1
>
> I have run nodetool repair a couple of times, but it didn't help.
>
> On the node with the low ownership, I have also seen frequent full GCs
> and had to restart Cassandra a couple of times.
>
> Any suggestions on how to resolve this are highly appreciated.
>
> Regards,
> Rameez
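For context on why a healthy vnode setup should not look like the DC1 numbers above: with num_tokens: 256, each node places 256 randomly chosen tokens on the ring, and each token owns the ring segment back to the previous token, so ownership should come out close to equal. The toy simulation below (my own sketch, not Cassandra's actual code; node names are made up) illustrates that property:

```python
import random

RING_SIZE = 2 ** 64  # approximate size of the Murmur3 token space


def simulate_ownership(nodes, tokens_per_node, seed=0):
    """Assign random vnode tokens and compute each node's share of the ring.

    Each token owns the ring segment from the previous token (exclusive)
    up to itself (inclusive), with wrap-around at the ends of the ring.
    """
    rng = random.Random(seed)
    tokens = sorted(
        (rng.randrange(RING_SIZE), node)
        for node in nodes
        for _ in range(tokens_per_node)
    )
    owns = {node: 0 for node in nodes}
    prev = tokens[-1][0] - RING_SIZE  # wrap-around segment for the first token
    for tok, node in tokens:
        owns[node] += tok - prev
        prev = tok
    return {node: owned / RING_SIZE for node, owned in owns.items()}


shares = simulate_ownership(["node1", "node2"], tokens_per_node=256)
# With 256 random tokens per node, each share lands close to 0.5.
print(shares)
```

A 0.4% vs 44.2% split between two nodes in the same DC, as in the thread, is far outside what random assignment produces, which points at a node-level problem (here fixed by re-imaging) rather than normal vnode variance.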
