I do not recall what the "50" means, but IIRC, the 1364152145790 is the unix 
timestamp (in millisecs rather than secs) of the expire time when they _should_ 
go away completely.

perl -e 'print scalar(gmtime(1364152145))'
Sun Mar 24 19:09:05 2013
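(The same conversion works directly on the millisecond value from gossipinfo if 
you divide out the last three digits first; a one-liner sketch along the same 
lines as above:)

perl -e 'print scalar(gmtime(int(1364152145790 / 1000)))'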

From: Ben Chobot <be...@instructure.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Thursday, March 21, 2013 3:21 PM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Re: removing old nodes
Subject: Re: removing old nodes

Ah, well I'll check back in a week then. But for the record, what I meant was 
that nodetool gossipinfo now has entries like:

/10.1.20.201
  STATUS:LEFT,50,1364152145790

The "50" appears where the token used to be, and where it still appears for all 
my live nodes. So it looks to me as if all my assassinated nodes now have a 
token of 50. Either way, they don't seem to be bugging the rest of the cluster 
anymore, so thanks again.

On Mar 21, 2013, at 3:05 PM, Alain RODRIGUEZ wrote:

"(And now all sharing token 50? I dunno where that came from.)"

Not sure about what you mean.

"nodetool gossipinfo still shows all the old nodes there"

They should appear with a "left" or "removed" status. Off the top of my head, 
this information remains for 7 days, but I'm not sure about that.
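(For what it's worth, something like this should list the departed endpoints 
and their status lines; the exact gossipinfo output format may differ between 
versions:)

nodetool -h localhost gossipinfo | grep -B 1 'STATUS:LEFT'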




2013/3/21 Ben Chobot <be...@instructure.com>
Thanks Alain, this seems to have stopped the log messages, even though nodetool 
gossipinfo still shows all the old nodes there. (And now all sharing token 50? 
I dunno where that came from.) Will they eventually fall away from the cluster, 
or are they there for good?

On Mar 21, 2013, at 11:53 AM, Alain RODRIGUEZ wrote:

Using the unsafeAssassinateEndpoint function with old IPs from JMX should do 
the trick.
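For example, with a generic JMX client such as jmxterm, the call looks roughly 
like this (the jar name, the default JMX port 7199, and the example IP are just 
placeholders, and the Gossiper bean name may vary slightly between versions):

java -jar jmxterm-1.0-alpha-4-uber.jar
$> open localhost:7199
$> bean org.apache.cassandra.net:type=Gossiper
$> run unsafeAssassinateEndpoint 10.1.20.201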

This has already been discussed on this mailing list; search using 
"unsafeAssassinateEndpoint" as a keyword to find everything you need to know 
about it.

Hope you'll be ok after that.

Alain


2013/3/21 Ben Chobot <be...@instructure.com>
I've got a 1.1.5 cluster, and a few weeks ago I removed some nodes from it. (I 
was trying to upgrade nodes from AWS' large to xlarge, and for reasons that 
made sense at the time, it seemed better to double my nodes and then 
decommission the smaller ones, rather than simply rebuild the existing nodes 
serially.)

Now the remaining nodes are all frequently logging that the old, decommissioned 
nodes are dead and that their old token is being removed... which is great, I 
guess, but why does my cluster know about them at all? Doing a nodetool 
removetoken doesn't work, as the dead nodes don't show up in the ring. Is this 
expected behavior after a nodetool decommission? Is there maybe something 
cached that I can safely uncache?
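(For reference, roughly the commands in question; the host flag and the 
<token_of_dead_node> placeholder are just for illustration. removetoken needs 
the dead node's token from the ring output, which is why it can't help when the 
node no longer shows up there:)

nodetool -h localhost ring
nodetool -h localhost removetoken <token_of_dead_node>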



