We're seeing very strange behaviour after decommissioning a node: when
requesting a get_range_slices with a KeyRange by token, we are getting back
tokens that are out of range.
As a result, ColumnFamilyRecordReader gets confused, since it uses the last
token from the result set as the start token of the next batch.  We now have
never-ending Hadoop tasks (well, we kill the job once its progress goes well
past 100%).  Has anyone else seen this?
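To make the failure mode concrete, here is a minimal, hypothetical Java sketch of the paging rule described above (it does not use the real Thrift/Cassandra API; `fetchBatch` is a stand-in for one get_range_slices call). The reader takes the last token of each batch as the start of the next request, so a single stray out-of-range token derails the cursor and the scan never terminates:

```java
import java.util.List;

public class TokenPagingDemo {
    // Stand-in for one get_range_slices batch over (start, end].
    // A "buggy" replica also leaks a token outside the requested range,
    // appended last (as we appear to observe after the decommission).
    public static List<Long> fetchBatch(long start, long end, boolean buggy) {
        long next = start + 1;
        if (next > end) return List.of();           // range exhausted
        if (buggy) return List.of(next, start - 10); // stray out-of-range token
        return List.of(next);
    }

    // Mimics ColumnFamilyRecordReader's rule: last token of each batch
    // becomes the start token of the next batch. Capped so the demo halts.
    public static int pagesUsed(long start, long end, boolean buggy, int cap) {
        int pages = 0;
        while (pages < cap) {
            List<Long> batch = fetchBatch(start, end, buggy);
            if (batch.isEmpty()) break;
            pages++;
            start = batch.get(batch.size() - 1);
        }
        return pages;
    }
}
```

With well-behaved batches the scan finishes in a bounded number of pages; with the stray token the cursor keeps moving backwards out of the range and only the external cap stops it, which matches a Hadoop task running past 100%.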