We're seeing very strange behaviour after decommissioning a node: when issuing a get_range_slices request with a KeyRange specified by token, we get back tokens that are out of the requested range. As a result, ColumnFamilyRecordReader gets confused, since it uses the last token in the result set as the start token of the next batch. We now have never-ending Hadoop tasks (well, we kill the job once it goes well past 100%). Has anyone else seen this?
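To make the failure mode concrete, here is a minimal sketch (not Cassandra code; all names such as `pageToEnd` and the toy "servers" are hypothetical) of the paging scheme described above: the client repeatedly fetches a batch and uses the last returned token as the start of the next batch. If the server ever hands back a token below the current start token, the scan jumps backwards and never terminates, which matches the runaway-task symptom.

```java
import java.util.*;
import java.util.function.LongFunction;

// Hypothetical simulation of ColumnFamilyRecordReader-style token paging.
// This is an illustration of the described behaviour, not the real API.
public class TokenPagingSketch {
    static final long RING_END = 100;

    // Well-behaved "server": returns up to three tokens, strictly
    // increasing, all within (start, RING_END].
    static List<Long> correctServer(long start) {
        List<Long> out = new ArrayList<>();
        for (long t = start + 10; t <= Math.min(start + 30, RING_END); t += 10) {
            out.add(t);
        }
        return out;
    }

    // Buggy "server": past token 50 it returns a token (5) that is out of
    // the requested range, below the start token, as in the report.
    static List<Long> buggyServer(long start) {
        if (start >= 50) {
            return Collections.singletonList(5L);
        }
        return correctServer(start);
    }

    // Pages until a batch comes back empty, using the last returned token
    // as the next start token. Returns the number of batches fetched,
    // or -1 if the scan never terminates (loop detected via a cap).
    static int pageToEnd(LongFunction<List<Long>> server) {
        long start = 0;
        for (int batches = 0; batches < 1000; batches++) {
            List<Long> page = server.apply(start);
            if (page.isEmpty()) {
                return batches;
            }
            start = page.get(page.size() - 1); // start of the next batch
        }
        return -1; // never converged: the Hadoop task runs past 100%
    }

    public static void main(String[] args) {
        if (pageToEnd(TokenPagingSketch::correctServer) < 0) {
            throw new AssertionError("correct server should terminate");
        }
        if (pageToEnd(TokenPagingSketch::buggyServer) != -1) {
            throw new AssertionError("out-of-range token should cause a loop");
        }
        System.out.println("ok");
    }
}
```

With in-range tokens the scan finishes in a handful of batches; one out-of-range token is enough to pull the start token backwards forever.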
- get_range_slices confused about token ranges after decommi... Joost Ouwerkerk