The fix is simply to switch to the RandomPartitioner.
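For anyone hitting this later: the partitioner is a cluster-wide setting in cassandra.yaml. A minimal sketch of the change (note the caveat: the partitioner cannot be changed on a cluster that already holds data, so this is only practical on a fresh cluster or after reloading everything):

```yaml
# cassandra.yaml -- revert from the 1.2 default (Murmur3Partitioner)
# to RandomPartitioner so Hadoop token-range splits sort correctly.
# WARNING: changing the partitioner invalidates existing data; do this
# on a fresh cluster or reload your data afterwards.
partitioner: org.apache.cassandra.dht.RandomPartitioner
```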

On Wednesday, January 30, 2013, Edward Capriolo <edlinuxg...@gmail.com> wrote:
> This was unexpected fallout from the change to the Murmur3 partitioner. A jira is open, but if you need map-reduce, Murmur3 is currently out of the question.
>
> On Wednesday, January 30, 2013, Tejas Patil <tejas.patil...@gmail.com> wrote:
>> While reading data from Cassandra in map-reduce, I am getting "InvalidRequestException(why:Start token sorts after end token)".
>> Below is the code snippet that I used and the entire stack trace.
>> (I am using Cassandra 1.2.0 and Hadoop 0.20.2.)
>> Can you point out the issue here?
>> Code snippet:
>>     SlicePredicate predicate = new SlicePredicate();
>>     SliceRange sliceRange = new SliceRange();
>>     sliceRange.start = ByteBuffer.wrap("1".getBytes());
>>     sliceRange.finish = ByteBuffer.wrap("1000000".getBytes());
>>     sliceRange.reversed = false;
>>     // predicate.slice_range = sliceRange;
>>     List<ByteBuffer> colNames = new ArrayList<ByteBuffer>();
>>     colNames.add(ByteBuffer.wrap("url".getBytes()));
>>     colNames.add(ByteBuffer.wrap("Parent".getBytes()));
>>     predicate.column_names = colNames;
>>     ConfigHelper.setInputSlicePredicate(job.getConfiguration(), predicate);
>> Full stack trace:
>> java.lang.RuntimeException: InvalidRequestException(why:Start token sorts after end token)
>> at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:384)
>> at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:390)
>> at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:313)
>> at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>> at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>> at org.apache.cassandra.hadoop.ColumnFamilyRecordReader.nextKeyValue(ColumnFamilyRecordReader.java:184)
>> at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:456)
>> at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
>> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
>> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:
>>
