Hello,

We recently upgraded from Cassandra 1.1.5 to Cassandra 1.2.3.  We ran 
upgradesstables, got the ring back on its feet, and are now seeing a new issue.

When we run MapReduce jobs against practically any table, we see errors like 
the following:

2013-04-09 09:58:47,746 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2013-04-09 09:58:47,899 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId=
2013-04-09 09:58:48,021 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
2013-04-09 09:58:48,024 INFO org.apache.hadoop.mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@4a48edb5
2013-04-09 09:58:50,475 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2013-04-09 09:58:50,477 WARN org.apache.hadoop.mapred.Child: Error running child
java.lang.RuntimeException: org.apache.thrift.TException: Message length exceeded: 106
        at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:384)
        at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:390)
        at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:313)
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
        at org.apache.cassandra.hadoop.ColumnFamilyRecordReader.getProgress(ColumnFamilyRecordReader.java:103)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.getProgress(MapTask.java:444)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:460)
        at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:647)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
        at org.apache.hadoop.mapred.Child.main(Child.java:260)
Caused by: org.apache.thrift.TException: Message length exceeded: 106
        at org.apache.thrift.protocol.TBinaryProtocol.checkReadLength(TBinaryProtocol.java:393)
        at org.apache.thrift.protocol.TBinaryProtocol.readBinary(TBinaryProtocol.java:363)
        at org.apache.cassandra.thrift.Column.read(Column.java:528)
        at org.apache.cassandra.thrift.ColumnOrSuperColumn.read(ColumnOrSuperColumn.java:507)
        at org.apache.cassandra.thrift.KeySlice.read(KeySlice.java:408)
        at org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12905)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
        at org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:734)
        at org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:718)
        at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:346)
        ... 16 more
2013-04-09 09:58:50,481 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task

The message length reported by each failed task differs (it is not always 
106).  Jobs that used to run fine now fail when compiled against Cassandra 
1.2.3, yet the same jobs work fine when compiled against 1.1.5 and run against 
the 1.2.3 servers in production.  I'm using the following setup to configure 
the job:

  def cassConfig(job: Job) {
    val conf = job.getConfiguration()

    // Plain Thrift connection settings.
    ConfigHelper.setInputRpcPort(conf, "9160")
    ConfigHelper.setInputInitialAddress(conf, Config.hostip)

    ConfigHelper.setInputPartitioner(conf, "org.apache.cassandra.dht.RandomPartitioner")
    ConfigHelper.setInputColumnFamily(conf, Config.keyspace, Config.cfname)

    // Slice over all columns in each row: empty start/finish, forward
    // order, with a deliberately generous per-row column cap.
    val pred = {
      val range = new SliceRange()
        .setStart("".getBytes("UTF-8"))
        .setFinish("".getBytes("UTF-8"))
        .setReversed(false)
        .setCount(4096 * 1000)

      new SlicePredicate().setSlice_range(range)
    }

    ConfigHelper.setInputSlicePredicate(conf, pred)
  }

The job consists only of a mapper that increments counters for each row and 
its associated columns, so all I'm really exercising is 
ColumnFamilyRecordReader.  A trimmed sketch of the mapper follows.
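
(The class and counter names below are made up for illustration; the real code 
just does the same counting.  I'm assuming the 1.2.x ColumnFamilyRecordReader 
still hands the mapper one SortedMap[ByteBuffer, IColumn] per row.)

  import java.nio.ByteBuffer
  import java.util.SortedMap

  import org.apache.cassandra.db.IColumn
  import org.apache.hadoop.io.NullWritable
  import org.apache.hadoop.mapreduce.Mapper

  // Sketch of the mapper: it emits nothing and only bumps counters, so
  // the only moving part in the job is ColumnFamilyRecordReader itself.
  class RowCountingMapper
      extends Mapper[ByteBuffer, SortedMap[ByteBuffer, IColumn], NullWritable, NullWritable] {

    type Ctx =
      Mapper[ByteBuffer, SortedMap[ByteBuffer, IColumn], NullWritable, NullWritable]#Context

    override def map(key: ByteBuffer, columns: SortedMap[ByteBuffer, IColumn], context: Ctx) {
      context.getCounter("cassandra", "rows").increment(1L)
      context.getCounter("cassandra", "columns").increment(columns.size.toLong)
    }
  }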

Has anyone else seen this?  Is there a workaround or fix that will get our 
jobs running again?
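
One workaround I'm planning to try, in case it helps anyone else: raising the 
Thrift size limits that ConfigHelper exposes and shrinking the range batches. 
This is untested, and it assumes the setThriftFramedTransportSizeInMb / 
setThriftMaxMessageLengthInMb setters are still present in 1.2.3 and that the 
client actually enforces them; the 64 MB figure is arbitrary.

  // Untested guess at a workaround, added to cassConfig above.
  // Assumes these setters still exist in the 1.2.3 ConfigHelper.
  ConfigHelper.setThriftFramedTransportSizeInMb(conf, 64)
  ConfigHelper.setThriftMaxMessageLengthInMb(conf, 64)

  // Fetch fewer rows per get_range_slices call (I believe the default
  // batch size is 4096).
  ConfigHelper.setRangeBatchSize(conf, 1024)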

Thanks
