Hey Aditya,

Would you mind attaching the last hundred or so lines of the server log 
from before the exception to this ticket: 
https://issues.apache.org/jira/browse/CASSANDRA-1724 ?
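
Also, that AssertionError looks like a wrapping range (the start token is 
larger than the end token). If you want to poke at it outside Hadoop, the 
record reader is making roughly the Thrift call below. This is only a 
sketch: the framed transport, host, and keyspace/column family names are 
assumptions, and the tokens are the ones from your assertion.

import java.nio.ByteBuffer;
import java.util.List;

import org.apache.cassandra.thrift.*;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

public class RangeSliceRepro {
    public static void main(String[] args) throws Exception {
        // Framed transport is the 0.7 default; adjust if your rpc config differs.
        TFramedTransport transport =
            new TFramedTransport(new TSocket("node1", 9160)); // placeholder host
        Cassandra.Client client =
            new Cassandra.Client(new TBinaryProtocol(transport));
        transport.open();
        client.set_keyspace("MyKeyspace"); // placeholder keyspace

        // Ask for everything in each row, as the record reader does.
        SlicePredicate predicate = new SlicePredicate().setSlice_range(
            new SliceRange(ByteBuffer.wrap(new byte[0]),
                           ByteBuffer.wrap(new byte[0]), false, 1000));

        // The wrapping token range from the server-side AssertionError.
        KeyRange range = new KeyRange().setCount(100)
            .setStart_token("150596448267070854052355226693835429313")
            .setEnd_token("18886431880788352792108545029372560769");

        List<KeySlice> slices = client.get_range_slices(
            new ColumnParent("MyCF"), predicate, range, ConsistencyLevel.ONE);
        System.out.println("got " + slices.size() + " rows");
        transport.close();
    }
}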

Thanks,
Stu

-----Original Message-----
From: "Jeremy Hanna" <jeremy.hanna1...@gmail.com>
Sent: Wednesday, November 10, 2010 11:40am
To: user@cassandra.apache.org
Subject: Re: MapReduce/Hadoop in cassandra 0.7 beta3

Aditya,

Can you reproduce the problem locally with "pig -x local myscript.pig"?

Also, moving this message back to the cassandra user list.

On Nov 10, 2010, at 10:47 AM, Aditya Muralidharan wrote:

> Hi,
> 
> I'm still getting the error associated with 
> https://issues.apache.org/jira/browse/CASSANDRA-1700 .
> I have 7 SUSE nodes running the Cassandra 0.7 branch (latest as of the 
> morning of Nov 9). I've loaded 10 rows into one column family (replication 
> factor = 4) with 100 super columns. Using ColumnFamilyInputFormat with 
> MapReduce (LocalJobRunner) to retrieve all the rows gives me the following 
> exception (the job setup is sketched below, after the traces):
> 
> 10/11/10 10:33:15 WARN mapred.LocalJobRunner: job_local_0001
> java.lang.RuntimeException: org.apache.thrift.TApplicationException: Internal error processing get_range_slices
>        at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$RowIterator.maybeInit(ColumnFamilyRecordReader.java:277)
>        at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$RowIterator.computeNext(ColumnFamilyRecordReader.java:292)
>        at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$RowIterator.computeNext(ColumnFamilyRecordReader.java:189)
>        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:136)
>        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:131)
>        at org.apache.cassandra.hadoop.ColumnFamilyRecordReader.nextKeyValue(ColumnFamilyRecordReader.java:148)
>        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:423)
>        at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
>        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
>        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
>        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
> Caused by: org.apache.thrift.TApplicationException: Internal error processing get_range_slices
>        at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
>        at org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:724)
>        at org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:704)
>        at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$RowIterator.maybeInit(ColumnFamilyRecordReader.java:255)
>        ... 11 more
> 
> The server has the following exception:
> ERROR [pool-1-thread-11] 2010-11-10 10:35:58,839 Cassandra.java (line 2876) Internal error processing get_range_slices
> java.lang.AssertionError: (150596448267070854052355226693835429313,18886431880788352792108545029372560769]
>        at org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1200)
>        at org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:429)
>        at org.apache.cassandra.thrift.CassandraServer.get_range_slices(CassandraServer.java:513)
>        at org.apache.cassandra.thrift.Cassandra$Processor$get_range_slices.process(Cassandra.java:2868)
>        at org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2555)
>        at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:167)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:619)
> 
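> For reference, the job is wired up more or less like this. It's a minimal 
> sketch: host, keyspace, and column family names are placeholders, and the 
> ConfigHelper calls follow the 0.7 word_count example, so the exact method 
> names may differ slightly on your branch.
> 
> import java.nio.ByteBuffer;
> 
> import org.apache.cassandra.hadoop.ColumnFamilyInputFormat;
> import org.apache.cassandra.hadoop.ConfigHelper;
> import org.apache.cassandra.thrift.SlicePredicate;
> import org.apache.cassandra.thrift.SliceRange;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.mapreduce.Job;
> 
> public class RowDump {
>     public static void main(String[] args) throws Exception {
>         Job job = new Job(new Configuration(), "row-dump");
>         job.setJarByClass(RowDump.class);
>         job.setInputFormatClass(ColumnFamilyInputFormat.class);
>         // mapper/reducer setup omitted for brevity
> 
>         Configuration conf = job.getConfiguration();
>         ConfigHelper.setRpcPort(conf, "9160");
>         ConfigHelper.setInitialAddress(conf, "node1"); // placeholder host
>         ConfigHelper.setPartitioner(conf,
>             "org.apache.cassandra.dht.RandomPartitioner");
>         ConfigHelper.setInputColumnFamily(conf, "MyKeyspace", "MyCF");
> 
>         // Grab every super column in each row.
>         SlicePredicate predicate = new SlicePredicate().setSlice_range(
>             new SliceRange(ByteBuffer.wrap(new byte[0]),
>                            ByteBuffer.wrap(new byte[0]), false,
>                            Integer.MAX_VALUE));
>         ConfigHelper.setInputSlicePredicate(conf, predicate);
> 
>         System.exit(job.waitForCompletion(true) ? 0 : 1);
>     }
> }
> 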
> Any help would be appreciated.
> 
> Thanks.
> 
> AD
