> java.lang.RuntimeException: UnavailableException()

Looks like the Pig script could talk to one node, but the coordinator could not
process the request at the requested consistency level. Check that all the nodes
are up, what replication factor (RF) the keyspace is using, and which consistency
level (CL) the job reads at.
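
A minimal sketch of those checks, run from one of the ring nodes (the node IP and the keyspace name pygmalion are taken from the mail below, the paths from its .bash_profile):

# Are all nodes up? Every node should report Up/Normal (UN).
$CASSANDRA_HOME/bin/nodetool -h 10.210.164.233 status

# What replication factor does the keyspace use?
echo "DESCRIBE KEYSPACE pygmalion;" | $CASSANDRA_HOME/bin/cqlsh 10.210.164.233

# UnavailableException means the coordinator could not find enough live replicas
# to satisfy the read consistency level, so compare the RF reported above with
# the CL the Hadoop job uses (typically ONE unless the job overrides it).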

Cheers

-----------------
Aaron Morton
Freelance Cassandra Consultant
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 30/04/2013, at 4:55 AM, Miguel Angel Martin junquera 
<mianmarjun.mailingl...@gmail.com> wrote:

> 
> 
> Hi all:
> 
> I can run Pig with Cassandra and Hadoop on EC2.
> 
> I am now trying to run Pig against a Cassandra ring plus Hadoop.
> The Cassandra ring also hosts the TaskTrackers and DataNodes.
> 
> I am running Pig from another machine, where I have installed the
> NameNode/JobTracker.
> I have a simple script that loads data from the pygmalion keyspace and the
> account column family and dumps the result to test it.
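
For reference, a minimal sketch of the kind of script described above, assuming the pig_cassandra wrapper from the Cassandra source distribution's examples/pig directory (the script file name is arbitrary; the keyspace and column family names are from the mail):

# Write a tiny Pig script that loads the column family and dumps it.
cat > account_test.pig <<'EOF'
rows = LOAD 'cassandra://pygmalion/account'
       USING org.apache.cassandra.hadoop.pig.CassandraStorage();
DUMP rows;
EOF

# Run it through the wrapper so the Cassandra jars and PIG_* settings are picked up.
$CASSANDRA_HOME/examples/pig/bin/pig_cassandra -x mapreduce account_test.pig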
> I installed another simple local Cassandra on the NameNode/JobTracker machine
> and I can run Pig jobs OK there, but when I point the script at the Cassandra
> ring by changing the environment variable PIG_INITIAL_ADDRESS to the IP of one
> of the ring nodes, I get this error:
> 
> 
> ---
> 
> 
> java.lang.RuntimeException: UnavailableException()
>       at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:384)
>       at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:390)
>       at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:313)
>       at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>       at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>       at org.apache.cassandra.hadoop.ColumnFamilyRecordReader.nextKeyValue(ColumnFamilyRecordReader.java:184)
>       at org.apache.cassandra.hadoop.pig.CassandraStorage.getNext(CassandraStorage.java:226)
>       at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:211)
>       at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
>       at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
>       at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
>       at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>       at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
>       at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:415)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
>       at org.apache.hadoop.mapred.Child.main(Child.java:249)
> Caused by: UnavailableException()
>       at org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12924)
>       at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
>       at org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:734)
>       at org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:718)
>       at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:346)
>       ... 17 more
> 
> 
> 
> Can anybody help me or have any idea?
> Thanks in advance.
> P.S.:
> 1.- The ports are open in EC2.
> 2.- The keyspace and CF are created in the EC2 Cassandra cluster too, and
> likewise in the NameNode's Cassandra installation.
> 3.- I have this .bash_profile configuration:
> # .bash_profile
> 
> # Get the aliases and functions
> if [ -f ~/.bashrc ]; then
>         . ~/.bashrc
> fi
> 
> # User specific environment and startup programs
> 
> PATH=$PATH:$HOME/.local/bin:$HOME/bin
> export PATH=$PATH:/usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin
> export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
> export CASSANDRA_HOME=/home/ec2-user/apache-cassandra-1.2.4
> export PIG_HOME=/home/ec2-user/pig-0.11.1-src
> export PIG_INITIAL_ADDRESS=10.210.164.233
> #export PIG_INITIAL_ADDRESS=127.0.0.1
> export PIG_RPC_PORT=9160
> export PIG_CONF_DIR=/home/ec2-user/hadoop-1.1.1/conf
> export PIG_PARTITIONER=org.apache.cassandra.dht.Murmur3Partitioner
> #export PIG_PARTITIONER=org.apache.cassandra.dht.RandomPartitioner
> 
> 
> 4.- I export all the Cassandra jars in hadoop-env.sh on all the Hadoop nodes.
> 5.- I get the same error running Pig in local mode.
> 
> 6.- If I change to RandomPartitioner and reload the changes, I get the error
> below (a quick way to check which partitioner the ring actually uses is
> sketched after the trace):
> 
> java.lang.RuntimeException: InvalidRequestException(why:Start token sorts after end token)
>       at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:384)
>       at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:390)
>       at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:313)
>       at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>       at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>       at org.apache.cassandra.hadoop.ColumnFamilyRecordReader.nextKeyValue(ColumnFamilyRecordReader.java:184)
>       at org.apache.cassandra.hadoop.pig.CassandraStorage.getNext(CassandraStorage.java:228)
>       at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:211)
>       at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
>       at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
>       at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
>       at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>       at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
>       at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:415)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
>       at org.apache.hadoop.mapred.Child.main(Child.java:249)
> Caused by: InvalidRequestException(why:Start token sorts after end token)
>       at org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12916)
>       at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
>       at org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:734)
>       at org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:718)
>       at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:346)
>       ... 17 more
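
The "Start token sorts after end token" error above is the usual symptom of PIG_PARTITIONER not matching the partitioner the ring is actually running, so a quick check (assuming the default config location on a ring node) is:

# The value printed here must match PIG_PARTITIONER exactly.
grep '^partitioner:' $CASSANDRA_HOME/conf/cassandra.yaml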
> 
> 
> 
> Thanks in advance.
> 
> Note: I am running the script with pig_cassandra and Cassandra 1.2.0.
> 
