On 2 May 2016, at 19:24, Gourav Sengupta <gourav.sengu...@gmail.com> wrote:

Jorn,

what aspects are you speaking about?

My response was absolutely pertinent to Jinan, because he would not even face the problem if he used Scala. So it was along the lines of helping a person learn to fish.

Date: Thu, 28 Apr 2016 11:19:08 +0100
Subject: Re: Reading from Amazon S3
From: gourav.sengu...@gmail.com
To: ste...@hortonworks.com
CC: yuzhih...@gmail.com; j.r.alhaj...@hotmail.com

> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
> at scala.Option.getOrElse(Option.scala:120)
> at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
> at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
> at scala.Option.getOrElse(Option.scala:120)
> at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
> at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:65)
> at org.apache.spark.api.java.JavaPairRDD.reduceByKey(JavaPairRDD.scala:526)

Why would you use JAVA (create a problem and then try to solve it)? Have
you tried using Scala or Python or even R?
Regards,
Gourav
On Thu, Apr 28, 2016 at 10:07 AM, Steve Loughran <ste...@hortonworks.com> wrote:

> On 26 Apr 2016, at 18:49, Ted Yu <yuzhih...@gmail.com> wrote:
>
> Looking at the cause of the error, it seems hadoop-aws-xx.jar (corresponding to the version of Hadoop you use) was missing from the classpath.

yes, that s3n was moved from hadoop-common to the new hadoop-aws, and without realising …
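
One way to get those classes onto the classpath is to pull the hadoop-aws module in at submit time. A minimal sketch, assuming Hadoop 2.7.2 (adjust the version to match your cluster's Hadoop build; the job class and jar name are hypothetical placeholders):

  spark-submit \
    --packages org.apache.hadoop:hadoop-aws:2.7.2 \
    --class com.example.MyJob \
    myjob.jar

The --packages option resolves the artifact (and its AWS SDK dependency) from Maven Central and adds it to both the driver and executor classpaths.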

On 26 Apr 2016, at 18:49, Ted Yu <yuzhih...@gmail.com> wrote:

Looking at the cause of the error, it seems hadoop-aws-xx.jar (corresponding to the version of Hadoop you use) was missing from the classpath.

FYI

On Tue, Apr 26, 2016 at 9:06 AM, Jinan Alhajjaj <j.r.alhaj...@hotmail.com> wrote:
> Hi All,
> I am trying to read a file stored in Amazon S3.
> I wrote this code:
>
> import java.ut
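
(The code is truncated in the archive at this point. A minimal hypothetical sketch of the kind of Java job described here — bucket names, paths, and credential values below are illustrative assumptions, not the original code — consistent with the JavaPairRDD.reduceByKey frame in the stack trace above:)

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class S3LineCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("S3LineCount");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // s3n credentials; these can also come from core-site.xml.
        // The property names match the s3n connector.
        sc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY");
        sc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY");

        // textFile is lazy: the s3n filesystem is only resolved once
        // partitions are computed, so a missing hadoop-aws jar does not
        // fail here.
        JavaRDD<String> lines = sc.textFile("s3n://your-bucket/path/to/input.txt");

        // Count duplicate lines. Computing partitions for reduceByKey
        // touches the s3n filesystem, which is where the missing
        // hadoop-aws classes surface (matching the stack trace above).
        JavaPairRDD<String, Integer> counts = lines
                .mapToPair(line -> new Tuple2<>(line, 1))
                .reduceByKey((a, b) -> a + b);

        counts.saveAsTextFile("s3n://your-bucket/path/to/output");
        sc.stop();
    }
}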