On 4 May 2016, at 13:52, Zhang, Jingyu <jingyu.zh...@news.com.au> wrote:
Thanks everyone,
One reason to use "s3a://" is that I use "s3a://" in my development
environment (Eclipse) on a desktop. I will debug and test on my desktop and
then put the jar file on the EMR cluster. I do not think "s3://" will work on a desktop.
With help from AWS support, this bug is caused by the version of
On 3 May 2016 at 17:22, Gourav Sengupta wrote:
> Hi,
>
> The best thing to do is to start the EMR clusters with proper permissions in
> the roles; that way you do not need to worry about the keys at all.
>
> Another thing: why are we using s3a:// instead of s3:// ?
>
Probably because of what's said a
Hi,
The best thing to do is to start the EMR clusters with proper permissions in
the roles; that way you do not need to worry about the keys at all.
Another thing: why are we using s3a:// instead of s3:// ?
Besides that, you can increase S3 speeds using the instructions mentioned
here:
https://aws.ama
Don't put your secret in the URI; it'll only creep out in the logs.
Use the specific properties covered in
http://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html,
which you can set in your Spark context by prefixing them with spark.hadoop.
You can also set the env vars, AWS
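The spark.hadoop.-prefixed approach described above can be sketched as follows. This is a minimal sketch, not the poster's actual code: the app name, bucket, and object path are placeholders, and it assumes the keys are already exported in the environment as AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: pass the s3a credentials as Hadoop options prefixed with
// "spark.hadoop.", so they never appear in the object URI (or in any
// log line that prints the URI). Reading them from the environment
// keeps them out of the source tree as well.
val conf = new SparkConf()
  .setAppName("s3a-read-demo") // app name is an example
  .set("spark.hadoop.fs.s3a.access.key", sys.env("AWS_ACCESS_KEY_ID"))
  .set("spark.hadoop.fs.s3a.secret.key", sys.env("AWS_SECRET_ACCESS_KEY"))

val sc = new SparkContext(conf)
// Bucket and key below are placeholders.
val lines = sc.textFile("s3a://my-bucket/path/to/data.txt")
println(lines.count())
```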
Hi All,
I am using Eclipse with Maven for developing Spark applications. I got an
error when reading from S3 in Scala, but it works fine in Java when I run
them in the same project in Eclipse. The Scala/Java code and the error are
as follows:
Scala
val uri = URI.create("s3a://" + key + ":" + seckey +
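Per the advice in the replies above, a safer sketch of the same read (the bucket and object path here are placeholders, and `key`/`seckey` are the same variables as in the snippet above) sets the credentials on the Hadoop configuration instead of embedding them in the URI:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: keep the URI free of credentials and set them on the
// SparkContext's Hadoop configuration instead, so they cannot
// leak through logged URIs.
val sc = new SparkContext(new SparkConf().setAppName("s3a-demo"))
sc.hadoopConfiguration.set("fs.s3a.access.key", key)    // same `key` as above
sc.hadoopConfiguration.set("fs.s3a.secret.key", seckey) // same `seckey` as above

// Bucket and path are placeholders.
val rdd = sc.textFile("s3a://some-bucket/some/path.txt")
```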