Not sure if it’s related, but in our Hadoop configuration we’re also setting 

sc.hadoopConfiguration().set("fs.s3.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem");
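
In case it helps to see it in context, here is a rough sketch of how that wiring looks on our side (the s3n credential property names are the standard Hadoop ones already mentioned in this thread; the key values below are placeholders, not real credentials):

        SparkConf conf = new SparkConf().setAppName("s3-writer");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // map the s3:// scheme onto the native S3 filesystem
        sc.hadoopConfiguration().set("fs.s3.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem");
        // credentials for the s3n:// scheme (placeholder values)
        sc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY_ID");
        sc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_ACCESS_KEY");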

Cheers,
-patrick

From:  Andy Davidson <a...@santacruzintegration.com>
Date:  Friday, 12 February 2016 at 17:34
To:  Igor Berman <igor.ber...@gmail.com>
Cc:  "user @spark" <user@spark.apache.org>
Subject:  Re: newbie unable to write to S3 403 forbidden error

Hi Igor

So I assume you are able to use S3 from Spark?

Do you use rdd.saveAsTextFile() ?

How did you create your cluster? That is, did you use the spark-1.6.0/spark-ec2 
script, EMR, or something else?


I tried several versions of the URL, but no luck :-(

The bucket name is 'com.pws.twitter'. It has a folder 'json'.

We have a developer support contract with Amazon; however, our case has been 
unassigned for several days now.

Thanks

Andy

P.S. In general, debugging permission problems is always difficult from the 
client side; secure servers do not want to make it easy for hackers.

From:  Igor Berman <igor.ber...@gmail.com>
Date:  Friday, February 12, 2016 at 4:53 AM
To:  Andrew Davidson <a...@santacruzintegration.com>
Cc:  "user @spark" <user@spark.apache.org>
Subject:  Re: newbie unable to write to S3 403 forbidden error

String dirPath = "s3n://s3-us-west-1.amazonaws.com/com.pws.twitter/json"

Not sure, but can you try to remove s3-us-west-1.amazonaws.com from the path?
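
i.e. something roughly like this, keeping your bucket and folder names (just a sketch of what I mean):

    // bucket name goes in the host position, without the region endpoint
    String dirPath = "s3n://com.pws.twitter/json" + "-" + time.milliseconds();
    rdd.saveAsTextFile(dirPath);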

On 11 February 2016 at 23:15, Andy Davidson <a...@santacruzintegration.com> 
wrote:
I am using Spark 1.6.0 in a cluster created using the spark-ec2 script. I am 
using the standalone cluster manager.

My Java streaming app is not able to write to S3. It appears to be some form of 
permission problem.

Any idea what the problem might be?

I tried using the IAM policy simulator to test the policy. Everything seems okay. Any 
idea how I can debug this problem?

Thanks in advance

Andy

        JavaSparkContext jsc = new JavaSparkContext(conf);

        // I did not include the full keys in my email
        // the keys do not contain '\'
        // these are the keys used to create the cluster. They belong to the IAM user andy
        jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", "AKIAJREX");
        jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", "uBh9v1hdUctI23uvq9qR");




    private static void saveTweets(JavaDStream<String> jsonTweets, String outputURI) {

        jsonTweets.foreachRDD(new VoidFunction2<JavaRDD<String>, Time>() {

            private static final long serialVersionUID = 1L;

            @Override
            public void call(JavaRDD<String> rdd, Time time) throws Exception {
                if (!rdd.isEmpty()) {
                    // bucket name is 'com.pws.twitter', it has a folder 'json'
                    String dirPath = "s3n://s3-us-west-1.amazonaws.com/com.pws.twitter/json" + "-" + time.milliseconds();
                    rdd.saveAsTextFile(dirPath);
                }
            }
        });
    }
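For completeness, this is roughly how saveTweets() is wired into the streaming job (simplified; the source and names here are illustrative, not the real app, which reads from the Twitter stream):

        JavaStreamingContext ssc = new JavaStreamingContext(jsc, Durations.seconds(1));
        // stand-in source for the tweet JSON strings
        JavaDStream<String> jsonTweets = ssc.socketTextStream("localhost", 9999);
        saveTweets(jsonTweets, "s3n://com.pws.twitter/json");
        ssc.start();
        ssc.awaitTermination();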


Bucket name: com.pws.twitter
Bucket policy (I replaced the account id):

{
  "Version": "2012-10-17",
  "Id": "Policy1455148808376",
  "Statement": [
    {
      "Sid": "Stmt1455148797805",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/andy"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::com.pws.twitter/*"
    }
  ]
}
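
One thing I have not been able to rule out (just a guess on my part, not something I have confirmed): s3:ListBucket is a bucket-level action, so a policy that only grants the object-level ARN (com.pws.twitter/*) may still return 403 on the list/exists checks the Hadoop S3 client does before writing. A variant that covers both the bucket and its objects would look roughly like this:

      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::com.pws.twitter",
        "arn:aws:s3:::com.pws.twitter/*"
      ]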



