Set them as environment variables at boot and configure both stacks to read
them from there.
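
Something like this rough sketch covers the Spark side, assuming the keys are
exported at boot as AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY (names are just
an example) and that sc is the shell's SparkContext; for distcp the same
values would still need to land in core-site.xml or be passed with -D on the
command line:

    import os

    # Read the keys that were exported as environment variables at boot.
    access_key = os.environ["AWS_ACCESS_KEY_ID"]
    secret_key = os.environ["AWS_SECRET_ACCESS_KEY"]

    # Set them on the Hadoop configuration used by Spark's S3 reads,
    # so you don't have to repeat them in every s3:// URL.
    hadoop_conf = sc._jsc.hadoopConfiguration()
    hadoop_conf.set("fs.s3.awsAccessKeyId", access_key)
    hadoop_conf.set("fs.s3.awsSecretAccessKey", secret_key)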

Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>



On Fri, Mar 7, 2014 at 9:32 AM, Nicholas Chammas
<nicholas.cham...@gmail.com> wrote:

> On spinning up a Spark cluster in EC2, I'd like to set a few configs that
> will allow me to access files in S3 without having to specify my AWS access
> and secret keys over and over, as described here:
> http://stackoverflow.com/a/3033403/877069
>
> The properties are fs.s3.awsAccessKeyId and fs.s3.awsSecretAccessKey.
>
> Is there a way to set these properties programmatically so that Spark (via
> the shell) and Hadoop (via distcp) are both aware of and use the values?
>
> I don't think SparkConf does what I need because I want Hadoop to also be
> aware of my AWS keys. When I set those properties using conf.set() in
> pyspark, distcp didn't appear to be aware of them.
>
> Nick
>
