Also, you can set the Hadoop configuration through the SparkContext's jsc.hadoopConf
property. Do a dir(sc) to see the exact property name.
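
For example, a minimal sketch of that route in PySpark, assuming an existing
SparkContext named sc (the underlying JavaSparkContext is exposed as sc._jsc); the
bucket, prefix, and keys below are placeholders, and older S3 connectors use the
fs.s3n.* property names instead of fs.s3a.*:

    # Grab the Hadoop Configuration object that Spark's S3 connector reads from.
    hadoop_conf = sc._jsc.hadoopConfiguration()

    # Placeholder credentials; with the older s3n connector use
    # fs.s3n.awsAccessKeyId / fs.s3n.awsSecretAccessKey instead.
    hadoop_conf.set("fs.s3a.access.key", "YOUR_ACCESS_KEY")
    hadoop_conf.set("fs.s3a.secret.key", "YOUR_SECRET_KEY")

    # Read directly from S3 (placeholder bucket and prefix).
    rdd = sc.textFile("s3a://your-bucket/some/prefix/*")
    print(rdd.count())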
On 15 Sep 2015 22:43, "Gourav Sengupta" <gourav.sengu...@gmail.com> wrote:

> Hi,
>
> If you start your EC2 nodes with the correct IAM roles (the defaults cover most
> cases, depending on your needs), you should be able to work with S3 and all
> other AWS resources without supplying any keys.
>
> I have been doing that for some time now and I have not faced any issues
> yet.
>
>
> Regards,
> Gourav
>
>
>
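
A minimal sketch of the instance-role approach Gourav describes above, assuming the
EC2 nodes were launched with an instance profile that grants S3 read access and that
an S3 connector (s3a here) is on the classpath; the bucket and prefix are placeholders:

    # No keys in code or config: when nothing else is configured, the s3a
    # connector can fall back to the EC2 instance profile credentials.
    rdd = sc.textFile("s3a://your-bucket/some/prefix/*")
    print(rdd.count())
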
> On Tue, Sep 15, 2015 at 12:54 PM, Cazen <cazen....@gmail.com> wrote:
>
>> Good day junHyeok
>>
>> Did you set HADOOP_CONF_DIR? It seems that Spark cannot find the AWS key
>> properties.
>>
>> If it doesn't work after setting that, how about exporting AWS_ACCESS_KEY_ID and
>> AWS_SECRET_ACCESS_KEY before running the PySpark shell?
>>
>> BR
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/Directly-reading-data-from-S3-to-EC2-with-PySpark-tp24638p24698.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>
>
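
A minimal sketch of the environment-variable route Cazen suggests, assuming
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY were exported in the shell before
launching the PySpark shell (so sc is the shell-provided SparkContext); the bucket
and prefix are placeholders, and because whether the connector picks the variables
up on its own depends on the Hadoop version, the keys are also handed to the Hadoop
configuration explicitly here:

    import os

    # Fail fast if the variables were not exported before launch.
    access_key = os.environ["AWS_ACCESS_KEY_ID"]
    secret_key = os.environ["AWS_SECRET_ACCESS_KEY"]

    # Pass them on explicitly so the S3 connector is guaranteed to see them.
    hadoop_conf = sc._jsc.hadoopConfiguration()
    hadoop_conf.set("fs.s3a.access.key", access_key)
    hadoop_conf.set("fs.s3a.secret.key", secret_key)

    rdd = sc.textFile("s3a://your-bucket/some/prefix/*")
    print(rdd.take(5))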
