Yeah, I thought of that, but the file made it seem that it's for
environment-specific rather than application-specific configuration.
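
For the application-specific side, here is a rough sketch of what I mean
(assuming Spark 2.x and the s3a connector on the classpath; the env var names
are just an example), setting the same Hadoop properties on the SparkSession
instead of in a shared file:

    // Properties prefixed with "spark.hadoop." are copied into the Hadoop
    // Configuration that Spark builds, so they end up where core-site.xml
    // values would normally go.
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("my-app")
      .config("spark.hadoop.fs.s3a.access.key", sys.env("AWS_ACCESS_KEY_ID"))
      .config("spark.hadoop.fs.s3a.secret.key", sys.env("AWS_SECRET_ACCESS_KEY"))
      .getOrCreate()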

I'm more interested in best practices: would you recommend using the default
conf file for this and uploading it to wherever the application will be
running (remote clusters etc.)?
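
Concretely, I mean something like this in Spark's own conf/spark-defaults.conf
in each environment (a sketch; the s3a property names are my assumption about
how the AWS keys would be spelled there):

    # conf/spark-defaults.conf -- applied to every job submitted
    # from this Spark installation
    spark.hadoop.fs.s3a.access.key   YOUR_ACCESS_KEY
    spark.hadoop.fs.s3a.secret.key   YOUR_SECRET_KEY

versus passing the same keys per submission, e.g.:

    spark-submit \
      --conf spark.hadoop.fs.s3a.access.key=YOUR_ACCESS_KEY \
      --conf spark.hadoop.fs.s3a.secret.key=YOUR_SECRET_KEY \
      my-app.jar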


Regards
Sam

On Fri, Feb 10, 2017 at 9:36 PM, Reynold Xin <r...@databricks.com> wrote:

> You can put them in Spark's own conf/spark-defaults.conf file
>
> On Fri, Feb 10, 2017 at 10:35 PM, Sam Elamin <hussam.ela...@gmail.com>
> wrote:
>
>> Hi All,
>>
>>
>> Really newbie question here, folks: I have properties like my AWS access
>> and secret keys in Hadoop's core-site.xml, among other properties, but
>> that's the only reason I have Hadoop installed, which seems a bit of
>> overkill.
>>
>> Is there an equivalent of core-site.xml for Spark so I don't have to
>> reference HADOOP_CONF_DIR in my spark-env.sh?
>>
>> I know I can export env variables for the AWS credentials, but what about
>> other properties that my application might want to use?
>>
>> Regards
>> Sam
>>
>
