Hi,

My understanding is that the AM with the driver (in cluster deploy mode) and
the executors are plain Java processes whose settings are applied one by one
when a Spark application is submitted and the ContainerLaunchContext for the
YARN containers is built. See
https://github.com/apache/spark/blob/master/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala#L796-L801
for the code that maps the Spark properties to the container settings.

With that in mind, I don't think conf/spark-defaults.conf gets loaded on
its own inside those containers.
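
FWIW, the paragraph you quoted points at
spark.yarn.appMasterEnv.[EnvironmentVariableName] as the cluster-mode way to
pass environment variables. A minimal sketch of what that could look like in
conf/spark-defaults.conf (MY_ENV_VAR is just a placeholder name), with
spark.executorEnv.* as the executor-side counterpart:

  spark.yarn.appMasterEnv.MY_ENV_VAR  some-value
  spark.executorEnv.MY_ENV_VAR        some-value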

Why don't you set a property and see if it's available on the driver in
cluster deploy mode? That should give you a definitive answer (or at least
get you closer).
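
Something along these lines (just a sketch; MY_ENV_VAR and spark.myapp.test
are placeholder names) submitted with --deploy-mode cluster would show what
actually reaches the driver JVM:

  import org.apache.spark.sql.SparkSession

  object CheckDriverEnv {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder().appName("check-driver-env").getOrCreate()
      // environment variable as seen by the driver JVM
      println("MY_ENV_VAR = " + sys.env.get("MY_ENV_VAR"))
      // Spark property as seen by the driver
      println("spark.myapp.test = " + spark.conf.getOption("spark.myapp.test"))
      spark.stop()
    }
  }

In cluster mode the println output lands in the driver container's log, so
you'd fish it out with yarn logs -applicationId <appId>.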

Pozdrawiam,
Jacek Laskowski
----
https://about.me/JacekLaskowski
Mastering Spark SQL https://bit.ly/mastering-spark-sql
Spark Structured Streaming https://bit.ly/spark-structured-streaming
Mastering Kafka Streams https://bit.ly/mastering-kafka-streams
Follow me at https://twitter.com/jaceklaskowski

On Wed, Jan 3, 2018 at 7:57 AM, John Zhuge <jzh...@apache.org> wrote:

> Hi,
>
> I am running Spark 2.0.0 and 2.1.1 on YARN in a Hadoop 2.7.3 cluster. Is
> spark-env.sh sourced when starting the Spark AM container or the executor
> container?
>
> Saw this paragraph on
> https://github.com/apache/spark/blob/master/docs/configuration.md:
>
> Note: When running Spark on YARN in cluster mode, environment variables
>> need to be set using the spark.yarn.appMasterEnv.[EnvironmentVariableName]
>> property in your conf/spark-defaults.conf file. Environment variables that
>> are set in spark-env.sh will not be reflected in the YARN Application
>> Master process in cluster mode. See the YARN-related Spark Properties
>> <https://github.com/apache/spark/blob/master/docs/running-on-yarn.html#spark-properties>
>> for more information.
>
>
> Does it mean spark-env.sh will not be sourced when starting the AM in
> cluster mode?
> Does this paragraph apply to executors as well?
>
> Thanks,
> --
> John Zhuge
>
