After some discussions with the Hadoop folks, I now understand how the mechanism works. If we don't add -Dlog4j.configuration to the container's Java options (for the AM or the executors), the container will use whatever log4j.properties file (if any) it finds on its classpath (extraClassPath plus yarn.application.classpath).
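As a minimal sketch of that default behavior (the directory /etc/spark/conf here is just an illustration, not from the thread), you can rely on the classpath lookup by making sure a directory containing a log4j.properties file is on the container classpath:

    # spark-defaults.conf -- no -Dlog4j.configuration set;
    # log4j falls back to the first log4j.properties on the classpath
    spark.executor.extraClassPath  /etc/spark/conf

This assumes /etc/spark/conf exists on every NodeManager host and contains the log4j.properties you want.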
If we want to customize our log4j configuration, we should add "spark.executor.extraJavaOptions=-Dlog4j.configuration=/path/to/log4j.properties" or "spark.yarn.am.extraJavaOptions=-Dlog4j.configuration=/path/to/log4j.properties" to the spark-defaults.conf file.
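As a concrete sketch (the file name custom-log4j.properties and the use of --files are assumptions, not part of the thread), you could ship the file with the application and point both JVMs at it:

    # spark-defaults.conf
    # the file: prefix tells log4j to read a local file in the
    # container's working directory instead of a classpath resource
    spark.executor.extraJavaOptions  -Dlog4j.configuration=file:custom-log4j.properties
    spark.yarn.am.extraJavaOptions   -Dlog4j.configuration=file:custom-log4j.properties

and then submit with:

    spark-submit --files /local/path/custom-log4j.properties ...

--files uploads the file into each container's working directory, so the bare relative file: URL resolves on every node; an absolute path like /path/to/log4j.properties only works if that file already exists on every host in the cluster.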