Hi guys, 

I recently ran Spark on YARN and found that Spark didn't set any log4j properties
file, either in configuration or in code. The log4j output was being written to the
stderr file under ${yarn.nodemanager.log-dirs}/application_${appid}.
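In case it helps, this is the kind of override I would expect to work, based on
the documented --files approach for shipping a custom log4j config to the YARN
containers (the class and jar names below are just placeholders):

    # Ship a custom log4j.properties to the YARN containers and point
    # both the driver and executor JVMs at it.
    # (Sketch only; com.example.MyApp and my-app.jar are placeholders.)
    spark-submit \
      --master yarn-cluster \
      --files log4j.properties \
      --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=log4j.properties" \
      --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j.properties" \
      --class com.example.MyApp \
      my-app.jar

But even without anything like this, something is still installing an appender
that sends everything to stderr, and I'd like to understand which component
does that.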

I'd like to know which side (Spark or Hadoop) controls the appender. I found a
related discussion here:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-logging-strategy-on-YARN-td8751.html,
but I think the Spark code has changed a lot since then.

Could anyone offer some guidance? Thanks.
