Re: Missing Executor Logs From Yarn After Spark Failure

2016-07-19 Thread ayan guha
If YARN log aggregation is enabled, the logs will be moved to HDFS once the application finishes. You can use yarn logs -applicationId to view those logs. On Wed, Jul 20, 2016 at 8:58 AM, Ted Yu wrote: > What's the value for yarn.log-aggregation.retain-seconds > and yarn.log-aggregation-enable? > > Which Hadoop release are
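A minimal sketch of retrieving those aggregated logs from a shell on a cluster node, assuming log aggregation is on; the application ID below is only a placeholder:

    # List recent applications to find the ID of the failed run
    yarn application -list -appStates FAILED,KILLED,FINISHED

    # Dump the aggregated container logs for that application
    # (application_1468888888888_0001 is a placeholder ID)
    yarn logs -applicationId application_1468888888888_0001 > app_logs.txt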

Re: Missing Executor Logs From Yarn After Spark Failure

2016-07-19 Thread Ted Yu
What's the value for yarn.log-aggregation.retain-seconds and yarn.log-aggregation-enable? Which Hadoop release are you using? Thanks. On Tue, Jul 19, 2016 at 3:23 PM, Rachana Srivastava <rachana.srivast...@markmonitor.com> wrote: > I am trying to find the root cause of a recent Spark application
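In case it helps, one quick way to check those two properties is to grep the active yarn-site.xml on the ResourceManager/NodeManager hosts; this is only a sketch and assumes the usual $HADOOP_CONF_DIR layout:

    # Show the aggregation-related properties from the active configuration;
    # if they are absent, the defaults apply (aggregation is typically off by default)
    grep -A 1 -E 'yarn\.log-aggregation-enable|yarn\.log-aggregation\.retain-seconds' \
        "$HADOOP_CONF_DIR/yarn-site.xml"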

Missing Executor Logs From Yarn After Spark Failure

2016-07-19 Thread Rachana Srivastava
I am trying to find the root cause of a recent Spark application failure in production. When the Spark application is running, I can check the NodeManager's yarn.nodemanager.log-dir property to get the Spark executor container logs. The container has logs for both of the running Spark applications. Here is
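For reference, while the application is still running those executor container logs sit under whatever yarn.nodemanager.log-dir points to on each node; a rough sketch of inspecting them, where the /var/log path below is only a placeholder for that setting:

    # Placeholder for the directory configured as yarn.nodemanager.log-dir
    LOG_DIR=/var/log/hadoop-yarn/containers

    # One directory per application, one subdirectory per container,
    # each holding the executor's stdout/stderr
    ls "$LOG_DIR"/application_*/container_*/
    tail -n 100 "$LOG_DIR"/application_*/container_*/stderr

As noted in the replies in this thread, once the application finishes and log aggregation is enabled these local directories are cleaned up and the logs are moved to HDFS, which is why they appear to vanish after a failure; yarn logs -applicationId is then the way to retrieve them.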

Spark failure

2014-02-24 Thread Nathan Kronenfeld
I'm using Spark 0.8.1 and trying to run a job from a new remote client (it works fine when run directly from the master). When I try to run it, the job just fails without doing anything. Unfortunately, I also can't find anywhere where it tells me why it fails. I'll add the bits of the logs below
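For a standalone 0.8.x deployment, a hedged sketch of where the failure reason usually ends up; the paths assume the default layout under $SPARK_HOME on the master and worker machines:

    # On each worker: per-application work directories holding the
    # executor's stdout/stderr (the app-* name is a placeholder pattern)
    ls $SPARK_HOME/work/
    tail -n 200 $SPARK_HOME/work/app-*/*/stderr

    # Daemon logs for the standalone master and workers themselves
    ls $SPARK_HOME/logs/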