If YARN log aggregation is enabled, the logs are moved to HDFS once the
application completes, and you can use yarn logs -applicationId to view them.
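For example (the application ID is a placeholder; take the real one from the
ResourceManager UI or from yarn application -list):

    yarn logs -applicationId <application ID> > app.log

The output covers every container of the application and can be large, so
redirecting it to a file is usually worthwhile. The command typically only
returns output after the application has finished and the NodeManagers have
uploaded their logs to HDFS.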
On Wed, Jul 20, 2016 at 8:58 AM, Ted Yu wrote:
What's the value for yarn.log-aggregation.retain-seconds
and yarn.log-aggregation-enable?
Which Hadoop release are you using?
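For reference, both properties are set in yarn-site.xml; a typical
configuration looks roughly like the snippet below (the retention value is
only an example):

    <property>
      <name>yarn.log-aggregation-enable</name>
      <value>true</value>
    </property>
    <property>
      <name>yarn.log-aggregation.retain-seconds</name>
      <!-- e.g. keep aggregated logs for 7 days; -1 disables deletion -->
      <value>604800</value>
    </property>

If the retention period is short, the aggregated logs of the failed
application may already have been deleted by the time you go looking for them.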
Thanks
On Tue, Jul 19, 2016 at 3:23 PM, Rachana Srivastava <rachana.srivast...@markmonitor.com> wrote:
I am trying to find the root cause of a recent Spark application failure in
production. When the Spark application is running, I can check the NodeManager's
yarn.nodemanager.log-dir property to find the Spark executor container logs.
The container has logs for both the running Spark applications
Here i
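For reference, while the application is still running the executor logs live
on each worker node under the directory named by yarn.nodemanager.log-dir; the
exact path varies by distribution, but the layout usually looks something like
this (the path and IDs below are illustrative):

    # on a worker node
    ls /var/log/hadoop-yarn/userlogs/application_<app id>/container_<container id>/
    stderr  stdout

Once the application finishes and log aggregation runs, these per-container
directories are removed from the local disk and their contents are uploaded to
HDFS, which is why yarn logs is the way to retrieve them afterwards.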
I'm using Spark 0.8.1 and trying to run a job from a new remote client (it
works fine when run directly from the master).
When I try to run it, the job just fails without doing anything.
Unfortunately, I also can't find anywhere that tells me why it fails.
I'll add the bits of the logs below.
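For context, a driver running on a remote client in Spark 0.8.x typically
connects to the standalone cluster along these lines; this is only a minimal
sketch, and the master URL, Spark home, and jar path are placeholders rather
than values from the report above:

    import org.apache.spark.SparkContext

    object RemoteClientSketch {
      def main(args: Array[String]) {
        // The driver runs on the remote client and registers with the standalone master.
        val sc = new SparkContext(
          "spark://master-host:7077",   // standalone master URL (placeholder)
          "remote-client-test",         // application name
          "/opt/spark-0.8.1",           // Spark home on the workers (placeholder)
          Seq("target/my-job.jar"))     // jar(s) shipped to the executors (placeholder)

        // A trivial job just to confirm the executors actually pick up work.
        println(sc.parallelize(1 to 1000).map(_ * 2).count())

        sc.stop()
      }
    }

If the same code works when launched on the master itself, the difference
usually comes down to networking: the executors have to be able to connect
back to the driver running on the client machine.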