Great then, I will look into my configuration. Thanks for your help!
Cheers,
Theofilos
On 6/15/2016 2:00 PM, Maximilian Michels wrote:
You should also see TaskManager output in the logs. I just verified this
using Flink 1.0.3 with Hadoop 2.7.1: I executed the Iterate example and it
aggregated correctly, including the TaskManager logs.
I'm wondering, is there anything in the Hadoop logs of the
ResourceManager/NodeManager that could point to the problem?
Hi,
By YARN aggregated log I mean that YARN log aggregation is enabled, and the
log I'm referring to is the one returned by `yarn logs -applicationId
<application id>`. When running a Spark job on the same setup, for example,
the YARN aggregated log contains all the information printed out by the application.
Cheers,
Theofilos
Please use `yarn logs -applicationId <application id>` to retrieve the logs.
If you have enabled log aggregation, this will give you all container logs
concatenated.
Cheers,
Max
On Wed, Jun 15, 2016 at 12:24 AM, Theofilos Kakantousis wrote:
Hi Max,
The runBlocking(..) problem was due to a Netty dependency issue in my
project; it works fine now :)
To pinpoint the logging issue, I just ran a single Flink job on YARN as
per the documentation, "./bin/flink run -m yarn-cluster -yn 2
./examples/streaming/Iteration.jar", and I have the same behavior.
Hi Theofilos,
Flink doesn't send the local client output to the YARN cluster. I
think this will only change once we move the entire execution of the
job to the cluster framework. All output of the actual Flink job
should be within the JobManager or TaskManager logs.
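Roughly, the split looks like this (just a sketch to illustrate the point;
the class and job names are made up, this is not the actual IterateExample code):

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class OutputLocationDemo {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            // Runs in the client JVM: visible only on the stdout of the
            // machine where `flink run` was invoked, not in the YARN logs.
            System.out.println("Submitting job...");

            env.fromElements("a", "b", "c")
                .map(new MapFunction<String, String>() {
                    @Override
                    public String map(String value) {
                        // Runs on a TaskManager: goes to the TaskManager's
                        // stdout/log files, which YARN aggregates when log
                        // aggregation is enabled.
                        System.out.println("Mapping " + value);
                        return value.toUpperCase();
                    }
                })
                .print(); // the print() sink also executes on the TaskManagers

            env.execute("output-location-demo");
        }
    }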
There is something wrong with your setup if those logs don't show up in the
aggregated output.
Hi Robert,
Thanks for the prompt reply. I'm using the IterateExample from the Flink
examples. In the YARN log I get entries for the YarnJobManager and
ExecutionGraph, but I was wondering if there is a way to push all the
logging that the client produces into the YARN log, including the
System.out output.
Hi Theofilos,
how exactly are you writing the application output?
Are you using a logging framework?
Are you writing the log statements from the open(), map(), or invoke() methods,
or from constructors? (I'm asking since different parts are executed
on the cluster and different parts locally.)
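For example, something along these lines (just a sketch to show the
distinction, assuming the slf4j/log4j setup that ships with Flink; the class
and logger names are made up, this is not your actual code):

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.configuration.Configuration;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class LoggingMapper extends RichMapFunction<String, String> {

        private static final Logger LOG = LoggerFactory.getLogger(LoggingMapper.class);

        public LoggingMapper() {
            // The constructor runs on the client while the program is
            // assembled, so this only reaches the local client output.
            LOG.info("constructor (client side)");
        }

        @Override
        public void open(Configuration parameters) {
            // open() and map() run on the TaskManagers, so these lines go
            // to the TaskManager logs and, with log aggregation enabled,
            // into the YARN-aggregated logs.
            LOG.info("open() (TaskManager side)");
        }

        @Override
        public String map(String value) {
            LOG.info("map() processing: {}", value);
            return value;
        }
    }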
On Fri, Jun 10, 2016, Theofilos Kakantousis wrote:
Hi all,
Flink 1.0.3
Hadoop 2.4.0
When running a job on a Flink cluster on YARN, the application output is
not included in the YARN log. Instead, it is only printed on the stdout
of the machine from which I run my program. For the JobManager, I'm using
the log4j.properties file from the flink/conf directory.