Hi everyone,

we are running into some problems with multiple per-job YARN sessions, too.

When we start a per-job YARN session (Flink 1.0, Hadoop 2.4)
with a recovery.zookeeper.path.root other than /flink, the YARN session
starts but no job is submitted, and after a minute or so the session
crashes. I have attached the jobmanager log.
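
For reference, the relevant part of our configuration looks roughly like
this (the quorum hosts are placeholders; the root path is the one from the
attached log):

```yaml
recovery.mode: zookeeper
recovery.zookeeper.quorum: <zkhost1>:2181,<zkhost2>:2181,<zkhost3>:2181
recovery.zookeeper.path.root: /ledf_recovery
```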

In ZooKeeper the root directory is created along with the child nodes

leaderlatch
jobgraphs

/flink also exists, but has no child nodes.

Everything runs fine with the default recovery.zookeeper.path.root.

Does anyone have an idea what is going on?

Cheers,

Konstantin


On 23.11.2015 17:00, Gwenhael Pasquiers wrote:
> We are not yet using HA in our cluster instances.
> 
> But yes, we will have to change the zookeeper.path.root :-)
> 
>  
> 
> We package our jobs with their own config folder (we don’t rely on
> Flink’s config folder); we can put the Maven project name into this
> property, and then they will have different values :-)
> 
>  
> 
>  
> 
> *From:* Till Rohrmann [mailto:trohrm...@apache.org]
> *Sent:* lundi 23 novembre 2015 14:51
> *To:* user@flink.apache.org
> *Subject:* Re: YARN High Availability
> 
>  
> 
> The problem is the execution graph handle which is stored in ZooKeeper.
> You can manually remove it via the ZooKeeper shell by simply deleting
> everything below your `recovery.zookeeper.path.root` ZNode. But you
> should make sure that the cluster has been stopped beforehand.
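> 
> For example, with the ZooKeeper CLI (host/port and paths below are
> placeholders; adjust them to your quorum and your configured
> `recovery.zookeeper.path.root`):
> 
> ```shell
> # Connect to one of the quorum servers.
> bin/zkCli.sh -server zkhost:2181
> 
> # Inside the shell: recursively delete the children of the root ZNode.
> # On ZooKeeper 3.4.x the recursive delete command is `rmr`.
> rmr /flink/jobgraphs
> rmr /flink/leaderlatch
> ```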
> 
>  
> 
> Do you start the different clusters with different
> `recovery.zookeeper.path.root` values? If not, then you will run into
> trouble when running multiple clusters at the same time. The reason is
> that all clusters will then think that they belong together.
> 
>  
> 
> Cheers,
> 
> Till
> 
>  
> 
> On Mon, Nov 23, 2015 at 2:15 PM, Gwenhael Pasquiers
> <gwenhael.pasqui...@ericsson.com
> <mailto:gwenhael.pasqui...@ericsson.com>> wrote:
> 
> OK, I understand.
> 
> Maybe we are not really using Flink as you intended. The way we are
> using it, one cluster equals one job. That way we are sure to isolate
> the different jobs as much as possible, and in case of crashes / bugs /
> (etc.) we can completely kill one cluster without interfering with the
> other jobs.
> 
> That future behavior seems good :-)
> 
> Instead of the manual flink commands, is there a way to manually delete
> those old jobs before launching my job? They are probably somewhere in
> HDFS, aren't they?
> 
> B.R.
> 
> 
> 
> -----Original Message-----
> From: Ufuk Celebi [mailto:u...@apache.org <mailto:u...@apache.org>]
> Sent: lundi 23 novembre 2015 12:12
> To: user@flink.apache.org <mailto:user@flink.apache.org>
> Subject: Re: YARN High Availability
> 
> Hey Gwenhaël,
> 
> the restarting jobs are most likely old job submissions. They are not
> cleaned up when you shut down the cluster, but only when they finish
> (either regular finish or after cancelling).
> 
> The workaround is to use the command line frontend:
> 
> bin/flink cancel JOBID
> 
> for each RESTARTING job. Sorry about the inconvenience!
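> 
> To cancel all RESTARTING jobs in one go, a loop along these lines should
> work (the grep/awk fields are an assumption about the `bin/flink list`
> output format; double-check it against your version first):
> 
> ```shell
> # List jobs, keep the RESTARTING ones, extract the job ID column,
> # and cancel each of them.
> bin/flink list \
>   | grep RESTARTING \
>   | awk '{print $4}' \
>   | xargs -n 1 bin/flink cancel
> ```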
> 
> We are in an active discussion about addressing this. The future
> behaviour will be that starting or shutting down a cluster cleans up
> everything, with an option to skip this step.
> 
> The reasoning for the initial solution (not removing anything) was to
> make sure that no jobs are deleted by accident. But it looks like this
> is more confusing than helpful.
> 
> – Ufuk
> 
>> On 23 Nov 2015, at 11:45, Gwenhael Pasquiers
> <gwenhael.pasqui...@ericsson.com
> <mailto:gwenhael.pasqui...@ericsson.com>> wrote:
>>
>> Hi again !
>>
>> On the same topic I'm still trying to start my streaming job with HA.
>> The HA part seems to be more or less OK (I killed the JobManager and
> it came back); however, I have an issue with the TaskManagers.
>> I configured my job to have only one TaskManager and 1 slot that does
> [source=>map=>sink].
>> The issue I'm encountering is that other instances of my job appear
> and are in the RESTARTING status since there is only one task slot.
>>
>> Do you know about this, or have an idea of where to look in order to
> understand what's happening?
>>
>> B.R.
>>
>> Gwenhaël PASQUIERS
>>
>> -----Original Message-----
>> From: Maximilian Michels [mailto:m...@apache.org <mailto:m...@apache.org>]
>> Sent: jeudi 19 novembre 2015 13:36
>> To: user@flink.apache.org <mailto:user@flink.apache.org>
>> Subject: Re: YARN High Availability
>>
>> The docs have been updated.
>>
>> On Thu, Nov 19, 2015 at 12:36 PM, Ufuk Celebi <u...@apache.org
> <mailto:u...@apache.org>> wrote:
>>> I’ve added a note about this to the docs and asked Max to trigger a
> new build of them.
>>>
>>> Regarding Aljoscha’s idea: I like it. It is essentially a shortcut
> for configuring the root path.
>>>
>>> In any case, it is orthogonal to Till’s proposals. That one we need
> to address as well (see FLINK-2929). The motivation for the current
> behaviour was to be rather defensive when removing state in order to not
> lose data accidentally. But it can be confusing, indeed.
>>>
>>> – Ufuk
>>>
>>>> On 19 Nov 2015, at 12:08, Till Rohrmann <trohrm...@apache.org
> <mailto:trohrm...@apache.org>> wrote:
>>>>
>>>> You mean an additional start-up parameter for the `start-cluster.sh`
> script for the HA case? That could work.
>>>>
>>>> On Thu, Nov 19, 2015 at 11:54 AM, Aljoscha Krettek
> <aljos...@apache.org <mailto:aljos...@apache.org>> wrote:
>>>> Maybe we could add a user parameter to specify a cluster name that
> is used to make the paths unique.
>>>>
>>>>
>>>> On Thu, Nov 19, 2015, 11:24 Till Rohrmann <trohrm...@apache.org
> <mailto:trohrm...@apache.org>> wrote:
>>>> I agree that this would make the configuration easier. However, it
> also means that the user has to retrieve the randomized path from the
> logs if they want to restart jobs after the cluster has crashed or was
> intentionally restarted. Furthermore, the system won't be able to clean
> up old checkpoint and job handles in case the cluster stop was
> intentional.
>>>>
>>>> Thus, the question is how we define the behaviour for retrieving
> handles and for cleaning up old ones, so that ZooKeeper won't be
> cluttered with stale handles.
>>>>
>>>> There are basically two modes:
>>>>
>>>> 1. Keep state handles when shutting down the cluster. Provide a means
> to define a fixed path when starting the cluster and also a means to
> purge old state handles. Furthermore, add a shutdown mode where the
> handles under the current path are directly removed. This mode would
> guarantee that the state handles are always available unless explicitly
> told otherwise. However, the downside is that ZooKeeper will almost
> certainly be cluttered.
>>>>
>>>> 2. Remove the state handles when shutting down the cluster. Provide
> a shutdown mode where we keep the state handles. This will keep
> ZooKeeper clean but will also give you the possibility to keep a
> checkpoint around if necessary. However, the user is more likely to
> lose their state when shutting down the cluster.
>>>>
>>>> On Thu, Nov 19, 2015 at 10:55 AM, Robert Metzger
> <rmetz...@apache.org <mailto:rmetz...@apache.org>> wrote:
>>>> I agree with Aljoscha. Many companies install Flink (and its config)
> in a central directory and users share that installation.
>>>>
>>>> On Thu, Nov 19, 2015 at 10:45 AM, Aljoscha Krettek
> <aljos...@apache.org <mailto:aljos...@apache.org>> wrote:
>>>> I think we should find a way to randomize the paths where the HA
> stuff stores data. If users don’t realize that they store data in the
> same paths this could lead to problems.
>>>>
>>>>> On 19 Nov 2015, at 08:50, Till Rohrmann <trohrm...@apache.org
> <mailto:trohrm...@apache.org>> wrote:
>>>>>
>>>>> Hi Gwenhaël,
>>>>>
>>>>> good to hear that you could resolve the problem.
>>>>>
>>>>> When you run multiple HA Flink jobs in the same cluster, then you
> don’t have to adjust the configuration of Flink. It should work out of
> the box.
>>>>>
>>>>> However, if you run multiple HA Flink clusters, then you have to set
> a distinct ZooKeeper root path for each cluster via the option
> recovery.zookeeper.path.root in the Flink configuration. This is
> necessary because otherwise all JobManagers (the ones of the different
> clusters) will compete for a single leadership. Furthermore, all
> TaskManagers will only see the one and only leader and connect to it.
> The reason is that the TaskManagers look up their leader at a ZNode
> below the ZooKeeper root path.
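>>>>>
>>>>> For example (hypothetical paths), the flink-conf.yaml of each
>>>>> cluster would differ only in the root path:
>>>>>
>>>>> ```yaml
>>>>> # Cluster A
>>>>> recovery.zookeeper.path.root: /flink-cluster-a
>>>>>
>>>>> # Cluster B (a separate configuration)
>>>>> recovery.zookeeper.path.root: /flink-cluster-b
>>>>> ```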
>>>>>
>>>>> If you have other questions, don’t hesitate to ask.
>>>>>
>>>>> Cheers,
>>>>> Till
>>>>>
>>>>>
>>>>> On Wed, Nov 18, 2015 at 6:37 PM, Gwenhael Pasquiers
> <gwenhael.pasqui...@ericsson.com
> <mailto:gwenhael.pasqui...@ericsson.com>> wrote:
>>>>> Nevermind,
>>>>>
>>>>>
>>>>>
>>>>> Looking at the logs I saw that it was having issues trying to
> connect to ZK.
>>>>>
>>>>> To make it short, it had the wrong port.
>>>>>
>>>>>
>>>>>
>>>>> It is now starting.
>>>>>
>>>>>
>>>>>
>>>>> Tomorrow I’ll try to kill some JobManagers *evil*.
>>>>>
>>>>>
>>>>>
>>>>> Another question: if I have multiple HA Flink jobs, are there some
> points to check in order to be sure that they won’t collide in HDFS or ZooKeeper?
>>>>>
>>>>>
>>>>>
>>>>> B.R.
>>>>>
>>>>>
>>>>>
>>>>> Gwenhaël PASQUIERS
>>>>>
>>>>>
>>>>>
>>>>> From: Till Rohrmann [mailto:till.rohrm...@gmail.com
> <mailto:till.rohrm...@gmail.com>]
>>>>> Sent: mercredi 18 novembre 2015 18:01
>>>>> To: user@flink.apache.org <mailto:user@flink.apache.org>
>>>>> Subject: Re: YARN High Availability
>>>>>
>>>>>
>>>>>
>>>>> Hi Gwenhaël,
>>>>>
>>>>>
>>>>>
>>>>> do you have access to the yarn logs?
>>>>>
>>>>>
>>>>>
>>>>> Cheers,
>>>>>
>>>>> Till
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Nov 18, 2015 at 5:55 PM, Gwenhael Pasquiers
> <gwenhael.pasqui...@ericsson.com
> <mailto:gwenhael.pasqui...@ericsson.com>> wrote:
>>>>>
>>>>> Hello,
>>>>>
>>>>>
>>>>>
>>>>> We’re trying to set up high availability using an existing
> zookeeper quorum already running in our Cloudera cluster.
>>>>>
>>>>>
>>>>>
>>>>> So, as per the docs we’ve changed the maximum attempts in YARN’s
> config as well as in flink.yaml.
>>>>>
>>>>>
>>>>>
>>>>> recovery.mode: zookeeper
>>>>>
>>>>> recovery.zookeeper.quorum: host1:3181,host2:3181,host3:3181
>>>>>
>>>>> state.backend: filesystem
>>>>>
>>>>> state.backend.fs.checkpointdir: hdfs:///flink/checkpoints
>>>>>
>>>>> recovery.zookeeper.storageDir: hdfs:///flink/recovery/
>>>>>
>>>>> yarn.application-attempts: 1000
>>>>>
>>>>>
>>>>>
>>>>> Everything is OK as long as recovery.mode is commented out.
>>>>>
>>>>> As soon as I uncomment recovery.mode, the deployment on YARN gets
> stuck on:
>>>>>
>>>>>
>>>>>
>>>>> “Deploying cluster, current state ACCEPTED”.
>>>>>
>>>>> “Deployment took more than 60 seconds….”
>>>>>
>>>>> Every second.
>>>>>
>>>>>
>>>>>
>>>>> And I have more than enough resources available on my yarn cluster.
>>>>>
>>>>>
>>>>>
>>>>> Do you have any idea of what could cause this, and/or what logs I
> should look at in order to understand?
>>>>>
>>>>>
>>>>>
>>>>> B.R.
>>>>>
>>>>>
>>>>>
>>>>> Gwenhaël PASQUIERS
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>> <unwanted_jobs.jpg>
> 
>  
> 

-- 
Konstantin Knauf * konstantin.kn...@tngtech.com * +49-174-3413182
TNG Technology Consulting GmbH, Betastr. 13a, 85774 Unterföhring
Geschäftsführer: Henrik Klagges, Christoph Stock, Dr. Robert Dahlke
Sitz: Unterföhring * Amtsgericht München * HRB 135082
2016-03-31 09:01:39,039 INFO  org.apache.flink.yarn.ApplicationMaster                       - YARN daemon runs as yarn setting user to execute Flink ApplicationMaster/JobManager to bigdata
2016-03-31 09:01:39,043 INFO  org.apache.flink.yarn.ApplicationMaster                       - --------------------------------------------------------------------------------
2016-03-31 09:01:39,043 INFO  org.apache.flink.yarn.ApplicationMaster                       -  Starting YARN ApplicationMaster/JobManager (Version: 1.0.0, Rev:94cd554, Date:03.03.2016 @ 08:34:27 UTC)
2016-03-31 09:01:39,043 INFO  org.apache.flink.yarn.ApplicationMaster                       -  Current user: yarn
2016-03-31 09:01:39,043 INFO  org.apache.flink.yarn.ApplicationMaster                       -  JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.7/24.51-b03
2016-03-31 09:01:39,043 INFO  org.apache.flink.yarn.ApplicationMaster                       -  Maximum heap size: 377 MiBytes
2016-03-31 09:01:39,043 INFO  org.apache.flink.yarn.ApplicationMaster                       -  JAVA_HOME: /usr/java/default
2016-03-31 09:01:39,044 INFO  org.apache.flink.yarn.ApplicationMaster                       -  Hadoop version: 2.4.1
2016-03-31 09:01:39,044 INFO  org.apache.flink.yarn.ApplicationMaster                       -  JVM Options:
2016-03-31 09:01:39,044 INFO  org.apache.flink.yarn.ApplicationMaster                       -     -Xmx424M
2016-03-31 09:01:39,044 INFO  org.apache.flink.yarn.ApplicationMaster                       -     -Dlog.file=/opt/app/bigdata/hadoop/yarn/log/application_1459325529804_0055/container_1459325529804_0055_01_000001/jobmanager.log
2016-03-31 09:01:39,044 INFO  org.apache.flink.yarn.ApplicationMaster                       -     -Dlogback.configurationFile=file:logback.xml
2016-03-31 09:01:39,045 INFO  org.apache.flink.yarn.ApplicationMaster                       -     -Dlog4j.configuration=file:log4j.properties
2016-03-31 09:01:39,045 INFO  org.apache.flink.yarn.ApplicationMaster                       -  Program Arguments: (none)
2016-03-31 09:01:39,045 INFO  org.apache.flink.yarn.ApplicationMaster                       -  Classpath: /opt/app/bigdata/hadoop/yarn/local/usercache/bigdata/appcache/application_1459325529804_0055/container_1459325529804_0055_01_000001/flink.jar:/etc/hadoop/conf:/usr/lib/hadoop/hadoop-auth.jar:/usr/lib/hadoop/hadoop-nfs.jar:/usr/lib/hadoop/hadoop-common-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop/hadoop-annotations.jar:/usr/lib/hadoop/hadoop-nfs-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop/hadoop-annotations-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop/hadoop-common-2.4.0.2.1.3.0-563-tests.jar:/usr/lib/hadoop/hadoop-auth-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop/hadoop-common.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/jersey-json-1.9.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop/lib/postgresql-9.1-901-1.jdbc4.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/ambari-log4j-1.6.1.98.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/usr/lib/hadoop/lib/jackson-core-2.2.3.jar:/usr/lib/hadoop/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/zookeeper-3.4.5.2.1.3.0-563.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/httpcore-4.2.5.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/lib/junit-4.8.2.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.5.jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/hadoop/lib/httpclient-4.2.5.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop/lib/prot
obuf-java-2.5.0.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/jersey-server-1.9.jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop/lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/jetty-6.1.26.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs-nfs.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs-2.4.0.2.1.3.0-563-tests.jar:/usr/lib/hadoop-hdfs/hadoop-hdfs-nfs-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:
/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop-hdfs/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/hadoop-yarn-client-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-tests-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-web-proxy-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell.jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-nodemanager-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-yarn/hadoop-yarn-api-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-yarn/hadoop-yarn-api.jar:/usr/lib/hadoop-yarn/hadoop-yarn-client.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-nodemanager.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop-yarn/hadoop-yarn-common.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/usr/lib/hadoop-yarn/hadoop-yarn-common-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-common-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/asm-3
.2.jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/hadoop-yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-yarn/lib/guice-3.0.jar:/usr/lib/hadoop-yarn/lib/zookeeper-3.4.5.2.1.3.0-563.jar:/usr/lib/hadoop-yarn/lib/jline-0.9.94.jar:/usr/lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/hadoop-yarn/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-yarn/lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-yarn/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop-yarn/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-yarn/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/xz-1.0.jar:/usr/lib/hadoop-yarn/lib/jetty-6.1.26.jar:/usr/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-mapreduce/jaxb-api-2.2.2.jar:/usr/lib/hadoop-mapreduce/hadoop-gridmix-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-mapreduce/jersey-json-1.9.jar:/usr/lib/hadoop-mapreduce/asm-3.2.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs.jar:/usr/lib/hadoop-mapreduce/hadoop-openstack-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-mapreduce/hadoop-sls.j
ar:/usr/lib/hadoop-mapreduce/metrics-core-3.0.0.jar:/usr/lib/hadoop-mapreduce/hadoop-auth.jar:/usr/lib/hadoop-mapreduce/hadoop-rumen-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-mapreduce/commons-cli-1.2.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-core-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-mapreduce/hadoop-streaming.jar:/usr/lib/hadoop-mapreduce/xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/commons-el-1.0.jar:/usr/lib/hadoop-mapreduce/commons-net-3.1.jar:/usr/lib/hadoop-mapreduce/jettison-1.1.jar:/usr/lib/hadoop-mapreduce/jsch-0.1.42.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-common.jar:/usr/lib/hadoop-mapreduce/jets3t-0.9.0.jar:/usr/lib/hadoop-mapreduce/jackson-core-2.2.3.jar:/usr/lib/hadoop-mapreduce/commons-math3-3.1.1.jar:/usr/lib/hadoop-mapreduce/hadoop-extras.jar:/usr/lib/hadoop-mapreduce/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/junit-4.10.jar:/usr/lib/hadoop-mapreduce/zookeeper-3.4.5.2.1.3.0-563.jar:/usr/lib/hadoop-mapreduce/activation-1.1.jar:/usr/lib/hadoop-mapreduce/commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-mapreduce/httpcore-4.2.5.jar:/usr/lib/hadoop-mapreduce/hadoop-distcp-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-mapreduce/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/hadoop-distcp.jar:/usr/lib/hadoop-mapreduce/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-mapreduce/hadoop-archives.jar:/usr/lib/hadoop-mapreduce/commons-collections-3.2.1.jar:/usr/lib/hadoop-mapreduce/commons-lang-2.6.jar:/usr/lib/hadoop-mapreduce/httpclient-4.2.5.jar:/usr/lib/hadoop-mapreduce/hadoop-rumen.jar:/usr/lib/hadoop-mapreduce/commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/hadoop-openstack.jar:/usr/lib/hadoop-mapreduce/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/lib/hadoop-mapreduce/java-xmlbuilder-0.4.jar:/usr/lib/hadoop-mapreduce/protobuf-java-2.5.0.jar:/usr/lib/hadoop-ma
preduce/jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient.jar:/usr/lib/hadoop-mapreduce/paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/hadoop-datajoin.jar:/usr/lib/hadoop-mapreduce/hadoop-datajoin-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-mapreduce/hadoop-auth-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-core.jar:/usr/lib/hadoop-mapreduce/hadoop-sls-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-shuffle-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-mapreduce/commons-configuration-1.6.jar:/usr/lib/hadoop-mapreduce/guava-11.0.2.jar:/usr/lib/hadoop-mapreduce/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-common-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-mapreduce/jetty-util-6.1.26.jar:/usr/lib/hadoop-mapreduce/jsr305-1.3.9.jar:/usr/lib/hadoop-mapreduce/commons-beanutils-1.7.0.jar:/usr/lib/hadoop-mapreduce/hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/commons-httpclient-3.1.jar:/usr/lib/hadoop-mapreduce/jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-app-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-mapreduce/hamcrest-core-1.1.jar:/usr/lib/hadoop-mapreduce/hadoop-archives-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-mapreduce/jackson-xc-1.8.8.jar:/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.4.0.2.1.3.0-563-tests.jar:/usr/lib/hadoop-mapreduce/log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/commons-digester-1.8.jar:/usr/lib/hadoop-mapreduce/jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/commons-compress-1.4.1.jar:/usr/lib/hadoop-mapreduce/hadoop-extras-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-mapreduce/avro-1.7.4.jar:/usr/lib/hadoop-mapreduce/jackson-core-asl-1.8.8.jar:/usr/lib/had
oop-mapreduce/hadoop-mapreduce-client-app.jar:/usr/lib/hadoop-mapreduce/jasper-compiler-5.5.23.jar:/usr/lib/hadoop-mapreduce/stax-api-1.0-2.jar:/usr/lib/hadoop-mapreduce/xz-1.0.jar:/usr/lib/hadoop-mapreduce/hadoop-streaming-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop-mapreduce/jetty-6.1.26.jar:/usr/lib/hadoop-mapreduce/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-mapreduce/commons-logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-mapreduce/lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/usr/lib/hadoop-mapreduce/lib/junit-4.10.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/lib/hamcrest-core-1.1.jar:/usr/lib/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/lib/jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-mapreduce/lib/avro-1.7.4.jar:/usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/lib/xz-1.0.jar
2016-03-31 09:01:39,045 INFO  org.apache.flink.yarn.ApplicationMaster                       - --------------------------------------------------------------------------------
2016-03-31 09:01:39,046 INFO  org.apache.flink.yarn.ApplicationMaster                       - Registered UNIX signal handlers for [TERM, HUP, INT]
2016-03-31 09:01:39,061 INFO  org.apache.flink.yarn.ApplicationMaster                       - Loading config from: /opt/app/bigdata/hadoop/yarn/local/usercache/bigdata/appcache/application_1459325529804_0055/container_1459325529804_0055_01_000001.
2016-03-31 09:01:39,116 INFO  org.apache.flink.runtime.jobmanager.JobManager                - Starting JobManager
2016-03-31 09:01:39,123 INFO  org.apache.flink.runtime.jobmanager.JobManager                - Starting JobManager actor system at 10.127.68.136:50215
2016-03-31 09:01:39,535 INFO  akka.event.slf4j.Slf4jLogger                                  - Slf4jLogger started
2016-03-31 09:01:39,587 INFO  Remoting                                                      - Starting remoting
2016-03-31 09:01:39,739 INFO  Remoting                                                      - Remoting started; listening on addresses :[akka.tcp://flink@10.127.68.136:50215]
2016-03-31 09:01:39,746 INFO  org.apache.flink.runtime.jobmanager.JobManager                - Starting JobManger web frontend
2016-03-31 09:01:39,750 INFO  org.apache.flink.runtime.util.ZooKeeperUtils                  - Using '/ledf_recovery' as root namespace.
2016-03-31 09:01:39,826 INFO  org.apache.flink.shaded.org.apache.curator.framework.imps.CuratorFrameworkImpl  - Starting
2016-03-31 09:01:39,842 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2016-03-31 09:01:39,842 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:host.name=<host3>
2016-03-31 09:01:39,842 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:java.version=1.7.0_51
2016-03-31 09:01:39,842 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:java.vendor=Oracle Corporation
2016-03-31 09:01:39,842 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:java.home=/usr/java/jdk1.7.0_51/jre
2016-03-31 09:01:39,843 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:java.class.path=/opt/app/bigdata/hadoop/yarn/local/usercache/bigdata/appcache/application_1459325529804_0055/container_1459325529804_0055_01_000001/flink.jar:/etc/hadoop/conf:/usr/lib/hadoop/hadoop-auth.jar:/usr/lib/hadoop/hadoop-nfs.jar:/usr/lib/hadoop/hadoop-common-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop/hadoop-annotations.jar:/usr/lib/hadoop/hadoop-nfs-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop/hadoop-annotations-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop/hadoop-common-2.4.0.2.1.3.0-563-tests.jar:/usr/lib/hadoop/hadoop-auth-2.4.0.2.1.3.0-563.jar:/usr/lib/hadoop/hadoop-common.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/jersey-json-1.9.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop/lib/postgresql-9.1-901-1.jdbc4.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/ambari-log4j-1.6.1.98.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/usr/lib/hadoop/lib/jackson-core-2.2.3.jar:/usr/lib/hadoop/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/zookeeper-3.4.5.2.1.3.0-563.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/httpcore-4.2.5.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/lib/junit-4.8.2.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.5.jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/hadoop/lib/httpclient-4.2.5.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/
[... remainder of java.class.path elided for brevity: Hadoop, HDFS, YARN and MapReduce jars under /usr/lib/hadoop*, including zookeeper-3.4.5.2.1.3.0-563.jar ...]
2016-03-31 09:01:39,843 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:java.library.path=::/usr/lib/hadoop/lib/native/Linux-amd64-64:/usr/lib/hadoop/lib/native::/usr/lib/hadoop/lib/native/Linux-amd64-64:/usr/lib/hadoop/lib/native:/usr/lib/hadoop/lib/native/Linux-amd64-64:/usr/lib/hadoop/lib/native:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2016-03-31 09:01:39,843 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:java.io.tmpdir=/tmp
2016-03-31 09:01:39,843 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:java.compiler=<NA>
2016-03-31 09:01:39,843 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:os.name=Linux
2016-03-31 09:01:39,843 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:os.arch=amd64
2016-03-31 09:01:39,843 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:os.version=2.6.32-504.8.1.el6.x86_64
2016-03-31 09:01:39,843 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:user.name=yarn
2016-03-31 09:01:39,843 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:user.home=/home/yarn
2016-03-31 09:01:39,843 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:user.dir=/opt/app/bigdata/hadoop/yarn/local/usercache/bigdata/appcache/application_1459325529804_0055/container_1459325529804_0055_01_000001
2016-03-31 09:01:39,844 INFO  org.apache.zookeeper.ZooKeeper                                - Initiating client connection, connectString=<host1>:2181,<host2>:2181,<host3>:2181 sessionTimeout=60000 watcher=org.apache.flink.shaded.org.apache.curator.ConnectionState@b067755
2016-03-31 09:01:39,865 INFO  org.apache.zookeeper.ClientCnxn                               - Opening socket connection to server <host2>/10.127.68.135:2181. Will not attempt to authenticate using SASL (unknown error)
2016-03-31 09:01:39,867 INFO  org.apache.zookeeper.ClientCnxn                               - Socket connection established to <host2>/10.127.68.135:2181, initiating session
2016-03-31 09:01:39,873 INFO  org.apache.zookeeper.ClientCnxn                               - Session establishment complete on server <host2>/10.127.68.135:2181, sessionid = 0x25229757cff1afd, negotiated timeout = 40000
2016-03-31 09:01:39,877 INFO  org.apache.flink.shaded.org.apache.curator.framework.state.ConnectionStateManager  - State change: CONNECTED
2016-03-31 09:01:40,892 INFO  org.apache.flink.runtime.webmonitor.WebMonitorUtils           - Determined location of JobManager log file: /opt/app/bigdata/hadoop/yarn/log/application_1459325529804_0055/container_1459325529804_0055_01_000001/jobmanager.log
2016-03-31 09:01:40,892 INFO  org.apache.flink.runtime.webmonitor.WebMonitorUtils           - Determined location of JobManager stdout file: /opt/app/bigdata/hadoop/yarn/log/application_1459325529804_0055/container_1459325529804_0055_01_000001/jobmanager.out
2016-03-31 09:01:40,898 INFO  org.apache.flink.runtime.webmonitor.WebRuntimeMonitor         - Using directory /tmp/flink-web-99a7fac8-4887-4514-a7bf-3d8623c93c1d for the web interface files
2016-03-31 09:01:40,898 INFO  org.apache.flink.runtime.webmonitor.WebRuntimeMonitor         - Using directory /tmp/flink-web-upload-34933039-60b7-4c98-9431-7482ed5baa76 for web frontend JAR file uploads
2016-03-31 09:01:41,107 INFO  org.apache.flink.runtime.webmonitor.WebRuntimeMonitor         - Web frontend listening at 0.0.0.0:37113
2016-03-31 09:01:41,108 INFO  org.apache.flink.runtime.jobmanager.JobManager                - Starting JobManager actor
2016-03-31 09:01:41,112 INFO  org.apache.flink.runtime.blob.BlobServer                      - Created BLOB server storage directory /tmp/blobStore-cfc837c1-b351-4564-b73c-d0014cead8c8
2016-03-31 09:01:41,662 INFO  org.apache.flink.runtime.blob.FileSystemBlobStore             - Created blob directory hdfs:///bigdata/flink/recovery/blob.
2016-03-31 09:01:41,662 INFO  org.apache.flink.runtime.blob.BlobServer                      - Started BLOB server at 0.0.0.0:35939 - max concurrent requests: 50 - max backlog: 1000
2016-03-31 09:01:41,671 INFO  org.apache.flink.runtime.util.ZooKeeperUtils                  - Using '/ledf_recovery' as root namespace.
2016-03-31 09:01:41,671 INFO  org.apache.flink.shaded.org.apache.curator.framework.imps.CuratorFrameworkImpl  - Starting
2016-03-31 09:01:41,672 INFO  org.apache.zookeeper.ZooKeeper                                - Initiating client connection, connectString=<host1>:2181,<host2>:2181,<host3>:2181 sessionTimeout=60000 watcher=org.apache.flink.shaded.org.apache.curator.ConnectionState@4155a5c7
2016-03-31 09:01:41,674 INFO  org.apache.zookeeper.ClientCnxn                               - Opening socket connection to server <host3>/10.127.68.136:2181. Will not attempt to authenticate using SASL (unknown error)
2016-03-31 09:01:41,674 INFO  org.apache.zookeeper.ClientCnxn                               - Socket connection established to <host3>/10.127.68.136:2181, initiating session
2016-03-31 09:01:41,676 INFO  org.apache.zookeeper.ClientCnxn                               - Session establishment complete on server <host3>/10.127.68.136:2181, sessionid = 0x351ed39bcbc1a48, negotiated timeout = 40000
2016-03-31 09:01:41,676 INFO  org.apache.flink.shaded.org.apache.curator.framework.state.ConnectionStateManager  - State change: CONNECTED
2016-03-31 09:01:41,693 INFO  org.apache.flink.runtime.checkpoint.SavepointStoreFactory     - Using filesystem savepoint backend (root path: hdfs:///bigdata/flink/savepoints).
2016-03-31 09:01:41,701 INFO  org.apache.flink.runtime.jobmanager.MemoryArchivist           - Started memory archivist akka://flink/user/archive
2016-03-31 09:01:41,721 INFO  org.apache.flink.yarn.YarnJobManager                          - Starting JobManager at akka.tcp://flink@10.127.68.136:50215/user/jobmanager.
2016-03-31 09:01:41,721 INFO  org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionService  - Starting ZooKeeperLeaderElectionService.
2016-03-31 09:01:41,723 INFO  org.apache.flink.runtime.webmonitor.WebRuntimeMonitor         - Starting with JobManager akka.tcp://flink@10.127.68.136:50215/user/jobmanager on port 37113
2016-03-31 09:01:41,723 INFO  org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService  - Starting ZooKeeperLeaderRetrievalService.
2016-03-31 09:01:41,729 INFO  org.apache.flink.yarn.ApplicationMaster                       - Generate configuration file for application master.
2016-03-31 09:01:41,741 INFO  org.apache.flink.yarn.ApplicationMaster                       - Starting YARN session on Job Manager.
2016-03-31 09:01:41,742 INFO  org.apache.flink.yarn.ApplicationMaster                       - Application Master properly initiated. Awaiting termination of actor system.
2016-03-31 09:01:41,750 INFO  org.apache.flink.yarn.YarnJobManager                          - Start yarn session.
2016-03-31 09:01:41,795 INFO  org.apache.flink.yarn.YarnJobManager                          - Yarn session with 2 TaskManagers. Tolerating 2 failed TaskManagers
2016-03-31 09:01:41,815 INFO  org.apache.hadoop.yarn.client.RMProxy                         - Connecting to ResourceManager at <host1>/10.127.68.134:8030
2016-03-31 09:01:41,838 INFO  org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy  - yarn.client.max-nodemanagers-proxies : 500
2016-03-31 09:01:41,839 INFO  org.apache.flink.yarn.YarnJobManager                          - Registering ApplicationMaster with tracking url http://<host3>:37113.
2016-03-31 09:01:41,956 INFO  org.apache.flink.yarn.YarnJobManager                          - Retrieved 0 TaskManagers from previous attempts.
2016-03-31 09:01:41,961 INFO  org.apache.flink.yarn.YarnJobManager                          - Requesting initial TaskManager container 0.
2016-03-31 09:01:41,965 INFO  org.apache.flink.yarn.YarnJobManager                          - Requesting initial TaskManager container 1.
2016-03-31 09:01:41,989 INFO  org.apache.flink.yarn.Utils                                   - Copying from file:/opt/app/bigdata/hadoop/yarn/local/usercache/bigdata/appcache/application_1459325529804_0055/container_1459325529804_0055_01_000001/flink-conf-modified.yaml to hdfs://<host1>:8020/user/bigdata/.flink/application_1459325529804_0055/flink-conf-modified.yaml
2016-03-31 09:01:42,190 INFO  org.apache.flink.yarn.YarnJobManager                          - Prepared local resource for modified yaml: resource { scheme: "hdfs" host: "<host1>" port: 8020 file: "/user/bigdata/.flink/application_1459325529804_0055/flink-conf-modified.yaml" } size: 5701 timestamp: 1459407702132 type: FILE visibility: APPLICATION
2016-03-31 09:01:42,194 INFO  org.apache.flink.yarn.YarnJobManager                          - Create container launch context.
2016-03-31 09:01:42,202 INFO  org.apache.flink.yarn.YarnJobManager                          - Starting TM with command=$JAVA_HOME/bin/java -Xms2286m -Xmx2286m -XX:MaxDirectMemorySize=2286m  -Dlog.file="<LOG_DIR>/taskmanager.log" -Dlogback.configurationFile=file:logback.xml -Dlog4j.configuration=file:log4j.properties org.apache.flink.yarn.YarnTaskManagerRunner --configDir . 1> <LOG_DIR>/taskmanager.out 2> <LOG_DIR>/taskmanager.err
2016-03-31 09:01:42,205 INFO  org.apache.flink.yarn.YarnJobManager                          - JobManager akka.tcp://flink@10.127.68.136:50215/user/jobmanager was granted leadership with leader session ID Some(6fcaa70c-e471-42b2-80f8-39a641207944).
2016-03-31 09:01:42,215 INFO  org.apache.flink.yarn.YarnJobManager                          - Delaying recovery of all jobs by 10000 milliseconds.
2016-03-31 09:01:42,221 INFO  org.apache.flink.runtime.webmonitor.JobManagerRetriever       - New leader reachable under akka.tcp://flink@10.127.68.136:50215/user/jobmanager:6fcaa70c-e471-42b2-80f8-39a641207944.
2016-03-31 09:01:42,990 INFO  org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl         - Received new token for : <host2>:45454
2016-03-31 09:01:42,990 INFO  org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl         - Received new token for : <host4>:45454
2016-03-31 09:01:42,996 INFO  org.apache.flink.yarn.YarnJobManager                          - Got new container for allocation: container_1459325529804_0055_01_000002
2016-03-31 09:01:42,996 INFO  org.apache.flink.yarn.YarnJobManager                          - Got new container for allocation: container_1459325529804_0055_01_000003
2016-03-31 09:01:42,997 INFO  org.apache.flink.yarn.YarnJobManager                          - The user requested 2 containers, 0 running. 2 containers missing
2016-03-31 09:01:42,997 INFO  org.apache.flink.yarn.YarnJobManager                          - 2 containers already allocated by YARN. Starting...
2016-03-31 09:01:42,999 INFO  org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy  - Opening proxy : <host2>:45454
2016-03-31 09:01:43,035 INFO  org.apache.flink.yarn.YarnJobManager                          - Launching container (container_1459325529804_0055_01_000002 on host <host2>).
2016-03-31 09:01:43,038 INFO  org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy  - Opening proxy : <host4>:45454
2016-03-31 09:01:43,049 INFO  org.apache.flink.yarn.YarnJobManager                          - Launching container (container_1459325529804_0055_01_000003 on host <host4>).
2016-03-31 09:01:44,801 INFO  org.apache.flink.yarn.YarnJobManager                          - Register akka.tcp://flink@10.127.68.134:44656/user/applicationClient as client.
2016-03-31 09:01:46,777 INFO  org.apache.flink.runtime.instance.InstanceManager             - Registered TaskManager at <host4> (akka.tcp://flink@10.127.68.137:38215/user/taskmanager) as 648de54fb9b6b90bbfa3ca8646170e25. Current number of registered hosts is 1. Current number of alive task slots is 4.
2016-03-31 09:01:47,881 INFO  org.apache.flink.runtime.instance.InstanceManager             - Registered TaskManager at <host2> (akka.tcp://flink@10.127.68.135:45126/user/taskmanager) as 1d88ff388424e96bf4055101af9e9918. Current number of registered hosts is 2. Current number of alive task slots is 8.
2016-03-31 09:01:52,231 INFO  org.apache.flink.yarn.YarnJobManager                          - Attempting to recover all jobs.
2016-03-31 09:01:52,233 INFO  org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGraphStore  - No job graph to recover.
2016-03-31 09:01:52,234 INFO  org.apache.flink.yarn.YarnJobManager                          - Re-submitting 0 job graphs.
2016-03-31 09:02:51,182 INFO  org.apache.flink.yarn.YarnJobManager                          - Stopping YARN JobManager with status FAILED and diagnostic Flink YARN Client requested shutdown.
2016-03-31 09:02:51,187 INFO  org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl         - Waiting for application to be successfully unregistered.
2016-03-31 09:02:51,291 INFO  org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl  - Interrupted while waiting for queue
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2052)
	at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
	at org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:275)
2016-03-31 09:02:51,338 INFO  org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy  - Closing proxy : <host2>:45454
2016-03-31 09:02:51,339 INFO  org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy  - Closing proxy : <host4>:45454
2016-03-31 09:02:51,342 INFO  org.apache.flink.yarn.YarnJobManager                          - Stopping JobManager akka.tcp://flink@10.127.68.136:50215/user/jobmanager.
2016-03-31 09:02:51,344 INFO  org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionService  - Stopping ZooKeeperLeaderElectionService.
2016-03-31 09:02:51,352 INFO  org.apache.zookeeper.ZooKeeper                                - Session: 0x351ed39bcbc1a48 closed
2016-03-31 09:02:51,352 INFO  org.apache.zookeeper.ClientCnxn                               - EventThread shut down
2016-03-31 09:02:51,406 INFO  org.apache.flink.runtime.blob.BlobServer                      - Stopped BLOB server at 0.0.0.0:35939
2016-03-31 09:02:51,409 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator         - Shutting down remote daemon.
2016-03-31 09:02:51,411 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator         - Remote daemon shut down; proceeding with flushing remote transports.
2016-03-31 09:02:51,426 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator         - Remoting shut down.
2016-03-31 09:02:51,444 INFO  org.apache.flink.yarn.YarnJobManager                          - Shutdown completed. Stopping JVM.
2016-03-31 09:02:51,445 INFO  org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService  - Stopping ZooKeeperLeaderRetrievalService.
2016-03-31 09:02:51,445 INFO  org.apache.flink.runtime.webmonitor.WebRuntimeMonitor         - Removing web dashboard root cache directory /tmp/flink-web-99a7fac8-4887-4514-a7bf-3d8623c93c1d
2016-03-31 09:02:51,447 INFO  org.apache.zookeeper.ZooKeeper                                - Session: 0x25229757cff1afd closed
2016-03-31 09:02:51,447 INFO  org.apache.zookeeper.ClientCnxn                               - EventThread shut down
2016-03-31 09:02:51,448 INFO  org.apache.flink.runtime.webmonitor.StackTraceSampleCoordinator  - Shutting down stack trace sample coordinator.
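For anyone hitting the same stuck HA state, here is a minimal cleanup sketch following Till's advice earlier in the thread (delete everything below your `recovery.zookeeper.path.root` ZNode, with the cluster stopped). The connect string placeholders and the `/ledf_recovery` root are taken from the log above; `rmr` is the recursive-delete command of the ZooKeeper 3.4 CLI (renamed `deleteall` in 3.5+). The script only prints the command as a dry run; drop the leading `echo` to actually execute it.

```shell
#!/bin/sh
# Sketch: recursively delete a stale Flink HA root znode so a fresh
# per-job YARN session does not pick up leftover leader/jobgraph state.
# Values below are placeholders copied from the attached jobmanager.log.

ZK_CONNECT="<host1>:2181,<host2>:2181,<host3>:2181"
HA_ROOT="/ledf_recovery"   # recovery.zookeeper.path.root used by this session

# Dry run: print the zkCli invocation instead of running it.
echo zkCli.sh -server "$ZK_CONNECT" rmr "$HA_ROOT"
```

Only do this after confirming no cluster is still using that root, otherwise the running JobManager will lose its leader latch and job graphs.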
