Hi all:

I installed the Hadoop-free build of Flink 1.7.2 on Azure, running against Hadoop 2.7.

When I submit a Flink job to YARN like this:

 bin/flink run -m yarn-cluster -yn 2 ./examples/batch/WordCount.jar

the following exception is thrown:

 org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy Yarn session cluster
        at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:423)
        at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:259)
        at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
        at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
        at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
        at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
        at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
Caused by: org.apache.flink.yarn.AbstractYarnClusterDescriptor$YarnDeploymentException: The YARN application unexpectedly switched to state FAILED during deployment.
Diagnostics from YARN: Application application_1554108305028_0026 failed 1 times (global limit =5; local limit is =1) due to AM Container for appattempt_1554108305028_0026_000001 exited with exitCode: -1000
For more detailed output, check the application tracking page: http://hn1-xxx/cluster/app/application_1554108305028_0026 Then click on links to logs of each attempt.
Diagnostics: Resource wasb://flink-test-xxx.net/user/flink/.flink/application_1554108305028_0026/lib/slf4j-log4j12-1.7.15.jar changed on src filesystem (expected 1549895572000, was 1554313403000
java.io.IOException: Resource wasb://flink-test-xxx.net/user/flink/.flink/application_1554108305028_0026/lib/slf4j-log4j12-1.7.15.jar changed on src filesystem (expected 1549895572000, was 1554313403000
        at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:257)
        at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
        at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
        at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
        at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:359)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:228)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:221)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:209)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
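
For what it's worth, the two numbers in the diagnostics appear to be epoch milliseconds. Decoding them (a quick sketch, assuming GNU `date`; the values and the jar path are copied from the error above) suggests the "expected" time is the jar's original modification time from February, while the "was" time is from around when the job was submitted:

```shell
# Decode the two timestamps from the YARN diagnostics (epoch milliseconds).
expected=1549895572000  # timestamp YARN recorded when the client uploaded the jar
actual=1554313403000    # timestamp the NodeManager saw while localizing it

date -u -d "@$((expected / 1000))" +%F  # 2019-02-11 (the jar's original mtime)
date -u -d "@$((actual / 1000))" +%F    # 2019-04-03 (presumably around submission time)

# The file's current modification time on WASB can be checked with:
# hadoop fs -stat "%Y" wasb://flink-test-xxx.net/user/flink/.flink/application_1554108305028_0026/lib/slf4j-log4j12-1.7.15.jar
```

So it looks like the modification time of the uploaded jar on WASB does not match the one the Flink client registered with YARN, which is why localization aborts with exitCode -1000.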



Can anybody help?
