Hi Hanjing,

*There may be both Flink jobs and flink-storm jobs in my cluster; I don't
know how legacy mode would affect them.*

> For Storm-compatible jobs, due to technical limitations, you need to
use a cluster running in legacy mode.
   But for jobs implemented with Flink's own APIs, I strongly
recommend the new mode,
   because it is a major rework of the old model and you will get a
more timely response if you run into problems.
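
For reference, the switch being discussed is a single entry in flink-conf.yaml. This is a sketch for Flink 1.6, where `legacy` and `new` are the two accepted values and `new` is the default; the Storm compatibility layer currently requires the former:

```yaml
# flink-conf.yaml
# Required for flink-storm (Storm compatibility) jobs on Flink 1.6:
mode: legacy

# The default, recommended for jobs written against Flink's own APIs:
# mode: new
```

The cluster has to be restarted for the change to take effect, since the configuration is read at startup.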

Thanks, vino.

jing <hanjingz...@163.com> wrote on Tue, Sep 11, 2018 at 6:02 PM:

> Hi Till,
>     Legacy mode worked!
>     Thanks a lot. What's the difference between legacy and new mode? Is
> there any documentation or a release note?
>     There may be both Flink jobs and flink-storm jobs in my cluster; I don't
> know how legacy mode would affect them.
>
> Hanjing
>
> On 9/11/2018 14:43, Till Rohrmann <trohrm...@apache.org> wrote:
>
> Hi Hanjing,
>
> I think the problem is that the Storm compatibility layer only works with
> legacy mode at the moment. Please set `mode: legacy` in your
> flink-conf.yaml. I hope this will resolve the problems.
>
> Cheers,
> Till
>
> On Tue, Sep 11, 2018 at 7:10 AM jing <hanjingz...@163.com> wrote:
>
>> Hi vino,
>> Thank you very much.
>> I'll try more tests.
>>
>> Hanjing
>>
>> On 9/11/2018 11:51, vino yang <yanghua1...@gmail.com> wrote:
>>
>> Hi Hanjing,
>>
>> Flink does not currently support TaskManager HA; it only supports
>> JobManager HA.
>> In a standalone environment, once the JobManager fails over,
>> all jobs are cancelled and restarted as well.
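
For completeness, the JobManager HA mentioned above is configured in flink-conf.yaml roughly as follows. This is a sketch only; the ZooKeeper quorum and storage path below are placeholder values, not taken from this thread:

```yaml
# flink-conf.yaml -- JobManager HA for a standalone cluster (via ZooKeeper)
high-availability: zookeeper
high-availability.zookeeper.quorum: zk1:2181,zk2:2181,zk3:2181
high-availability.storageDir: hdfs:///flink/ha/
```

The standby JobManager hosts are then listed in conf/masters. Note that this covers JobManager failover only; there is no standby-TaskManager mechanism.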
>>
>> Thanks, vino.
>>
>> jing <hanjingz...@163.com> wrote on Tue, Sep 11, 2018 at 11:12 AM:
>>
>>> Hi vino,
>>> Thanks a lot.
>>> Besides, I'm also confused about TaskManager HA.
>>> There are 2 TaskManagers in my cluster, and a single job A ran on
>>> TaskManager A. If TaskManager A crashes, what happens to my job?
>>> I tried it: my job failed, and TaskManager B did not take over job A.
>>> Is this expected?
>>>
>>> Hanjing
>>>
>>> On 9/11/2018 10:59, vino yang <yanghua1...@gmail.com> wrote:
>>>
>>> Oh, I thought the Flink job could not be submitted. I don't know why the
>>> Storm example cannot be submitted, because I have never used it.
>>>
>>> Maybe Till, Chesnay, or Gary can help you; pinging them for you.
>>>
>>> Thanks, vino.
>>>
>>> jing <hanjingz...@163.com> wrote on Tue, Sep 11, 2018 at 10:26 AM:
>>>
>>>> Hi vino,
>>>> My JobManager log is below. I can submit a regular Flink job to this
>>>> JobManager and it works, but the flink-storm example does not.
>>>> Thanks.
>>>> Hanjing
>>>>
>>>> 2018-09-11 18:22:48,937 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - 
>>>> --------------------------------------------------------------------------------
>>>> 2018-09-11 18:22:48,938 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -  Starting 
>>>> StandaloneSessionClusterEntrypoint (Version: 1.6.0, Rev:ff472b4, 
>>>> Date:07.08.2018 @ 13:31:13 UTC)
>>>> 2018-09-11 18:22:48,938 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -  OS 
>>>> current user: hadoop3
>>>> 2018-09-11 18:22:49,143 WARN  org.apache.hadoop.util.NativeCodeLoader      
>>>>                  - Unable to load native-hadoop library for your 
>>>> platform... using builtin-java classes where applicable
>>>> 2018-09-11 18:22:49,186 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -  Current 
>>>> Hadoop/Kerberos user: hadoop3
>>>> 2018-09-11 18:22:49,186 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -  JVM: Java 
>>>> HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.172-b11
>>>> 2018-09-11 18:22:49,186 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -  Maximum 
>>>> heap size: 981 MiBytes
>>>> 2018-09-11 18:22:49,186 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -  
>>>> JAVA_HOME: /usr/java/jdk1.8.0_172-amd64
>>>> 2018-09-11 18:22:49,188 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -  Hadoop 
>>>> version: 2.7.5
>>>> 2018-09-11 18:22:49,188 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -  JVM 
>>>> Options:
>>>> 2018-09-11 18:22:49,188 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     
>>>> -Xms1024m
>>>> 2018-09-11 18:22:49,188 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     
>>>> -Xmx1024m
>>>> 2018-09-11 18:22:49,188 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     
>>>> -Dlog.file=/home/hadoop3/zh/flink-1.6.0/log/flink-hadoop3-standalonesession-0-p-a36-72.log
>>>> 2018-09-11 18:22:49,188 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     
>>>> -Dlog4j.configuration=file:/home/hadoop3/zh/flink-1.6.0/conf/log4j.properties
>>>> 2018-09-11 18:22:49,188 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     
>>>> -Dlogback.configurationFile=file:/home/hadoop3/zh/flink-1.6.0/conf/logback.xml
>>>> 2018-09-11 18:22:49,188 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -  Program 
>>>> Arguments:
>>>> 2018-09-11 18:22:49,188 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     
>>>> --configDir
>>>> 2018-09-11 18:22:49,188 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     
>>>> /home/hadoop3/zh/flink-1.6.0/conf
>>>> 2018-09-11 18:22:49,188 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     
>>>> --executionMode
>>>> 2018-09-11 18:22:49,189 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -     cluster
>>>> 2018-09-11 18:22:49,189 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         -  
>>>> Classpath: 
>>>> /home/hadoop3/zh/flink-1.6.0/lib/flink-python_2.11-1.6.0.jar:/home/hadoop3/zh/flink-1.6.0/lib/flink-shaded-hadoop2-uber-1.6.0.jar:/home/hadoop3/zh/flink-1.6.0/lib/log4j-1.2.17.jar:/home/hadoop3/zh/flink-1.6.0/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop3/zh/flink-1.6.0/lib/flink-dist_2.11-1.6.0.jar:::
>>>> 2018-09-11 18:22:49,189 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - 
>>>> --------------------------------------------------------------------------------
>>>> 2018-09-11 18:22:49,189 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Registered 
>>>> UNIX signal handlers for [TERM, HUP, INT]
>>>> 2018-09-11 18:22:49,197 INFO  
>>>> org.apache.flink.configuration.GlobalConfiguration            - Loading 
>>>> configuration property: jobmanager.rpc.address, p-a36-72
>>>> 2018-09-11 18:22:49,197 INFO  
>>>> org.apache.flink.configuration.GlobalConfiguration            - Loading 
>>>> configuration property: jobmanager.rpc.port, 6123
>>>> 2018-09-11 18:22:49,197 INFO  
>>>> org.apache.flink.configuration.GlobalConfiguration            - Loading 
>>>> configuration property: jobmanager.heap.size, 1024m
>>>> 2018-09-11 18:22:49,197 INFO  
>>>> org.apache.flink.configuration.GlobalConfiguration            - Loading 
>>>> configuration property: taskmanager.heap.size, 10240m
>>>> 2018-09-11 18:22:49,197 INFO  
>>>> org.apache.flink.configuration.GlobalConfiguration            - Loading 
>>>> configuration property: taskmanager.numberOfTaskSlots, 16
>>>> 2018-09-11 18:22:49,197 INFO  
>>>> org.apache.flink.configuration.GlobalConfiguration            - Loading 
>>>> configuration property: parallelism.default, 2
>>>> 2018-09-11 18:22:49,198 INFO  
>>>> org.apache.flink.configuration.GlobalConfiguration            - Loading 
>>>> configuration property: rest.port, 8081
>>>> 2018-09-11 18:22:49,207 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Starting 
>>>> StandaloneSessionClusterEntrypoint.
>>>> 2018-09-11 18:22:49,207 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Install 
>>>> default filesystem.
>>>> 2018-09-11 18:22:49,214 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Install 
>>>> security context.
>>>> 2018-09-11 18:22:49,237 INFO  
>>>> org.apache.flink.runtime.security.modules.HadoopModule        - Hadoop 
>>>> user set to hadoop3 (auth:SIMPLE)
>>>> 2018-09-11 18:22:49,247 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - 
>>>> Initializing cluster services.
>>>> 2018-09-11 18:22:49,253 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Trying to 
>>>> start actor system at p-a36-72:6123
>>>> 2018-09-11 18:22:49,576 INFO  akka.event.slf4j.Slf4jLogger                 
>>>>                  - Slf4jLogger started
>>>> 2018-09-11 18:22:49,611 INFO  akka.remote.Remoting                         
>>>>                  - Starting remoting
>>>> 2018-09-11 18:22:49,718 INFO  akka.remote.Remoting                         
>>>>                  - Remoting started; listening on addresses 
>>>> :[akka.tcp://flink@p-a36-72:6123]
>>>> 2018-09-11 18:22:49,722 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint         - Actor 
>>>> system started at akka.tcp://flink@p-a36-72:6123
>>>> 2018-09-11 18:22:49,732 WARN  org.apache.flink.configuration.Configuration 
>>>>                  - Config uses deprecated configuration key 
>>>> 'jobmanager.rpc.address' instead of proper key 'rest.address'
>>>> 2018-09-11 18:22:49,737 INFO  org.apache.flink.runtime.blob.BlobServer     
>>>>                  - Created BLOB server storage directory 
>>>> /tmp/blobStore-62c16996-0f38-43ae-9e40-ac4206329d93
>>>> 2018-09-11 18:22:49,739 INFO  org.apache.flink.runtime.blob.BlobServer     
>>>>                  - Started BLOB server at 0.0.0.0:3706 - max concurrent 
>>>> requests: 50 - max backlog: 1000
>>>> 2018-09-11 18:22:49,749 INFO  
>>>> org.apache.flink.runtime.metrics.MetricRegistryImpl           - No metrics 
>>>> reporter configured, no metrics will be exposed/reported.
>>>> 2018-09-11 18:22:49,751 INFO  
>>>> org.apache.flink.runtime.dispatcher.FileArchivedExecutionGraphStore  - 
>>>> Initializing FileArchivedExecutionGraphStore: Storage directory 
>>>> /tmp/executionGraphStore-fecb7e34-9d33-4af2-a623-ee96d8572800, expiration 
>>>> time 3600000, maximum cache size 52428800 bytes.
>>>> 2018-09-11 18:22:49,766 INFO  
>>>> org.apache.flink.runtime.blob.TransientBlobCache              - Created 
>>>> BLOB cache storage directory 
>>>> /tmp/blobStore-c1d1946d-9e19-40b1-800d-42598900e253
>>>> 2018-09-11 18:22:49,771 WARN  org.apache.flink.configuration.Configuration 
>>>>                  - Config uses deprecated configuration key 
>>>> 'jobmanager.rpc.address' instead of proper key 'rest.address'
>>>> 2018-09-11 18:22:49,772 WARN  
>>>> org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint    - Upload 
>>>> directory 
>>>> /tmp/flink-web-c1cc0dde-0f4b-458e-84ba-841f405f3c78/flink-web-upload does 
>>>> not exist, or has been deleted externally. Previously uploaded files are 
>>>> no longer available.
>>>> 2018-09-11 18:22:49,772 INFO  
>>>> org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint    - Created 
>>>> directory 
>>>> /tmp/flink-web-c1cc0dde-0f4b-458e-84ba-841f405f3c78/flink-web-upload for 
>>>> file uploads.
>>>> 2018-09-11 18:22:49,774 INFO  
>>>> org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint    - Starting 
>>>> rest endpoint.
>>>> 2018-09-11 18:22:49,905 INFO  
>>>> org.apache.flink.runtime.webmonitor.WebMonitorUtils           - Determined 
>>>> location of main cluster component log file: 
>>>> /home/hadoop3/zh/flink-1.6.0/log/flink-hadoop3-standalonesession-0-p-a36-72.log
>>>> 2018-09-11 18:22:49,905 INFO  
>>>> org.apache.flink.runtime.webmonitor.WebMonitorUtils           - Determined 
>>>> location of main cluster component stdout file: 
>>>> /home/hadoop3/zh/flink-1.6.0/log/flink-hadoop3-standalonesession-0-p-a36-72.out
>>>> 2018-09-11 18:22:49,997 INFO  
>>>> org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint    - Rest 
>>>> endpoint listening at p-a36-72:8081
>>>> 2018-09-11 18:22:49,997 INFO  
>>>> org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint    - 
>>>> http://p-a36-72:8081 was granted leadership with 
>>>> leaderSessionID=00000000-0000-0000-0000-000000000000
>>>> 2018-09-11 18:22:49,997 INFO  
>>>> org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint    - Web 
>>>> frontend listening at http://p-a36-72:8081.
>>>> 2018-09-11 18:22:50,004 INFO  
>>>> org.apache.flink.runtime.rpc.akka.AkkaRpcService              - Starting 
>>>> RPC endpoint for 
>>>> org.apache.flink.runtime.resourcemanager.StandaloneResourceManager at 
>>>> akka://flink/user/resourcemanager .
>>>> 2018-09-11 18:22:50,045 INFO  
>>>> org.apache.flink.runtime.rpc.akka.AkkaRpcService              - Starting 
>>>> RPC endpoint for org.apache.flink.runtime.dispatcher.StandaloneDispatcher 
>>>> at akka://flink/user/dispatcher .
>>>> 2018-09-11 18:22:50,055 INFO  
>>>> org.apache.flink.runtime.resourcemanager.StandaloneResourceManager  - 
>>>> ResourceManager akka.tcp://flink@p-a36-72:6123/user/resourcemanager was 
>>>> granted leadership with fencing token 00000000000000000000000000000000
>>>> 2018-09-11 18:22:50,055 INFO  
>>>> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager  - 
>>>> Starting the SlotManager.
>>>> 2018-09-11 18:22:50,064 INFO  
>>>> org.apache.flink.runtime.dispatcher.StandaloneDispatcher      - Dispatcher 
>>>> akka.tcp://flink@p-a36-72:6123/user/dispatcher was granted leadership with 
>>>> fencing token 00000000-0000-0000-0000-000000000000
>>>> 2018-09-11 18:22:50,064 INFO  
>>>> org.apache.flink.runtime.dispatcher.StandaloneDispatcher      - Recovering 
>>>> all persisted jobs.
>>>> 2018-09-11 18:22:55,316 INFO  
>>>> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager  - 
>>>> Registering TaskManager c82e0b779fe62e3b3e6efda75c97cd18 under 
>>>> c19ad7a1b58afdfc79b0fbbf08d43653 at the SlotManager.
>>>>
>>>>
>>>> Hanjing
>>>>
>>>> On 9/11/2018 10:14, vino yang <yanghua1...@gmail.com> wrote:
>>>>
>>>> Hi Hanjing,
>>>>
>>>> Is your JobManager working properly? Can you share your JobManager log?
>>>>
>>>> Thanks, vino.
>>>>
>>>> jing <hanjingz...@163.com> wrote on Tue, Sep 11, 2018 at 10:06 AM:
>>>>
>>>>> Hi vino,
>>>>>
>>>>>        I tried changing "localhost" to the real IP, but it still throws
>>>>> the exception below. The JobManager configuration is also below.
>>>>>
>>>>>
>>>>>
>>>>> Thanks.
>>>>>
>>>>> Hanjing
>>>>>
>>>>> ------------------------------------
>>>>>
>>>>> flink-conf.yaml:
>>>>>
>>>>> jobmanager.rpc.address: 170.0.0.46
>>>>>
>>>>>
>>>>>
>>>>> # The RPC port where the JobManager is reachable.
>>>>>
>>>>>
>>>>>
>>>>> jobmanager.rpc.port: 6123
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> # The heap size for the JobManager JVM
>>>>>
>>>>>
>>>>>
>>>>> jobmanager.heap.size: 1024m
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> # The heap size for the TaskManager JVM
>>>>>
>>>>>
>>>>>
>>>>> taskmanager.heap.size: 10240m
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> # The number of task slots that each TaskManager offers. Each slot
>>>>> runs one parallel pipeline.
>>>>>
>>>>>
>>>>>
>>>>> taskmanager.numberOfTaskSlots: 16
>>>>>
>>>>>
>>>>>
>>>>> # The parallelism used for programs that did not specify and other
>>>>> parallelism.
>>>>>
>>>>>
>>>>>
>>>>> parallelism.default: 2
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Exception log:
>>>>>
>>>>>
>>>>>
>>>>> Starting execution of program
>>>>>
>>>>>
>>>>>
>>>>> ------------------------------------------------------------
>>>>>
>>>>> The program finished with the following exception:
>>>>>
>>>>>
>>>>>
>>>>> org.apache.flink.client.program.ProgramInvocationException: The main
>>>>> method caused an error.
>>>>>
>>>>>         at
>>>>> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:546)
>>>>>
>>>>>         at
>>>>> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
>>>>>
>>>>>         at
>>>>> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:426)
>>>>>
>>>>>         at
>>>>> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:804)
>>>>>
>>>>>         at
>>>>> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:280)
>>>>>
>>>>>         at
>>>>> org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215)
>>>>>
>>>>>         at
>>>>> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1044)
>>>>>
>>>>>         at
>>>>> org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1120)
>>>>>
>>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>>
>>>>>         at javax.security.auth.Subject.doAs(Subject.java:422)
>>>>>
>>>>>         at
>>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>>>>>
>>>>>         at
>>>>> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>>>>>
>>>>>         at
>>>>> org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1120)
>>>>>
>>>>> Caused by: java.lang.RuntimeException: Could not connect to Flink
>>>>> JobManager with address 170.0.0.46:6123
>>>>>
>>>>>         at
>>>>> org.apache.flink.storm.api.FlinkClient.getTopologyJobId(FlinkClient.java:304)
>>>>>
>>>>>         at
>>>>> org.apache.flink.storm.api.FlinkSubmitter.submitTopology(FlinkSubmitter.java:107)
>>>>>
>>>>>         at
>>>>> org.apache.flink.storm.wordcount.WordCountRemoteBySubmitter.main(WordCountRemoteBySubmitter.java:75)
>>>>>
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>
>>>>>         at
>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>>>>
>>>>>         at
>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>>
>>>>>         at java.lang.reflect.Method.invoke(Method.java:498)
>>>>>
>>>>>         at
>>>>> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
>>>>>
>>>>>         ... 12 more
>>>>>
>>>>> Caused by: java.io.IOException: Actor at akka.tcp://
>>>>> flink@170.0.0.46:6123/user/jobmanager not reachable. Please make sure
>>>>> that the actor is running and its port is reachable.
>>>>>
>>>>>         at
>>>>> org.apache.flink.runtime.akka.AkkaUtils$.getActorRef(AkkaUtils.scala:547)
>>>>>
>>>>>         at
>>>>> org.apache.flink.runtime.akka.AkkaUtils.getActorRef(AkkaUtils.scala)
>>>>>
>>>>>         at
>>>>> org.apache.flink.storm.api.FlinkClient.getJobManager(FlinkClient.java:339)
>>>>>
>>>>>         at
>>>>> org.apache.flink.storm.api.FlinkClient.getTopologyJobId(FlinkClient.java:278)
>>>>>
>>>>>         ... 19 more
>>>>>
>>>>> Caused by: akka.actor.ActorNotFound: Actor not found for:
>>>>> ActorSelection[Anchor(akka.tcp://flink@170.0.0.46:6123/),
>>>>> Path(/user/jobmanager)]
>>>>>
>>>>>         at
>>>>> akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:68)
>>>>>
>>>>>         at
>>>>> akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:66)
>>>>>
>>>>>         at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>>>>>
>>>>>         at
>>>>> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>>>>>
>>>>>         at
>>>>> akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:73)
>>>>>
>>>>>         at
>>>>> akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:76)
>>>>>
>>>>>         at
>>>>> akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:120)
>>>>>
>>>>>         at
>>>>> akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:75)
>>>>>
>>>>>         at
>>>>> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>>>>>
>>>>>         at
>>>>> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>>>>>
>>>>>         at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:534)
>>>>>
>>>>>         at
>>>>> akka.remote.DefaultMessageDispatcher.dispatch(Endpoint.scala:97)
>>>>>
>>>>>         at
>>>>> akka.remote.EndpointReader$$anonfun$receive$2.applyOrElse(Endpoint.scala:982)
>>>>>
>>>>>         at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>>>>>
>>>>>         at akka.remote.EndpointActor.aroundReceive(Endpoint.scala:446)
>>>>>
>>>>>         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
>>>>>
>>>>>         at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>>>>>
>>>>>         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>>>>>
>>>>>         at akka.dispatch.Mailbox.run(Mailbox.scala:224)
>>>>>
>>>>>         at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>>>>>
>>>>>         at
>>>>> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>>>>>
>>>>>         at
>>>>> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>>>>>
>>>>>         at
>>>>> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>>>>>
>>>>>         at
>>>>> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>>>>>
>>>>>
>>>>> On 9/10/2018 20:17, vino yang <yanghua1...@gmail.com> wrote:
>>>>>
>>>>> Hi Hanjing,
>>>>>
>>>>> OK, I meant: change "localhost" to the real IP.
>>>>>
>>>>> Try it.
>>>>>
>>>>> Thanks, vino.
>>>>>
>>>>> jing <hanjingz...@163.com> wrote on Mon, Sep 10, 2018 at 8:07 PM:
>>>>>
>>>>>> Hi vino,
>>>>>> The jobmanager.rpc.address value is set to localhost.
>>>>>> hadoop3@p-a36-72 is the node hosting the JobManager JVM.
>>>>>>
>>>>>> Thanks.
>>>>>> Hanjing
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 09/10/2018 19:25, vino yang <yanghua1...@gmail.com> wrote:
>>>>>> Hi Hanjing,
>>>>>>
>>>>>> I mean this configuration key. [1]
>>>>>>
>>>>>> Also, is "hadoop3@p-a36-72" the node that hosts the
>>>>>> JobManager's JVM process?
>>>>>>
>>>>>> Thanks, vino.
>>>>>>
>>>>>> [1]:
>>>>>> https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/config.html#jobmanager-rpc-address
>>>>>>
>>>>>> jing <hanjingz...@163.com> wrote on Mon, Sep 10, 2018 at 6:57 PM:
>>>>>>
>>>>>>> Hi vino,
>>>>>>>   I submitted the job on the JobManager node with the command below.
>>>>>>> [hadoop3@p-a36-72 flink-1.6.0]$ ./bin/flink run
>>>>>>> WordCount-StormTopology.jar input output
>>>>>>>
>>>>>>> I'm a new user; which configuration key should be set? All
>>>>>>> configurations are at their defaults now.
>>>>>>>
>>>>>>> Thanks.
>>>>>>> Hanjing
>>>>>>>
>>>>>>>
>>>>>>> On 09/10/2018 15:49, vino yang <yanghua1...@gmail.com> wrote:
>>>>>>> Hi Hanjing,
>>>>>>>
>>>>>>> Did you submit the job via the CLI on the JM node? Is the address
>>>>>>> bound to "localhost" in the Flink JM configuration?
>>>>>>>
>>>>>>> Thanks, vino.
>>>>>>>
>>>>>>> jing <hanjingz...@163.com> wrote on Mon, Sep 10, 2018 at 11:00 AM:
>>>>>>>
>>>>>>>> Hello,
>>>>>>>>
>>>>>>>>        I'm trying to run the flink-storm example on a standalone
>>>>>>>> cluster, but there's an exception I can't solve. Could anyone
>>>>>>>> please help me with this?
>>>>>>>>
>>>>>>>>        flink-storm-example version: 1.6.0
>>>>>>>>
>>>>>>>>        flink version: 1.6.0
>>>>>>>>
>>>>>>>>        The log below shows the exception. My JobManager status is
>>>>>>>> shown in the picture.
>>>>>>>>
>>>>>>>>        I've tried changing the IP address and port, but it doesn't
>>>>>>>> work.
>>>>>>>>
>>>>>>>>        Thanks a lot.
>>>>>>>>
>>>>>>>> -------------------------------------------
>>>>>>>>
>>>>>>>> [hadoop3@p-a36-72 flink-1.6.0]$ ./bin/flink run
>>>>>>>> WordCount-StormTopology.jar input output
>>>>>>>>
>>>>>>>> Starting execution of program
>>>>>>>>
>>>>>>>>
>>>>>>>> ------------------------------------------------------------
>>>>>>>>
>>>>>>>>  The program finished with the following exception:
>>>>>>>>
>>>>>>>>
>>>>>>>> org.apache.flink.client.program.ProgramInvocationException: The
>>>>>>>> main method caused an error.
>>>>>>>>
>>>>>>>>         at
>>>>>>>> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:546)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:426)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:804)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:280)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1044)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1120)
>>>>>>>>
>>>>>>>>         at java.security.AccessController.doPrivileged(Native
>>>>>>>> Method)
>>>>>>>>
>>>>>>>>         at javax.security.auth.Subject.doAs(Subject.java:422)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1120)
>>>>>>>>
>>>>>>>> Caused by: java.lang.RuntimeException: Could not connect to Flink
>>>>>>>> JobManager with address localhost:6123
>>>>>>>>
>>>>>>>>         at
>>>>>>>> org.apache.flink.storm.api.FlinkClient.getTopologyJobId(FlinkClient.java:304)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> org.apache.flink.storm.api.FlinkSubmitter.submitTopology(FlinkSubmitter.java:107)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> org.apache.flink.storm.wordcount.WordCountRemoteBySubmitter.main(WordCountRemoteBySubmitter.java:75)
>>>>>>>>
>>>>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
>>>>>>>> Method)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>>>>>
>>>>>>>>         at java.lang.reflect.Method.invoke(Method.java:498)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
>>>>>>>>
>>>>>>>>         ... 12 more
>>>>>>>>
>>>>>>>> Caused by: java.io.IOException: Actor at 
>>>>>>>> akka.tcp://flink@localhost:6123/user/jobmanager
>>>>>>>> not reachable. Please make sure that the actor is running and its port 
>>>>>>>> is
>>>>>>>> reachable.
>>>>>>>>
>>>>>>>>         at
>>>>>>>> org.apache.flink.runtime.akka.AkkaUtils$.getActorRef(AkkaUtils.scala:547)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> org.apache.flink.runtime.akka.AkkaUtils.getActorRef(AkkaUtils.scala)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> org.apache.flink.storm.api.FlinkClient.getJobManager(FlinkClient.java:339)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> org.apache.flink.storm.api.FlinkClient.getTopologyJobId(FlinkClient.java:278)
>>>>>>>>
>>>>>>>>         ... 19 more
>>>>>>>>
>>>>>>>> Caused by: akka.actor.ActorNotFound: Actor not found for:
>>>>>>>> ActorSelection[Anchor(akka.tcp://flink@localhost:6123/),
>>>>>>>> Path(/user/jobmanager)]
>>>>>>>>
>>>>>>>>         at
>>>>>>>> akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:68)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:66)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:73)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:76)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:120)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:75)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>>>>>>>>
>>>>>>>>         at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:534)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> akka.remote.DefaultMessageDispatcher.dispatch(Endpoint.scala:97)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> akka.remote.EndpointReader$$anonfun$receive$2.applyOrElse(Endpoint.scala:982)
>>>>>>>>
>>>>>>>>         at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> akka.remote.EndpointActor.aroundReceive(Endpoint.scala:446)
>>>>>>>>
>>>>>>>>         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
>>>>>>>>
>>>>>>>>         at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>>>>>>>>
>>>>>>>>         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>>>>>>>>
>>>>>>>>         at akka.dispatch.Mailbox.run(Mailbox.scala:224)
>>>>>>>>
>>>>>>>>         at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>>>>>>>>
>>>>>>>>         at
>>>>>>>> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Hanjing
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
