Sure, here is the cmdline output:

flink-1.3.2/bin/flink run -c com.corp.flink.KafkaJob quickstart.jar --topic 
InputQ --bootstrap.servers localhost:9092 --zookeeper.connect localhost:2181 
--group.id Consumers -p 5 --detached
Cluster configuration: Standalone cluster with JobManager at 
localhost/127.0.0.1:6123
Using address localhost:6123 to connect to JobManager.
JobManager web interface address http://localhost:8081
Starting execution of program
Submitting job with JobID: 5cc74547361cec2ff9874764dac9ee91. Waiting for job 
completion.
Connected to JobManager at 
Actor[akka.tcp://flink@localhost:6123/user/jobmanager#-100905913] with leader 
session id 00000000-0000-0000-0000-000000000000.
09/28/2017 16:30:40    Job execution switched to status RUNNING.
09/28/2017 16:30:40    Source: Custom Source -> Map(1/5) switched to SCHEDULED 
09/28/2017 16:30:40    Source: Custom Source -> Map(2/5) switched to SCHEDULED 
09/28/2017 16:30:40    Source: Custom Source -> Map(3/5) switched to SCHEDULED 
09/28/2017 16:30:40    Source: Custom Source -> Map(4/5) switched to SCHEDULED 
09/28/2017 16:30:40    Source: Custom Source -> Map(5/5) switched to SCHEDULED 
09/28/2017 16:30:40    Sink: Unnamed(1/5) switched to SCHEDULED 
09/28/2017 16:30:40    Sink: Unnamed(2/5) switched to SCHEDULED 
09/28/2017 16:30:40    Sink: Unnamed(3/5) switched to SCHEDULED 
09/28/2017 16:30:40    Sink: Unnamed(4/5) switched to SCHEDULED 
09/28/2017 16:30:40    Sink: Unnamed(5/5) switched to SCHEDULED 
...

I thought --detached would put the process in the background and give me back
the command line, but maybe I got the meaning of this option wrong?
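
For reference, my reading of the CLI usage (flink run [OPTIONS] <jar-file>
<arguments>) is that anything placed after the jar is passed to the program's
main method, so the trailing --detached above may have been consumed as a
program argument rather than a client option. Here is a sketch of what I would
try next, with the flag (short form -d) moved before the jar - same job and
arguments, untested:

    flink-1.3.2/bin/flink run -d -c com.corp.flink.KafkaJob -p 5 quickstart.jar \
        --topic InputQ --bootstrap.servers localhost:9092 \
        --zookeeper.connect localhost:2181 --group.id Consumers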

Thank you!

>-------- Original message --------
>From: Chesnay Schepler ches...@apache.org
>Subject: Re: how many 'run -c' commands to start?
>To: user@flink.apache.org
>Sent: 29.09.2017 18:01
> The only nodes that matter are those on which the Flink processes, i.e.
> Job- and TaskManagers, are being run.
>
> To prevent a JobManager node failure from causing the job to fail, you have
> to look into an HA setup. (The JobManager is responsible for
> distributing/coordinating work.)
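
If I'm reading the HA documentation right, that means a ZooKeeper-based setup
in conf/flink-conf.yaml, roughly along these lines - untested on my side, and
the quorum address and storage directory below are placeholders for my
environment:

    high-availability: zookeeper
    high-availability.zookeeper.quorum: localhost:2181
    high-availability.storageDir: hdfs:///flink/ha/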
 
> In case of a TaskManager node failure the job will fail and restart,
> provided that enough TaskManagers are still alive to satisfy the resource
> requirements of the job.
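
If I understand correctly, this restart behaviour is governed by the
configured restart strategy, e.g. a fixed-delay strategy in
conf/flink-conf.yaml - a minimal sketch, with placeholder values I would tune:

    restart-strategy: fixed-delay
    restart-strategy.fixed-delay.attempts: 3
    restart-strategy.fixed-delay.delay: 10 s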
 
> Can you elaborate a bit more on what happened when you used the --detached
> param?
>
> On 28.09.2017 16:33, r. r. wrote:
 
>> Thank you, Chesnay
>> to make sure - should the node where the job has been submitted go down,
>> the processing will continue, I hope? Do I need to ensure this by
>> configuration?
>>
>> btw I added the --detached param to the run cmd, but it didn't go into the
>> background as I would've expected. Am I guessing wrong?
>>
>> Thanks!
>> Rob
 
>>
>> >-------- Original message --------
>> >From: Chesnay Schepler ches...@apache.org
>> >Subject: Re: how many 'run -c' commands to start?
>> >To: user@flink.apache.org
>> >Sent: 28.09.2017 15:05
 
>>> Hi!
>>>
>>> Given a Flink cluster, you would only call `flink run ...` to submit a
>>> job once; for simplicity I would submit it on the node where you started
>>> the cluster. Flink will automatically distribute the job across the
>>> cluster, in smaller independent parts known as Tasks.
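
That matches the output above, by the way: with -p 5 the source and sink each
run as five parallel subtasks. To check where a submitted job ended up, I
suppose one can use the web UI on :8081 or the CLI, e.g.:

    flink-1.3.2/bin/flink list -r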
 
>>> Regards,
>>> Chesnay
>>>
>>> On 28.09.2017 08:31, r. r. wrote:
 
>>>> Hello
>>>>
>>>> I successfully ran a job with 'flink run -c', but this is for the local
>>>> setup.
>>>>
>>>> How should I proceed with a cluster? Will flink automagically instantiate
>>>> the job on all servers - I hope I don't have to start 'flink run -c' on
>>>> all machines.
>>>>
>>>> New to flink and bigdata, so sorry for the probably silly question
>>>>
>>>> Thanks!
>>>> Rob
 