Hello,

I’m trying to wrap my head around task parallelism in a Flink cluster. Let’s 
say I have a cluster of 3 nodes, each node offering 16 task slots, so in total 
I’d have 48 slots for processing. Do the parallel instances of each task get 
distributed across the cluster, or is it possible that they all run on the same 
node? If they can all run on the same node, what happens when that node 
crashes? Does the JobManager recreate them in the remaining open slots?
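For context, the setup I have in mind corresponds to a flink-conf.yaml roughly like this (the parallelism.default value is just illustrative):

```yaml
# Each of the 3 TaskManagers advertises 16 slots (3 x 16 = 48 total).
taskmanager.numberOfTaskSlots: 16
# Default parallelism for jobs that don't set one explicitly (illustrative).
parallelism.default: 48
```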

Thanks,
Ali
