Hi everyone,

We have one streaming job with a huge execution graph. To update this execution
graph, we take a snapshot of the job and restart the job from that snapshot.
However, this can take too much time.
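For reference, the snapshot-and-restart cycle above looks roughly like the
following, assuming this is Apache Flink's savepoint mechanism (the job ID,
savepoint directory, and jar name below are hypothetical placeholders):

```shell
# Trigger a savepoint for the running job (job ID is a placeholder):
flink savepoint a1b2c3d4e5f6 hdfs:///flink/savepoints

# Cancel the job once the savepoint has completed:
flink cancel a1b2c3d4e5f6

# Resubmit the updated job, restoring state from the savepoint path
# that the first command printed:
flink run -s hdfs:///flink/savepoints/savepoint-a1b2c3-0123456789ab updated-job.jar
```

Both the savepoint write and the restore scale with the size of the job's
state, which is presumably where the time goes.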


One option is splitting this huge streaming job into smaller ones. Instead of
updating the huge job described above, we could cancel jobs or start new ones
without taking a snapshot. However, we would end up with 100-150 small
streaming jobs in one cluster.


My question is:

Is it good practice to run many streaming jobs (more than 100) in one cluster?


Best,


Ozan.
