Hey all,
I am running Flink in batch mode on YARN, with independent jobs each creating
their own cluster.

I have a flow defined that scales its parallelism based on input size (to keep
overall processing time roughly constant). Right now the flow initializes with
roughly 22k tasks, with parallelism set to 1600 for a few portions. At the CLI
I launch Flink with:

flink run -m yarn-cluster -yn 200 -ys 4 -yqu test -c bla.. jar
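
For context, the parallelism scaling in the job looks roughly like the sketch
below. This is a minimal sketch using the DataSet API, not the actual job code:
the sizing constant, argument layout, and the map operator are placeholders.

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class ScaledBatchJob {

    // Hypothetical sizing target: keep the bytes each subtask handles roughly constant.
    private static final long TARGET_BYTES_PER_SUBTASK = 512L * 1024 * 1024;

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Assumed: input path and its size in bytes are passed on the command line.
        String inputPath = args[0];
        long inputSizeBytes = Long.parseLong(args[1]);

        // Scale parallelism with input size, capped at 1600 as in the real flow.
        int scaledParallelism = (int) Math.min(1600L,
                Math.max(1L, inputSizeBytes / TARGET_BYTES_PER_SUBTASK));

        DataSet<String> input = env.readTextFile(inputPath);

        input.map(new MapFunction<String, String>() {
                 @Override
                 public String map(String value) {
                     return value.toLowerCase();  // stand-in for the heavy processing
                 }
             })
             .setParallelism(scaledParallelism)   // only the heavy portions get the scaled value
             .writeAsText(args[2])
             .setParallelism(scaledParallelism);

        env.execute("scaled batch job");
    }
}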

My expectation is that this would cap the cluster at 800 slots across 200 task
managers. When I run with input that scales to parallelism 1600, I instead end
up with 2832 slots across 708 task managers (708 x 4). It looks like Flink is
consuming as much of the assigned YARN queue as it can (it happened to grab
~75% of the queue on that last run).

Does -yn act as a suggestion rather than a limit?
