Hi Deep,

Flink has dropped support for specifying the number of TMs via -n since the
introduction of FLIP-6. Since then, Flink will automatically start TMs
depending on the required resources. Hence, there is no need to specify the
-n parameter anymore. Instead, you should specify the parallelism with
which you would like to run your job via the -p option.
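
For example, a minimal sketch based on the session-mode run command you
already use (the parallelism value of 12 and the placeholders are purely
illustrative):

flink run -m yarn-cluster -yid <flink sessionId> -p 12 -c <ClassName> <Jar Path>

Flink will then request as many TMs from YARN as it needs to provide the
required slots, roughly ceil(parallelism / slots per TM) for a simple job.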

Since Flink 1.11.0 there is the option slotmanager.number-of-slots.max to
set an upper limit on the number of slots a cluster is allowed to allocate [1].
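
For example (the value 24 is purely illustrative), you can either set it in
flink-conf.yaml:

slotmanager.number-of-slots.max: 24

or pass it as a dynamic property when starting the session, as you already
do for the memory options:

sudo flink-yarn-session -Dslotmanager.number-of-slots.max=24 -s <No of slot/core> -d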

[1] https://issues.apache.org/jira/browse/FLINK-16605

Cheers,
Till

On Mon, Jan 4, 2021 at 8:33 AM DEEP NARAYAN Singh <about.d...@gmail.com>
wrote:

> Hi Guys,
>
> I’m struggling to start the task managers with Flink 1.11.0 in AWS
> EMR, but with older versions it works fine. Let me put the full context here.
>
> *When using Flink 1.9.1 and EMR 5.29.0*
>
> To create a long running session, we used the below command.
>
> *sudo flink-yarn-session -n <Number of TM> -s <Number of slot> -jm <memory>
> -tm <memory> -d*
>
> and then the command below to run the final job.
>
> *flink run -m yarn-cluster -yid <flink sessionId> -yn <Number of TM> -ys
> <Number of slot> -yjm <memory> -ytm <memory> -c <ClassName> <Jar Path>*
>
> and if “n” is 6, then 6 task managers are created to start the job;
> whatever value “n” is set to, the job starts with that number of TMs.
>
> But now, after scaling up the configuration (*i.e. Flink 1.11.0 and
> EMR 6.1.0*), we are unable to get the desired number of TMs.
>
> Please find the session command for the new configuration below:
>
> *sudo flink-yarn-session -Djobmanager.memory.process.size=<Memory in GB>
> -Dtaskmanager.memory.process.size=<Memory in GB> -n <no of TM> -s <No of
> slot/core> -d*
>
> And the final job command:
>
> *flink run -m yarn-cluster -yid <Flink sessionId> -c <ClassName> <Jar
> Path>*
>
> I have tried a lot of combinations, but nothing has worked so far. I
> request your help in this regard, as we plan to have this configuration in
> *PRODUCTION* soon.
>
> Thanks in advance.
>
>
> Regards,
>
> -Deep
>
