Perfect.
No problem, my bad; I was not really clear.
Thanks !
On Tue, Feb 25, 2020 at 13:45, Xintong Song wrote:
Ah, I misunderstood and thought that you wanted to keep all your Sink
instances on the same TM.
If what you want is to have one instance per TM, then, as Gary mentioned,
specifying "-s 1" when starting the session would be enough, and it should
work with all existing versions from 1.8 onward (inclusive).
Thanks.
Hi Gary,
Sorry I was probably not very clear.
Yes that's exactly what I want to hear :)
I use the -s 1 parameter, and what I expect is one task of my Sink
(one instance, in fact) per TM (i.e. per JVM).
That's the behaviour I see in my tests, but I want to be sure.
Thanks a lot
David
Hi David,
> Before, with both -n and -s, it was not the case.
What do you mean by "before"? At least in 1.8, "-s" could be used to specify
the number of slots per TM.
> how can I be sure that my Sink that uses this lib is in one JVM?
Is it enough that no other parallel instance of your sink runs in the same
JVM?
Hi Xintong,
At the moment I'm using the 1.9.2 with this command:
yarn-session.sh -d -s 1 -jm 4096 -tm 4096 -qu "XXX" -nm "MyPipeline"
So, after a lot of tests, I've noticed that if I increase the parallelism
of my Custom Sink, each task is embedded into one TM and, most
importantly, each one runs in its own JVM.
Depending on your Flink version, the '-n' option might not take effect. It
was removed in the latest release, but before that there were a few versions
where this option was neither removed nor taking effect.
Anyway, as long as you have multiple containers, I don't think there's a
way to make some of your tasks run in the same JVM.
Hi,
Thanks Xintong.
I've noticed that when I use yarn-session.sh with the --slots (-s) parameter
but without --container (-n), it creates one task/slot per TaskManager.
Before, with both -n and -s, it was not the case.
I prefer to use only small containers with a single task each to scale my
pipeline.
Hi David,
In general, I don't think you can force all parallel subtasks of a given
task to run in the same JVM process with the current Flink.
If your job's scale is very small, one thing you might try is to have only
one task manager in the Flink session cluster. You need to make sure the
task manager has enough slots for all your tasks.
Hi,
My app is based on a lib that is not thread-safe (yet...).
Until the patch is merged, how can I be sure that my Sink, which uses this
lib, runs in a single JVM?
Context: I use one Yarn session and send my Flink jobs to this session
Regards,
David
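Until the library is patched, one defensive option (independent of how Flink schedules the subtasks) is to serialize all access to it behind a JVM-wide lock, so the sink stays correct even if several parallel subtasks do land in the same JVM. A minimal plain-Java sketch, where `NonThreadSafeClient` is a hypothetical stand-in for the non-thread-safe lib (not any real API):

```java
import java.util.concurrent.CountDownLatch;

public class GuardedClient {
    // Stand-in for the non-thread-safe lib: a plain, unsynchronized counter
    // that would lose increments under concurrent access.
    static final class NonThreadSafeClient {
        private long written = 0;
        void write(String record) { written++; } // not safe on its own
        long count() { return written; }
    }

    // One shared instance and one lock; static fields are per classloader,
    // which in practice means one pair per job's user code in a session.
    static final NonThreadSafeClient CLIENT = new NonThreadSafeClient();
    private static final Object LOCK = new Object();

    // What a sink's invoke() could delegate to: all subtasks in this JVM
    // (and classloader) take the same lock before touching the client.
    static void invoke(String record) {
        synchronized (LOCK) {
            CLIENT.write(record);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        int threads = 4, perThread = 10_000;
        CountDownLatch done = new CountDownLatch(threads);
        for (int t = 0; t < threads; t++) {
            new Thread(() -> {
                for (int i = 0; i < perThread; i++) invoke("record");
                done.countDown();
            }).start();
        }
        done.await();
        // With the lock, no increments are lost.
        System.out.println(CLIENT.count()); // prints 40000
    }
}
```

One caveat: a static field is scoped to a classloader, not strictly to the JVM. With Flink's per-job user-code classloading, separate jobs submitted to the same session may each get their own CLIENT/LOCK pair, so this only guards subtasks within one job.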