Hi Puneet,

Are you submitting the Flink jobs using the "/bin/flink" command line tool
to a cluster in session mode?
Maybe the command line tool is submitting the job to the cluster in a "fire
and forget" fashion; that would explain why both listener callbacks fire
immediately.
Can you try "env.executeAsync()" instead of "execute()"? (Sorry, I don't
have time right now to experiment with this myself.) In either case, the
command line tool needs to stay connected to the cluster to listen for the
job status.
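
Here is a rough, untested sketch of what I mean (assuming a recent Flink
version, 1.12+, where JobClient#getJobExecutionResult() takes no arguments;
the job name, the tiny example pipeline, and the println calls are just
placeholders for your real job and Slack notifications):

    import org.apache.flink.api.common.JobExecutionResult;
    import org.apache.flink.core.execution.JobClient;
    import org.apache.flink.core.execution.JobListener;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class ListenerJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            env.registerJobListener(new JobListener() {
                @Override
                public void onJobSubmitted(JobClient jobClient, Throwable t) {
                    // Fires once the job has been handed to the cluster.
                    if (jobClient != null) {
                        System.out.println("Submitted " + jobClient.getJobID());
                    }
                }

                @Override
                public void onJobExecuted(JobExecutionResult result, Throwable t) {
                    // Fires only when this client sees the final job result.
                    System.out.println(
                            t == null ? "Job finished" : "Job failed: " + t);
                }
            });

            // Placeholder pipeline; replace with the real streaming job.
            env.fromElements(1, 2, 3).print();

            // executeAsync() returns a JobClient instead of blocking. Waiting
            // on the result keeps this process attached to the cluster, so
            // onJobExecuted gets the real outcome instead of firing right away.
            JobClient client = env.executeAsync("listener-job");
            client.getJobExecutionResult().get();
        }
    }
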
What will probably work is Application Mode instead of Session Mode: in
Application Mode, the main() method (and with it your JobListener) runs on
the cluster.
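
If I remember the CLI correctly, an Application Mode deployment on YARN
looks roughly like this (the exact target and flags are an assumption on my
side, so please double-check the deployment docs for your setup):

    ./bin/flink run-application -t yarn-application /path/to/your-job.jar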

Best,
Robert


On Wed, Dec 8, 2021 at 4:55 AM Puneet Duggal <puneetduggal1...@gmail.com>
wrote:

> Hi,
>
> I have registered a job listener which notifies slack with JobId on
> successful submission. Also it notifies slack on successful/failed
> Execution. Now this job listener is working as expected when running in
> the local IDE, but somehow behaving unexpectedly when running on a cluster,
> i.e. both *onJobSubmitted* and *onJobExecuted* are being called
> simultaneously on submitting a real-time data streaming job. Currently,
> jobs are being deployed in session mode.
>
> Thanks,
> Puneet Duggal
>
