> *From: *Ted Yu
> *Date: *Friday, August 25, 2017 at 4:56 PM
> *To: *Robert Metzger
> *Cc: *Raja Aravapalli , "user@flink.apache.org"
>
> *Subject: *[EXTERNAL] Re: Security Control of running Flink Jobs on Flink
> UI
>
>
>
> bq. introduce a special config flag to disable the Cancel functionality
The ability to disable it would be super helpful.
+1 to the idea.
Regards,
Raja.
From: Ted Yu
Date: Friday, August 25, 2017 at 4:56 PM
To: Robert Metzger
Cc: Raja Aravapalli , "user@flink.apache.org"
Subject: [EXTERNAL] Re: Security Control of running Flink Jobs on Flink UI
bq. introduce a special config flag to disable the Cancel functionality
+1
A similar config is used in other projects such as HBase.
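For instance, such a flag could look like this in flink-conf.yaml (the key name
below is purely illustrative, not an existing option):

    # Hypothetical flag to disable job cancellation from the web UI
    web.cancel.enable: false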
On Fri, Aug 25, 2017 at 2:54 PM, Robert Metzger wrote:
> Hi Raja,
>
> you can actually disable the UI by setting the port to a negative number.
> The configuration property is "jobmanager.web.port".
Hi Raja,
you can actually disable the UI by setting the port to a negative number.
The configuration property is "jobmanager.web.port".
I'm not sure how well this is tested, but from the code it seems that this
is the behavior of Flink.
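For example, in flink-conf.yaml (how a negative port is handled may depend on
the Flink version, as noted above):

    # Disable the web frontend by giving it a negative port
    jobmanager.web.port: -1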
If that doesn't work, I would propose to add a change to Flink to introduce a
special config flag to disable the Cancel functionality.
Hi,
I have started a Flink session/cluster on an existing Hadoop YARN cluster using
Flink Yarn-Session, and I am submitting Flink streaming jobs to it… and
everything works fine.
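Roughly along these lines (the exact flags, memory sizes and jar path here are
only illustrative):

    ./bin/yarn-session.sh -n 4 -jm 1024 -tm 4096 -d
    ./bin/flink run -d path/to/streaming-job.jar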
But, one problem I see with this approach is:
The Flink Yarn-Session is running with a YARN application id. And this
application's Flink web UI is open to anyone who can reach it, which means any
user can cancel the running Flink jobs from the UI.