Hi Shammon,
The Flink job doesn't exist anymore after I close the execution
environment, right?
Could you please try the attached code and confirm that I am not sharing
the file with any other job? Until I close the running Java application,
the file still has an open reference in the code I mentioned.
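For what it's worth, the behavior described above matches how Java file handles work in general, independent of Flink: a stream that is never closed keeps the file referenced until close() is called or the JVM exits. A minimal sketch (class, method, and file names here are purely illustrative, not from the attached code):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class OpenHandleDemo {

    /**
     * Opens the file, reads from it, closes the stream, then deletes it.
     * Returns true if deletion succeeded. While the try block is active,
     * the JVM holds an open handle to the file; try-with-resources
     * releases it at the closing brace, which is what makes the
     * subsequent delete safe on every platform.
     */
    static boolean readThenDelete(Path file) throws IOException {
        try (InputStream in = Files.newInputStream(file)) {
            in.read(); // handle is open here
        } // handle released here
        return Files.deleteIfExists(file);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("flink-udf", ".jar");
        System.out.println("deleted after close: " + readThenDelete(tmp));
    }
}
```

If the stream were stored in a field and never closed, the reference would persist until the application shuts down, which is consistent with what you are seeing.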
Hi Viktor,
We cannot hide "$internal.application.program-args" currently. Only
configuration options whose keys contain one of the SENSITIVE_KEYS
keywords are hidden.
So, as an alternative, you might want to try passing the sensitive
information through an environment variable [1], and naming it
with "SENSITIVE_K
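For reference, reading such a value inside the job is then plain Java; a minimal sketch (the helper name and the fail-fast behavior are my own choices, not a Flink API):

```java
public class SensitiveEnvDemo {

    /**
     * Reads a secret from an environment variable instead of program
     * arguments, so it never appears in
     * "$internal.application.program-args".
     */
    static String readSecret(String name) {
        String value = System.getenv(name);
        if (value == null) {
            throw new IllegalStateException("missing environment variable: " + name);
        }
        return value;
    }

    public static void main(String[] args) {
        // PATH is set in virtually every environment, so it serves as a
        // stand-in for a real secret variable here.
        System.out.println("read " + readSecret("PATH").length() + " characters from PATH");
    }
}
```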
Hi Neha,
Flink deletes a job's runtime data when the job terminates. But external
files, such as the UDF jar files you mentioned, need to be managed by you.
Those files may be shared between jobs and cannot be deleted when a single
Flink job exits.
Best,
Shammon FY
Hi Viktor,
If you can rename your option so that its key contains any of the
following keywords, the Flink web UI will hide its value:

    // the keys whose values should be hidden
    private static final String[] SENSITIVE_KEYS =
            new String[] {
                "password",
                "se
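To illustrate the matching behavior, here is a small self-contained sketch of that hiding logic. The keyword list below is a partial stand-in for the full array in the Flink source, and "******" is assumed to mirror the placeholder the web UI shows, so treat both as assumptions rather than the exact Flink implementation:

```java
import java.util.Arrays;

public class ConfigMasker {

    // Partial keyword list; the real SENSITIVE_KEYS array in Flink is longer.
    private static final String[] SENSITIVE_KEYS =
            new String[] {"password", "secret", "auth-params", "apikey"};

    /** True if a configuration key should have its value hidden. */
    static boolean isSensitive(String key) {
        String lower = key.toLowerCase();
        return Arrays.stream(SENSITIVE_KEYS).anyMatch(lower::contains);
    }

    /** Returns the value as the UI would display it, masking sensitive keys. */
    static String displayValue(String key, String value) {
        return isSensitive(key) ? "******" : value;
    }

    public static void main(String[] args) {
        System.out.println(displayValue("my.db.password", "hunter2"));  // masked
        System.out.println(displayValue("parallelism.default", "4"));   // shown as-is
    }
}
```

The match is a plain substring check on the lowercased key, which is why renaming an option so its key contains one of the keywords is enough to get it hidden.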
>
> I use Apache Flink for stream processing, and StateFun as a hand-off point
> for the rest of the application.
> It serves well as a bridge between a Flink Streaming job and
> micro-services.
This is essentially how I use it as well, and I would also be sad to see it
sunsetted. It works well;
Hi, community!
Is there a way to filter which options are displayed in the Web UI under
the Job Manager -> Configuration tab? Specifically, I need to hide
"$internal.application.program-args". This option is displayed when running an
Application Cluster in Kubernetes.
I use Flink 1.15.
Best regards
Hi Michael,
You are using the right one; it just lacks support for Opensearch REST
client customization at the moment. It would make sense to provide this
functionality.
Thank you.
Best Regards,
Andriy Redko
MHJ> Hi Andriy,
MHJ> we currently use the OpensearchSink [1] connector, a
Hi community,
*General setup*
We are running a Flink standalone job on Kubernetes.
We start our job manager and task manager with the jar immediately, using
the following command:
> /docker-entrypoint.sh standalone-job --host $1 --fromSavepoint
> /opt/flink/shared/savepoints/${SAVEPOINT}/ --allowNonRestoredStat