The PrometheusReporter acts as a scraping target for a single process.
If you have already set up something in the Flink cluster that allows
Prometheus/ServiceMonitor to scrape (Flink) metrics, then it shouldn't
be necessary.
It doesn't coordinate with other services in any way; it just has access
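For reference, enabling the reporter is usually just a couple of entries in
flink-conf.yaml (a sketch; the reporter name "prom" and the port range are
only examples, and the flink-metrics-prometheus jar must be in lib/):

    metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
    metrics.reporter.prom.port: 9250-9260

Each JobManager and TaskManager process then serves its own metrics on one
port from that range.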
Hi All,
From version 1.9 onwards there is no flink-shaded-hadoop2 dependency. What
dependencies do we need to add in order to use Hadoop APIs like IntWritable
and LongWritable?
I tried searching on Google but could not find a working solution. Please
guide me.
Thanks in Advance.
Dinesh.
Hi Eleanore,
From my experience, collecting the Flink metrics into Prometheus via a
metrics collector is the more practical way. It also makes it easier to
configure alerts.
Maybe you could use "fullRestarts" or "numRestarts" to monitor job
restarts. More metrics can be found here [2].
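For example, an alerting rule on the restart metric could look roughly like
this (a sketch; the exported name, e.g. flink_jobmanager_job_numRestarts,
depends on your reporter configuration):

    - alert: FlinkJobRestarting
      expr: increase(flink_jobmanager_job_numRestarts[5m]) > 0
      for: 1m
      labels:
        severity: warning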
[1].
https://ci.
Adding metrics to individual RichMapFunction implementation classes would
give metrics information about that particular class.
As a pipeline consists of multiple such classes, how can we have metrics
for the overall data pipeline? Are there any best practices for it?
With regards
You could create an abstract class that extends AbstractRichFunction,
and have all your remaining functions extend that class and implement the
respective (Map/etc.)Function interface.
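A rough sketch of that pattern (the class, metric, and type names here are
just illustrative):

    import org.apache.flink.api.common.functions.AbstractRichFunction;
    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.metrics.Counter;

    // Shared base class: every function in the pipeline registers the
    // same set of metrics in open()
    public abstract class InstrumentedFunction extends AbstractRichFunction {
        protected transient Counter recordsProcessed;

        @Override
        public void open(Configuration parameters) throws Exception {
            recordsProcessed = getRuntimeContext()
                    .getMetricGroup()
                    .counter("recordsProcessed");
        }
    }

    // Concrete functions extend the base class and add the specific
    // (Map/FlatMap/...)Function interface they need
    public class MyMapper extends InstrumentedFunction
            implements MapFunction<String, String> {
        @Override
        public String map(String value) {
            recordsProcessed.inc();
            return value;
        }
    }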
On 06/08/2020 13:20, Manish G wrote:
Adding metrics to individual RichMapFunction implementation classes
would giv
We still offer a flink-shaded-hadoop-2 artifact that you can find on the
download page: https://flink.apache.org/downloads.html#additional-components
In 1.9 we changed the artifact name.
Note that we will not release newer versions of this dependency.
As for providing Hadoop classes, there is som
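If you only need the Writable types, adding hadoop-common to your pom is
usually enough (a sketch; pick the version matching your cluster, and use
provided scope if Hadoop is already on the classpath):

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.8.5</version>
        <scope>provided</scope>
    </dependency>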
Hello,
We are using Apache Flink 1.10.1 version. During our security scans following
issues are reported by our scan tool.
Please let us know your comments on these dependency vulnerabilities.
Thanks,
Suchithra
log4j - If you don't use a Socket appender, you're good. Otherwise, you
can replace the log4j jars in lib/ with a newer version. You could also
upgrade to 1.11.1, which uses log4j2.
guava - We do not use Guava for serialization AFAIK. We also do not use
Java serialization for records.
commons
Hi to all,
in my current job server I submit jobs to the cluster by setting up an SSH
session with the JobManager host and running the bin/flink run command
remotely (since the jar is put in the flink-web-upload directory).
Unfortunately, this approach makes it very difficult to capture all
exceptions th
Hi Yang,
Thanks a lot for the information!
Eleanore
On Thu, Aug 6, 2020 at 4:20 AM Yang Wang wrote:
> Hi Eleanore,
>
> From my experience, collecting the Flink metrics into Prometheus via a
> metrics collector is the more practical way. It also makes it easier to
> configure alerts.
> Maybe you could use "
If there is a "Sink: Unnamed" operator when using pure SQL, I think we
should improve this to give a meaningful operator name.
On Tue, 4 Aug 2020 at 21:39, godfrey he wrote:
> I think we assign a meaningful name to sink Transformation
> like other Transformations in StreamExecLegacySink/BatchExecLe
Hi Team,
How can I override Flink's default conf/flink-conf.yaml from the *flink run*
command with a custom alternative path?
Also, when we override flink-conf.yaml, should it contain all the variables
which are present in Flink's default conf/flink-conf.yaml, or can I just
override selected variables from a user-defined flink-conf.yaml file?
Hi all,
Was there any change in how sub-tasks get allocated to TMs, from Flink 1.9 to
1.10?
Specifically, for consecutively numbered sub-tasks (e.g. 0, 1, 2), did it
become more or less likely that they'd be allocated to the same Task Manager?
Asking because a workflow that ran fine in 1.9 now h
I am trying to use the State Processor API. Does that require HDFS or a
filesystem?
I wish there was a complete example that ties in both DataSet and
DataStream API, and the State Processor API.
So far I have not been able to get it to work.
Does anybody know where I can find examples of these
Thanks for pointing this out. We had a look - the nodes in our cluster have a
cap of 65K open files and we aren't breaching 50% of that according to our
metrics, so I don't believe this is the problem.
The "connection refused" error makes us think it's some process using a thread
pool for the JobManager hitting capac
Hi Flavio,
Maybe you can try the env.executeAsync method,
which just submits the job and returns a JobClient.
Best,
Godfrey
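A rough sketch of what that looks like (Flink 1.10 signatures; the
JobClient API is still evolving, so details may differ in other versions):

    import org.apache.flink.core.execution.JobClient;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class AsyncSubmitExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();
            env.fromElements(1, 2, 3).print(); // placeholder pipeline

            // Submits the job without blocking and returns a handle to it
            JobClient client = env.executeAsync("my-job");

            // Job failures surface through the returned future, so they can
            // be captured here instead of on the remote SSH session
            client.getJobExecutionResult(AsyncSubmitExample.class.getClassLoader())
                  .whenComplete((result, error) -> {
                      if (error != null) {
                          error.printStackTrace();
                      }
                  });
        }
    }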
Flavio Pompermaier wrote on Thu, Aug 6, 2020 at 9:45 PM:
> Hi to all,
> in my current job server I submit jobs to the cluster by setting up an SSH
> session with the JobManager host and running
Hi,
> can I override Flink's default conf/flink-conf.yaml from the flink run command
Yes, you could override it by manually exporting the env variable
FLINK_CONF_DIR.
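For example (paths are illustrative; the directory must contain your
flink-conf.yaml):

    export FLINK_CONF_DIR=/path/to/custom/conf
    bin/flink run ./examples/streaming/WordCount.jar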
> can I just override selected variables from a user-defined flink-conf.yaml file
No. You could use the default conf/flink-conf.yaml and ove
Thank you, Yangze.
Sent from my iPhone
> On Aug 6, 2020, at 7:27 PM, Yangze Guo wrote:
>
> Hi,
>
>> can I override Flink's default conf/flink-conf.yaml from the flink run command
> Yes, you could override it by manually exporting the env variable
> FLINK_CONF_DIR.
>
>> can just override selective v
Hi Kostas,
I am trying to migrate our code base to use the new ClusterClient methods for
job submission.
As you are recommending the new PublicEvolving APIs, any doc or link for
reference would be helpful.
Thanks,
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/