Hi,
I have also observed the same when upgrading to Flink 1.11 running in
Docker and sending to Graphite.
Prior to upgrading, the taskmanagers would use the hostname. Since 1.11 they
report their IPs.
Sadly, I did not find any resolution to my issue:
https://lists.apache.org/thread.html/r620b18d12c08
Have you followed the documentation, specifically this bit?
> In order to use this reporter you must copy
> `/opt/flink-metrics-influxdb-1.11.2.jar` into the `plugins/influxdb`
> folder of your Flink distribution.
On 10/24/2020 12:17 AM, Vijayendra Yadav wrote:
Hi Team,
for Flink 1.11 Graphite
Ah wait, in 1.11 it should no longer be necessary to explicitly copy
the reporter jar.
Please update your reporter configuration to this:
metrics.reporter.grph.factory.class: org.apache.flink.metrics.graphite.GraphiteReporterFactory
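For reference, a fuller flink-conf.yaml sketch for the Graphite reporter might look like this (the host, port, and interval values below are placeholders, not from this thread):

```yaml
metrics.reporter.grph.factory.class: org.apache.flink.metrics.graphite.GraphiteReporterFactory
metrics.reporter.grph.host: localhost        # placeholder Graphite/Carbon host
metrics.reporter.grph.port: 2003             # default Carbon plaintext port
metrics.reporter.grph.protocol: TCP
metrics.reporter.grph.interval: 60 SECONDS   # how often metrics are reported
```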
On 10/25/2020 4:00 PM, Chesnay Schepler wrote:
Have you
Hi,
I am learning
https://ci.apache.org/projects/flink/flink-statefun-docs-release-2.2/getting-started/java_walkthrough.html
and wondering if the invoke function is thread-safe for:
final int seen = count.getOrDefault(0); count.set(seen + 1);
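For illustration, here is a plain-Java stand-in (no StateFun dependency; the class and method names are hypothetical) for the read-modify-write in question. As the StateFun model describes, the runtime delivers messages to a given function address one at a time, so this body is not entered concurrently for the same address:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the walkthrough's counter state. The real code
// uses a PersistedValue<Integer>; a HashMap keyed by "address" plays that
// role here so the snippet runs without the StateFun SDK.
class GreeterSketch {
    private final Map<String, Integer> count = new HashMap<>();

    // Mirrors: final int seen = count.getOrDefault(0); count.set(seen + 1);
    int invoke(String address) {
        final int seen = count.getOrDefault(address, 0);
        count.put(address, seen + 1);
        return seen + 1;
    }
}
```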
From
https://ci.apache.org/projects/flink/flink-sta
Hello Everyone,
I'm new to Flink and I'm trying to upgrade from Flink 1.8 to Flink 1.11 on
an EMR cluster. After upgrading to Flink 1.11, one of the differences I
see is that I don't get any metrics. I found out that Flink 1.11 does not have
the *org.apache.flink.metrics.statsd.StatsDReporterFactory* ja
With Flink 1.11, reporters were refactored into plugins and are now
available by default (so you no longer have to bother with copying jars
around).
Your configuration appears to be correct, so I suggest taking a look at
the log files.
On 10/25/2020 9:52 PM, Diwakar Jha wrote:
Hello Everyon
Hello,
I'm trying to create an Accumulator that takes in 2 columns, where each
"value" is therefore a tuple, and results in a tuple of 2 arrays, yet no
matter what I try I receive an error trace like the one below.
(Oddly, using almost the same pattern with 1 input column (a Long for
"value") and
Imports:
import java.util.Date
import org.apache.flink.api.scala._
import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.api.java.typeutils.RowTypeInfo
import org.apache.flink.table.api.{
DataTypes,
EnvironmentSettings,
TableEnvironment,
TableSchema
}
import org.apach
If I switch accumulate to the following:
def accumulate(acc: MembershipsIDsAcc, value:
org.apache.flink.api.java.tuple.Tuple2[java.lang.Long, java.lang.Boolean]):
Unit = {...}
I instead receive:
Tuple needs to be parameterized by using generics.
org.apache.flink.api.java.typeutils.TypeExtractor.
Hi,
I think you can directly declare `def accumulate(acc: MembershipsIDsAcc,
value1: Long, value2: Boolean)`
Best,
Xingbo
Rex Fenley wrote on Mon, Oct 26, 2020 at 9:28 AM:
> If I switch accumulate to the following:
> def accumulate(acc: MembershipsIDsAcc, value:
> org.apache.flink.api.java.tuple.Tuple2[java.
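As a sketch of Xingbo's suggestion, rendered in plain Java with no Flink dependency (the accumulator fields are assumptions, since the original class isn't shown in the thread): the two columns arrive as separate scalar parameters, so no Tuple2 type extraction is needed.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical accumulator; the real MembershipsIDsAcc fields are not shown
// in the thread, so two lists stand in for the "tuple of 2 arrays".
class MembershipsIDsAcc {
    final List<Long> ids = new ArrayList<>();
    final List<Boolean> flags = new ArrayList<>();
}

class MembershipAgg {
    // Two scalar parameters instead of a Tuple2: the type extractor can
    // resolve Long and Boolean directly, avoiding the
    // "Tuple needs to be parameterized" error.
    static void accumulate(MembershipsIDsAcc acc, Long value1, Boolean value2) {
        acc.ids.add(value1);
        acc.flags.add(value2);
    }
}
```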
Thanks! Looks like that worked!
Fwiw, the error message is very confusing. Is there any way this can be
improved?
Thanks :)
On Sun, Oct 25, 2020 at 6:42 PM Xingbo Huang wrote:
> Hi,
> I think you can directly declare `def accumulate(acc: MembershipsIDsAcc,
> value1: Long, value2: Boolean)`
>
>
Hi Yun,
Sorry for the late reply - I was doing some reading. As far as I
understand, when incremental checkpointing is enabled, the reported
checkpoint size (metrics/UI) is only the size of the deltas and not the full
state size. I understand that compaction may not get triggered. But, if we
are cre
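For context, the setup under discussion would look roughly like this in flink-conf.yaml (key names from the Flink 1.x docs; the checkpoint directory is a placeholder):

```yaml
state.backend: rocksdb              # incremental checkpoints require the RocksDB backend
state.backend.incremental: true     # checkpoint only the SST-file deltas
state.checkpoints.dir: s3://my-bucket/checkpoints   # placeholder location
```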
Thanks for sharing the information with us, Piyush and Lasse.
@Piyush
Thanks for offering the help. IMO, there are currently several problems
that make supporting Flink on Mesos challenging for us.
1. *Lack of Mesos experts.* AFAIK, there are very few people (if not
none) among the activ