Re: StatsD metric name prefix change for task manager after upgrading to Flink 1.11

2020-10-25 Thread Nikola Hrusov
Hi, I have also observed the same when upgrading to Flink 1.11 running in Docker and sending to Graphite. Prior to upgrading, the taskmanagers would use the hostname; since 1.11 they report their IPs. Sadly I did not find any resolution to my issue: https://lists.apache.org/thread.html/r620b18d12c08

Re: FLINK 1.11 Graphite Metrics

2020-10-25 Thread Chesnay Schepler
Have you followed the documentation, specifically this bit? > In order to use this reporter you must copy |/opt/flink-metrics-influxdb-1.11.2.jar| into the |plugins/influxdb| folder of your Flink distribution. On 10/24/2020 12:17 AM, Vijayendra Yadav wrote: Hi Team, for Flink 1.11 Graphite

Re: FLINK 1.11 Graphite Metrics

2020-10-25 Thread Chesnay Schepler
Ah wait, in 1.11 it should no longer be necessary to explicitly copy the reporter jar. Please update your reporter configuration to this: |metrics.reporter.grph.factory.class: org.apache.flink.metrics.graphite.GraphiteReporterFactory| On 10/25/2020 4:00 PM, Chesnay Schepler wrote: Have you
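For reference, a full reporter section in flink-conf.yaml might look like the sketch below. Only the factory.class line comes from the reply above; the reporter name "grph" and the host/port/protocol values are placeholders to be adapted to your Graphite endpoint.

    # Graphite reporter loaded via its plugin factory (no jar copying needed in 1.11)
    metrics.reporter.grph.factory.class: org.apache.flink.metrics.graphite.GraphiteReporterFactory
    # Placeholder endpoint values; point these at your carbon/Graphite host
    metrics.reporter.grph.host: localhost
    metrics.reporter.grph.port: 2003
    metrics.reporter.grph.protocol: TCP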

Re: Rich Function Thread Safety

2020-10-25 Thread Lian Jiang
Hi, I am learning https://ci.apache.org/projects/flink/flink-statefun-docs-release-2.2/getting-started/java_walkthrough.html and wondering if the invoke function is thread-safe for: final int seen = count.getOrDefault(0); count.set(seen + 1); From https://ci.apache.org/projects/flink/flink-sta

how to enable metrics in Flink 1.11

2020-10-25 Thread Diwakar Jha
Hello everyone, I'm new to Flink and I'm trying to upgrade from Flink 1.8 to Flink 1.11 on an EMR cluster. After upgrading to Flink 1.11, one of the differences that I see is that I don't get any metrics. I found out that Flink 1.11 does not have the *org.apache.flink.metrics.statsd.StatsDReporterFactory* ja

Re: how to enable metrics in Flink 1.11

2020-10-25 Thread Chesnay Schepler
With Flink 1.11, reporters were refactored to plugins and are now accessible by default (so you no longer have to bother with copying jars around). Your configuration appears to be correct, so I suggest taking a look at the log files. On 10/25/2020 9:52 PM, Diwakar Jha wrote: Hello Everyon
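For anyone following along with the StatsD reporter from the original question, a minimal 1.11-style configuration would be a sketch along these lines; the factory class is the one named in the question above, while the reporter name "stsd" and the endpoint values are placeholders.

    # StatsD reporter loaded via its plugin factory; the plugin ships with the 1.11 distribution
    metrics.reporter.stsd.factory.class: org.apache.flink.metrics.statsd.StatsDReporterFactory
    # Placeholder endpoint; point this at your StatsD daemon
    metrics.reporter.stsd.host: localhost
    metrics.reporter.stsd.port: 8125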

Failing to create Accumulator over multiple columns

2020-10-25 Thread Rex Fenley
Hello, I'm trying to create an Accumulator that takes in 2 columns, where each "value" is therefore a tuple, and results in a tuple of 2 arrays, yet no matter what I try I receive an error trace like the one below. (Oddly, using almost the same pattern with 1 input column (a Long for "value") and

Re: Failing to create Accumulator over multiple columns

2020-10-25 Thread Rex Fenley
Imports: import java.util.Date import org.apache.flink.api.scala._ import org.apache.flink.api.common.typeinfo.TypeInformation import org.apache.flink.api.java.typeutils.RowTypeInfo import org.apache.flink.table.api.{ DataTypes, EnvironmentSettings, TableEnvironment, TableSchema } import org.apach

Re: Failing to create Accumulator over multiple columns

2020-10-25 Thread Rex Fenley
If I switch accumulate to the following: def accumulate(acc: MembershipsIDsAcc, value: org.apache.flink.api.java.tuple.Tuple2[java.lang.Long, java.lang.Boolean]): Unit = {...} I instead receive: Tuple needs to be parameterized by using generics. org.apache.flink.api.java.typeutils.TypeExtractor.

Re: Failing to create Accumulator over multiple columns

2020-10-25 Thread Xingbo Huang
Hi, I think you can directly declare `def accumulate(acc: MembershipsIDsAcc, value1: Long, value2: Boolean)` Best, Xingbo Rex Fenley wrote on Mon, Oct 26, 2020 at 9:28 AM: > If I switch accumulate to the following: > def accumulate(acc: MembershipsIDsAcc, value: > org.apache.flink.api.java.tuple.Tuple2[java.

Re: Failing to create Accumulator over multiple columns

2020-10-25 Thread Rex Fenley
Thanks! Looks like that worked! Fwiw, the error message is very confusing. Is there any way this can be improved? Thanks :) On Sun, Oct 25, 2020 at 6:42 PM Xingbo Huang wrote: > Hi, > I think you can directly declare `def accumulate(acc: MembershipsIDsAcc, > value1: Long, value2: Boolean)` > >
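To make the resolution concrete, here is a minimal Scala sketch of an AggregateFunction using the accumulate signature Xingbo suggested. The accumulator layout and result type (two Long arrays split by the Boolean flag) are assumptions for illustration, since the original MembershipsIDsAcc definition is not shown in the thread; depending on the planner you may still need to supply explicit type information for the result.

    import org.apache.flink.table.functions.AggregateFunction
    import scala.collection.mutable.ArrayBuffer

    // Hypothetical accumulator: collects ids into two buckets keyed by the Boolean flag.
    class MembershipsIDsAcc extends Serializable {
      val trueIds: ArrayBuffer[Long] = ArrayBuffer[Long]()
      val falseIds: ArrayBuffer[Long] = ArrayBuffer[Long]()
    }

    class MembershipsIDsAgg
        extends AggregateFunction[(Array[Long], Array[Long]), MembershipsIDsAcc] {

      override def createAccumulator(): MembershipsIDsAcc = new MembershipsIDsAcc

      // Two scalar parameters instead of a Tuple2, as suggested in the thread.
      def accumulate(acc: MembershipsIDsAcc, value1: Long, value2: Boolean): Unit =
        if (value2) acc.trueIds += value1 else acc.falseIds += value1

      override def getValue(acc: MembershipsIDsAcc): (Array[Long], Array[Long]) =
        (acc.trueIds.toArray, acc.falseIds.toArray)
    }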

Re: Rocksdb - Incremental vs full checkpoints

2020-10-25 Thread Sudharsan R
Hi Yun, Sorry for the late reply - I was doing some reading. As far as I understand, when incremental checkpointing is enabled, the reported checkpoint size (metrics/UI) is only the size of the deltas and not the full state size. I understand that compaction may not get triggered. But, if we are cre

Re: [SURVEY] Remove Mesos support

2020-10-25 Thread Xintong Song
Thanks for sharing the information with us, Piyush and Lasse. @Piyush Thanks for offering the help. IMO, there are currently several problems that make supporting Flink on Mesos challenging for us. 1. *Lack of Mesos experts.* AFAIK, there are very few people (if not none) among the activ