Re: Default Flink Metrics Graphite

2020-08-25 Thread Dawid Wysakowicz
Hi Vijay, I think the problem might be that you are using the wrong version of the reporter. You say you used flink-metrics-graphite-1.10.0.jar from 1.10 as a plugin, but the reporter was only migrated to plugins in 1.11 [1]. I'd recommend trying it out with the same 1.11 version of Flink and the Graphite report
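
A minimal sketch of the kind of flink-conf.yaml entries involved, assuming the factory-style reporter configuration documented for Flink 1.11; the reporter name "grph", host and port are placeholders:

    # Graphite reporter loaded as a plugin (Flink 1.11+); host/port are placeholders
    metrics.reporter.grph.factory.class: org.apache.flink.metrics.graphite.GraphiteReporterFactory
    metrics.reporter.grph.host: localhost
    metrics.reporter.grph.port: 2003
    metrics.reporter.grph.protocol: TCP
    metrics.reporter.grph.interval: 60 SECONDS

As Dawid notes, the flink-metrics-graphite jar should come from the same Flink release as the cluster it runs on.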

Re: Default Flink Metrics Graphite

2020-08-25 Thread Vijayendra Yadav
Hi Nikola, To rule out any other cluster issues, I have tried it on my local machine now. Steps as follows, but I don't see any metrics yet. 1) Set up local Graphite: docker run -d --name graphite --restart=always -p 80:80 -p 2003-2004:2003-2004 -p 2023-2024:2023-2024 -p 8125:8125/udp -p 8126
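
As a side check, one hedged way to confirm the Graphite container itself accepts data is to push a single datapoint into Carbon's plaintext port by hand; the metric name is made up and the -q0 flag depends on the netcat variant:

    # send one test datapoint to Carbon on port 2003, then look for it in the Graphite UI
    echo "flink.smoke.test 1 $(date +%s)" | nc -q0 localhost 2003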

Re: [ANNOUNCE] Apache Flink 1.10.2 released

2020-08-25 Thread Xingbo Huang
Thanks Zhu for the great work and everyone who contributed to this release! Best, Xingbo Guowei Ma wrote on Wed, Aug 26, 2020 at 12:43 PM: > Hi, > > Thanks a lot for being the release manager, Zhu Zhu! > Thanks to everyone who contributed to this! > > Best, > Guowei > > > On Wed, Aug 26, 2020 at 11:18 AM Yun Tang wr

Re: [ANNOUNCE] Apache Flink 1.10.2 released

2020-08-25 Thread Guowei Ma
Hi, Thanks a lot for being the release manager, Zhu Zhu! Thanks to everyone who contributed to this! Best, Guowei On Wed, Aug 26, 2020 at 11:18 AM Yun Tang wrote: > Thanks for Zhu's work to manage this release and everyone who contributed > to this! > > Best, > Yun Tang >

Re: [ANNOUNCE] Apache Flink 1.10.2 released

2020-08-25 Thread Yun Tang
Thanks for Zhu's work to manage this release and everyone who contributed to this! Best, Yun Tang From: Yangze Guo Sent: Tuesday, August 25, 2020 14:47 To: Dian Fu Cc: Zhu Zhu ; dev ; user ; user-zh Subject: Re: [ANNOUNCE] Apache Flink 1.10.2 released Thanks

Re: Loading FlinkKafkaProducer fails with LinkError

2020-08-25 Thread Yuval Itzchakov
They are definitely equal, the same JAR is copied in subsequent lines in the Dockerfile. Regarding the NoSuchMethodException, I'll look it up and let you know tomorrow. On Tue, Aug 25, 2020, 22:59 Chesnay Schepler wrote: > The simplest answer is that they are in fact not equal; maybe it is a ja
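
A hedged way to back up the "definitely equal" claim is to compare digests of the two copies inside the running image; the paths follow the standard Flink Docker layout and the jar name is hypothetical:

    # identical digests mean the copy in lib/ is byte-for-byte the same uber jar as the one in usrlib/
    sha256sum /opt/flink/lib/my-job-assembly.jar /opt/flink/usrlib/my-job-assembly.jar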

Re: [DISCUSS] Remove Kafka 0.10.x connector (and possibly 0.11.x)

2020-08-25 Thread Chesnay Schepler
+1 to remove both the 0.10 and 0.11 connectors. The connectors have not been actively developed for some time. They are basically just sitting around, creating noise through test instabilities and eating CI time. It would also allow us to really simplify the module structure of the Kafka con

[ANNOUNCE] Weekly Community Update 2020/31-34

2020-08-25 Thread Konstantin Knauf
Dear community, The "weekly" community update is back after a short summer break! This time I've tried to cover most of what happened during the last four weeks, but I might pick up some older topics in the next weeks' updates, too. Activity on the dev@ mailing list has picked up quite a bit as f

Re: Loading FlinkKafkaProducer fails with LinkError

2020-08-25 Thread Chesnay Schepler
The simplest answer is that they are in fact not equal; maybe it is a jar of an older version of your setup? Can you give some details on the NoSuchMethodException? Specifically whether it tries to access something from the Kafka connector, or from your own user code. On 25/08/2020 21:27, Yu

Re: Loading FlinkKafkaProducer fails with LinkError

2020-08-25 Thread Yuval Itzchakov
OK, I think I figured it out. It looks like the uber-jar is also being placed under `lib`, which is probably the cause of the problem. Question is, why does it identify it as two different versions? It's exactly the same JAR. On Tue, Aug 25, 2020 at 10:22 PM Yuval Itzchakov wrote: > I'm afraid

Re: Loading FlinkKafkaProducer fails with LinkError

2020-08-25 Thread Yuval Itzchakov
I'm afraid it's not being printed out due to different log levels :( Yes, I build the image myself. It takes the tar file from https://archive.apache.org/dist/flink/flink-1.9.0/ and unpacks it into the image. I've run: find . -iname "*.jar" | x

Re: subscribe

2020-08-25 Thread Chesnay Schepler
Please see these instructions on how to subscribe to the mailing lists. On 25/08/2020 20:19, s_penakalap...@yahoo.com wrote: Hi Team, I am interested in Flink. Regards, Sunitha.

Re: Loading FlinkKafkaProducer fails with LinkError

2020-08-25 Thread Chesnay Schepler
The NoSuchMethodException shows that the class is still on the classpath, but with a different version than your code is expecting. Otherwise you would've gotten a different error. This implies that there are 2 versions of the kafka dependencies on the classpath in your original run; it suddenly
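
A small diagnostic in the spirit of this answer, sketched with plain JDK reflection rather than any Flink API: print which jar the producer class actually resolves to at runtime.

    // Hypothetical helper: run from the job's main() to see whether
    // FlinkKafkaProducer is loaded from the uber jar or from a copy in lib/.
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

    public class WhereIsKafkaLoadedFrom {
        public static void main(String[] args) {
            System.out.println(
                FlinkKafkaProducer.class
                    .getProtectionDomain()
                    .getCodeSource()
                    .getLocation());
        }
    }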

Re: Loading FlinkKafkaProducer fails with LinkError

2020-08-25 Thread Yuval Itzchakov
Will it be enough to provide you the output of `-verbose:class`? Or do you want me to add additional arguments? On Tue, Aug 25, 2020 at 6:20 PM Arvid Heise wrote: > Small correction: you'd bundle the connectors in your uber jar like you > did but you usually don't put it into flink-dist. > > So

Re: [DISCUSS] Removing deprecated methods from DataStream API

2020-08-25 Thread Konstantin Knauf
I would argue that, for @Public methods that became ineffective, the guarantees were already broken at the point they became ineffective (and were deprecated): - ExecutionConfig#disable/enableSysoutLogging (deprecated in 1.10) - ExecutionConfig#set/isFailTaskOnCheckpointError (deprecated in 1.9) Removing thes

Re: Loading FlinkKafkaProducer fails with LinkError

2020-08-25 Thread Yuval Itzchakov
Hi Arvid, I'm running Flink in a job cluster on k8s using the Lyft Operator. The flink image that I'm building does not have the flink-connector-kafka library in its JAR; I've made sure of this using `jar -tf`. Additionally, once I removed the dependency from my uber jar, it failed with a "NoSuch

Re: [DISCUSS] Remove Kafka 0.10.x connector (and possibly 0.11.x)

2020-08-25 Thread Konstantin Knauf
Hi Aljoscha, I am assuming you're asking about dropping the flink-connector-kafka-0.10/0.11 modules, right? Or are you talking about removing support for Kafka 0.10/0.11 from the universal connector? I am in favor of removing flink-connector-kafka-0.10/0.11 in the next release. These modules woul

Re: Flink Table API/SQL debugging, testability

2020-08-25 Thread Dawid Wysakowicz
Hi, What exactly are you looking for? I think the simplest stub for a test could be something like: final TableEnvironment env = TableEnvironment.create(...); TableResult result = env.fromValues(...).select(...).execute(); try (Clo
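
Filled out, the stub could look roughly like the sketch below, assuming the Flink 1.11 Table API (fromValues, TableResult#collect); values and column names are placeholders:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;
    import org.apache.flink.table.api.TableResult;
    import org.apache.flink.types.Row;
    import org.apache.flink.util.CloseableIterator;

    import static org.apache.flink.table.api.Expressions.$;
    import static org.apache.flink.table.api.Expressions.row;

    public class TableApiSmokeTest {
        public static void main(String[] args) throws Exception {
            // batch-mode environment backed by the Blink planner
            TableEnvironment env = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build());

            // inline test data instead of a real source
            TableResult result = env.fromValues(row(1, "a"), row(2, "b"))
                .select($("f0"), $("f1"))
                .execute();

            // pull the rows back into the test and assert on them
            try (CloseableIterator<Row> it = result.collect()) {
                while (it.hasNext()) {
                    System.out.println(it.next());
                }
            }
        }
    }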

Re: How jobmanager and task manager communicates with each other ?

2020-08-25 Thread Andrey Zagrebin
Hi Sidhant, (1) If we are not using Flink's HA services then how can we dynamically > configure task manager nodes to connect to the job manager? Any suggestions or > best practices? Not sure what you mean by 'dynamically'. I think you have to restart the task manager with the new configuration to co
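
For the non-HA case, the wiring is plain configuration; a sketch of the relevant flink-conf.yaml entries on each task manager, with the host name as a placeholder:

    # every TaskManager must be (re)started pointing at the JobManager's address
    jobmanager.rpc.address: jobmanager-host
    jobmanager.rpc.port: 6123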

Re: Transition Flink job from Java to Scala with state migration

2020-08-25 Thread Andrey Zagrebin
Hi Daksh, You need to find which type causes the problem: Long, MyCustomObject, or maybe something else. You could share the logs with the full exception stack trace. My guess is that your Scala code uses another serializer for the failing type. See also the docs to understand serialization in Flink [1] Wh
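
One way to narrow this down, sketched on the Java side: give the state descriptors an explicit TypeInformation so the Java and Scala versions of the job cannot silently pick different serializers. MyCustomObject is only a stand-in for the custom type mentioned in the thread:

    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.api.common.typeinfo.Types;

    public class StateDescriptors {
        // placeholder POJO standing in for the custom type from the thread
        public static class MyCustomObject {
            public long id;
            public MyCustomObject() {}
        }

        // pin the serializer for the Long-typed state explicitly
        public static final ValueStateDescriptor<Long> LAST_VALUE =
            new ValueStateDescriptor<>("last-value", Types.LONG);

        // use the same TypeInformation in both the Java and the Scala job,
        // instead of letting the Scala API derive its own serializer
        public static final ValueStateDescriptor<MyCustomObject> CUSTOM =
            new ValueStateDescriptor<>("custom", TypeInformation.of(MyCustomObject.class));
    }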

Re: Flink OnCheckpointRollingPolicy streamingfilesink

2020-08-25 Thread Andrey Zagrebin
Hi Vijay, I think it depends on your job requirements, in particular how many records are processed per second and how many resources you have to process them. If the checkpointing interval is short, then the checkpointing overhead can be too high and you need more resources to efficiently keep up
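
A minimal sketch of the setup under discussion: with OnCheckpointRollingPolicy, files roll on every checkpoint, so the checkpoint interval directly controls how many files are produced. The path and the 60-second interval are placeholders, not recommendations.

    import org.apache.flink.api.common.serialization.SimpleStringEncoder;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
    import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.OnCheckpointRollingPolicy;

    public class RollingSinkSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.enableCheckpointing(60_000); // one checkpoint, and thus one roll, per minute

            StreamingFileSink<String> sink = StreamingFileSink
                .forRowFormat(new Path("file:///tmp/flink-out"), new SimpleStringEncoder<String>("UTF-8"))
                .withRollingPolicy(OnCheckpointRollingPolicy.build())
                .build();

            env.fromElements("a", "b", "c").addSink(sink);
            env.execute("rolling-sink-sketch");
        }
    }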

Re: How jobmanager and task manager communicates with each other ?

2020-08-25 Thread sidhant gupta
Hi Till, Thanks for the reply. (1) If we are not using Flink's HA services, then how can we dynamically configure task manager nodes to connect to the job manager? Any suggestions or best practices? (2) Which of Flink's HA services can be used for service discovery of the job manager, and how? Regards S

Re: Default Flink Metrics Graphite

2020-08-25 Thread Vijayendra Yadav
Thanks for the inputs, Nikola. I will check on the Graphite side. Sent from my iPhone > On Aug 23, 2020, at 9:26 PM, Nikola Hrusov wrote: > > > Hi Vijay, > > Your steps look correct to me. > Perhaps you can double check that the Graphite port you are sending to is > correct? The default Carbon port is

Re: Loading FlinkKafkaProducer fails with LinkError

2020-08-25 Thread Arvid Heise
Small correction: you'd bundle the connectors in your uber jar like you did but you usually don't put it into flink-dist. So please double-check if it's also in flink-dist and remove it there. If not, then please add the full classpath log statement. It might also be a bug related to restoring an

Re: Loading FlinkKafkaProducer fails with LinkError

2020-08-25 Thread Arvid Heise
Hi Yuval, How do you execute Flink? Can you show us the log entry with the classpath? I'm guessing that you have Kafka bundled in your uber-jar and also have the connector in flink-dist/lib. If so, you simply need to remove it in one place. In general, if you use flink-dist, you'd no
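
A hedged pair of checks for the two places the connector can end up in; the paths follow the standard Flink image layout and the jar name is hypothetical:

    # the connector classes are expected inside the uber jar ...
    jar -tf /opt/flink/usrlib/my-job-assembly.jar | grep -c FlinkKafkaProducer
    # ... but no Kafka connector jar should sit in flink-dist's lib/
    ls /opt/flink/lib | grep -i kafka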

Re: OOM error for heap state backend.

2020-08-25 Thread Andrey Zagrebin
Hi Vishwas, If you use Flink 1.7, check the older memory model docs [1] because you referred to the new memory model of Flink 1.10 in your reference 2. Could you also share a screenshot where you get the state size of 2.5 GB? Do you mean Flink WebUI? Generally, it is quite hard to estimate the on-

Transition Flink job from Java to Scala with state migration

2020-08-25 Thread Daksh Talwar
Hello, We run a Stream API based Flink application on 1.10.0, coded in Java. While moving this job to Scala (for reasons unrelated to Flink/this application), we are getting the following error when trying to instantiate the Scala job from a savepoint taken in the Java job: org.apache.flink.util.StateMi

Re: JSON to Parquet

2020-08-25 Thread Dawid Wysakowicz
Hi Averell, If you can describe the JSON schema I'd suggest looking into the SQL API. (And I think you do need to define your schema upfront. If I am not mistaken, Parquet must know the common schema.) Then you could do something like: CREATE TABLE json ( -- define the schema of your JSON data ) WIT
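
Spelled out a little further, that route could look like the sketch below; it assumes the Flink 1.11 filesystem connector with the json and parquet formats available on the classpath, and all table names, columns and paths are placeholders:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class JsonToParquet {
        public static void main(String[] args) {
            TableEnvironment env = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build());

            // source table over JSON files; columns are illustrative
            env.executeSql(
                "CREATE TABLE json_source (" +
                "  id BIGINT," +
                "  payload STRING" +
                ") WITH (" +
                "  'connector' = 'filesystem'," +
                "  'path' = 'file:///tmp/in'," +
                "  'format' = 'json')");

            // sink table writing the same schema as Parquet
            env.executeSql(
                "CREATE TABLE parquet_sink (" +
                "  id BIGINT," +
                "  payload STRING" +
                ") WITH (" +
                "  'connector' = 'filesystem'," +
                "  'path' = 'file:///tmp/out'," +
                "  'format' = 'parquet')");

            // submits the conversion job; in a test you may want to wait on the returned TableResult
            env.executeSql("INSERT INTO parquet_sink SELECT id, payload FROM json_source");
        }
    }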

Re: Hive module in Flink 1.10 is missing plus, greaterThan, and other functions

2020-08-25 Thread Andrey Zagrebin
Hi Faaron, This mailing list is for support in English. Could you translate your question into English? You can also subscribe to the user mailing list in Chinese to get support in Chinese [1] Best, Andrey [1] user-zh-subscr...@flink.apache.org On Fri, Aug 21, 2020 at 4:43 AM faaron zheng wrot

Re: Monitor the usage of keyed state

2020-08-25 Thread Andrey Zagrebin
Hi Mu, I would suggest looking into the RocksDB metrics, which you can enable as Flink metrics [1] Best, Andrey [1] https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/config.html#rocksdb-native-metrics On Fri, Aug 21, 2020 at 4:27 AM Mu Kong wrote: > Hi community, > > I have a Flink
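
A short sketch of what enabling a few of those native metrics looks like in flink-conf.yaml; the keys below are the commonly used key-count and size ones, and each is off by default because collecting it has some overhead:

    # expose selected RocksDB native metrics through Flink's metric system
    state.backend.rocksdb.metrics.estimate-num-keys: true
    state.backend.rocksdb.metrics.estimate-live-data-size: true
    state.backend.rocksdb.metrics.total-sst-files-size: true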

Example flink run with security options? Running on k8s in my case

2020-08-25 Thread Adam Roberts
Hey everyone, I've been experimenting with Flink using https://github.com/GoogleCloudPlatform/flink-on-k8s-operator and I believe I've successfully deployed a JobManager and TaskManager with security enabled, and a self-signed certificate (the pods come up great). However, I can't do much with th

Re: [DISCUSS] FLIP-134: DataStream Semantics for Bounded Input

2020-08-25 Thread Aljoscha Krettek
Thanks for creating this FLIP! I think the general direction is very good but I think there are some specifics that we should also put in there and that we may need to discuss here as well. ## About batch vs streaming scheduling I think we shouldn't call it "scheduling", because the decision b

Flink Table API/SQL debugging, testability

2020-08-25 Thread narasimha
I was looking for testability and debugging practices for the Flink Table API/SQL. They are really difficult to find compared to the Streaming API. Can someone please share their experiences with debugging and testability? -- A.Narasimha Swamy