Re: Kafka and Flink integration

2017-07-05 Thread Konstantin Knauf
Hi Jürgen, one easy way is to disable the Kryo fallback with env.getConfig().disableGenericTypes(); If it was using Kryo you should see an exception, which also states the class for which it needs to fall back to Kryo. This fails on the first such class, though. So depending on the other clas…
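The check Konstantin describes can be sketched as follows, as a minimal example against the Flink DataStream API (the commented-out pipeline is hypothetical; only disableGenericTypes() is the point here):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KryoFallbackCheck {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Disable the Kryo fallback: any type that Flink cannot handle with its
        // own serializers (POJO, Tuple, basic types, ...) now causes an
        // exception naming the offending class, instead of silently using Kryo.
        env.getConfig().disableGenericTypes();

        // Build the job as usual; if some type in the pipeline needs Kryo,
        // the job fails with an exception stating that type.
        // env.addSource(...).map(...).print();
        // env.execute("kryo-fallback-check");
    }
}
```

This is meant as a temporary diagnostic; once the offending types are known, the call can be removed again.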

Re: Kafka and Flink integration

2017-07-05 Thread Jürgen Thomann
Hi Stephan, do you know an easy way to find out if Kryo or POJO serialization is used? I have an object that would be a POJO, but it has one field whose type has no public no-argument constructor. As I understood the documentation, this should result in Kryo being used. Thanks, Jürgen On 03.0…
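Jürgen's situation can be illustrated in plain Java (the TimeSpan and PojoCheck names are made up for the example). Among Flink's POJO rules is that the types involved need a public no-argument constructor; a quick reflection check shows which class violates it:

```java
import java.lang.reflect.Constructor;

// Hypothetical field type: it has no public no-argument constructor, so a
// field of this type would make Flink fall back to Kryo for that field.
class TimeSpan {
    private final long millis;
    TimeSpan(long millis) { this.millis = millis; }
    public long getMillis() { return millis; }
}

public class PojoCheck {
    public TimeSpan span; // this field is the problem

    /** Returns true if clazz has a public no-argument constructor. */
    static boolean hasPublicNoArgConstructor(Class<?> clazz) {
        for (Constructor<?> c : clazz.getConstructors()) {
            if (c.getParameterCount() == 0) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasPublicNoArgConstructor(TimeSpan.class));  // false
        System.out.println(hasPublicNoArgConstructor(PojoCheck.class)); // true
    }
}
```

Together with disableGenericTypes() mentioned in the reply above, this makes it easy to pin down which field breaks POJO treatment.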

Re: Kafka and Flink integration

2017-07-03 Thread Stephan Ewen
Hi Urs!

Inside Flink (between Flink operators):
- Kryo is not a problem, but types must be registered up front for good performance
- Tuples and POJOs are often faster than the types that fall back to Kryo

Persistent storage (HDFS, Kafka, ...):
- Kryo is not recommended, because its binary da…

Re: Kafka and Flink integration

2017-06-22 Thread Urs Schoenenberger
Hi Greg, do you have a link where I could read up on the rationale behind avoiding Kryo? I'm currently facing a similar decision and would like to get some more background on this. Thank you very much, Urs On 21.06.2017 12:10, Greg Hogan wrote: > The recommendation has been to avoid Kryo where p

Re: Kafka and Flink integration

2017-06-21 Thread Greg Hogan
> The recommendation has been to avoid Kryo where possible. > General data exchange: avro or thrift. > Flink internal data exchange: POJO (or Tuple, which are slightly faster though l…

Re: Kafka and Flink integration

2017-06-21 Thread Ted Yu
Greg: Can you clarify the last part? Should it be: "the concrete type cannot be known"? Original message From: Greg Hogan Date: 6/21/17 3:10 AM (GMT-08:00) To: nragon Cc: user@flink.apache.org Subject: Re: Kafka and Flink integration The recommendation has been to avoid Kryo…

Re: Kafka and Flink integration

2017-06-21 Thread Greg Hogan
The recommendation has been to avoid Kryo where possible. General data exchange: avro or thrift. Flink internal data exchange: POJO (or Tuple, which are slightly faster though less readable, and there is an outstanding PR to narrow or close the performance gap). Kryo is useful for types which

Re: Kafka and Flink integration

2017-06-21 Thread nragon
So, serialization between producer application -> Kafka -> Flink Kafka consumer will use Avro, Thrift or Kryo, right? From there, the remaining pipeline can just use standard POJO serialization, which would be better? -- View this message in context: http://apache-flink-user-mailing-list-archive

Re: Kafka and Flink integration

2017-06-20 Thread Stephan Ewen
Hi! For general data exchange between systems, it is often good to have a more standard format. Being able to evolve the schema of types is very helpful if you evolve the data pipeline (which almost always happens eventually). For that reason, Avro and Thrift are very popular for that type of dat
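As a sketch of what Stephan means by schema evolution: with Avro, a reader can decode records written with an older schema, as long as newly added fields declare defaults. The schemas and field names below are invented for illustration; this is not from the thread:

```java
import java.io.ByteArrayOutputStream;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

public class AvroEvolutionSketch {
    public static void main(String[] args) throws Exception {
        // Writer schema (v1) and reader schema (v2, adds a defaulted field).
        Schema v1 = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Event\",\"fields\":["
          + "{\"name\":\"id\",\"type\":\"long\"}]}");
        Schema v2 = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Event\",\"fields\":["
          + "{\"name\":\"id\",\"type\":\"long\"},"
          + "{\"name\":\"source\",\"type\":\"string\",\"default\":\"unknown\"}]}");

        // Serialize with v1, e.g. on the producer side of a Kafka topic.
        GenericRecord rec = new GenericData.Record(v1);
        rec.put("id", 42L);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder enc = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(v1).write(rec, enc);
        enc.flush();

        // Deserialize with v2: the missing field is filled from its default.
        BinaryDecoder dec =
            DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
        GenericRecord decoded =
            new GenericDatumReader<GenericRecord>(v1, v2).read(null, dec);
        System.out.println(decoded.get("id") + " " + decoded.get("source"));
    }
}
```

Kryo's binary format offers no such resolution step, which is one reason it is discouraged for persistent or cross-system data.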

RE: Kafka and Flink integration

2017-06-20 Thread nragon
Just one more question :). Considering I'm producing into Kafka with an application other than Flink, which serializer should I use in order to get POJO types when consuming those same messages (now in Flink)?

RE: Kafka and Flink integration

2017-06-20 Thread nragon
Thanks, I'll try to refactor into POJOs.

RE: Kafka and Flink integration

2017-06-20 Thread Tzu-Li (Gordon) Tai
Yes, POJOs can contain other nested POJO types. You just have to make sure that the nested field is either public, or has corresponding public getter and setter methods that follow the Java beans naming conventions. On 21 June 2017 at 12:20:31 AM, nragon (nuno.goncal...@wedotechnologies.com
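Gordon's rule can be sketched in plain Java (Order and Customer are invented names): the outer type exposes its nested POJO via bean-style getter/setter, while the nested type uses a public field, so both variants satisfy the POJO conditions:

```java
// Sketch of a Flink-style POJO containing another POJO. Flink's POJO rules
// (public no-arg constructor; fields public or exposed via Java-beans
// getters/setters) apply to the nested field's type as well.
public class Order {
    private Customer customer;            // nested POJO field
    public Order() {}                     // public no-arg constructor
    public Customer getCustomer() { return customer; }
    public void setCustomer(Customer customer) { this.customer = customer; }

    public static class Customer {
        public String name;               // public field: no getter/setter needed
        public Customer() {}
    }

    public static void main(String[] args) {
        Order o = new Order();
        Customer c = new Customer();
        c.name = "nuno";
        o.setCustomer(c);
        System.out.println(o.getCustomer().name);
    }
}
```

Refactoring a deep object graph this way is mechanical: each class in the chain needs the no-arg constructor plus public fields or bean accessors.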

RE: Kafka and Flink integration

2017-06-20 Thread nragon
Can a POJO be a composition of other POJOs? My custom object has many dependencies, and in order to refactor it I would also have to change another 5 classes.

RE: Kafka and Flink integration

2017-06-20 Thread Nuno Rafael Goncalves
I can only repeat what Gordon wrote on Friday: "It’s usually always recommended to register your classes with Kryo [using registerKryoType()], to avoid the somewhat inefficient classname writing. Also, depending on the case, to decrease serializatio…

RE: Kafka and Flink integration

2017-06-20 Thread Tzu-Li (Gordon) Tai
-Original Message- From: Nico Kruber [mailto:n...@data-artisans.com] Sent: 20 June 2017 16:04 To: user@flink.apache.org Cc: Nuno Rafael Goncalves Subject: Re: Kafka and Flink integration "No, this is only necessary if you want to register a custom serializer itself [1]. Also…"

Re: Kafka and Flink integration

2017-06-20 Thread Nico Kruber

RE: Kafka and Flink integration

2017-06-20 Thread Nuno Rafael Goncalves
…to tuple just for the de/serialization process? According to JFR analysis, Kryo methods are hit a lot. -Original Message- From: Nico Kruber [mailto:n...@data-artisans.com] Sent: 20 June 2017 16:04 To: user@flink.apache.org Cc: Nuno Rafael Goncalves Su…

Re: Kafka and Flink integration

2017-06-20 Thread Nico Kruber
No, this is only necessary if you want to register a custom serializer itself [1]. Also, in case you are wondering about registerKryoType() - this is only needed as a performance optimisation. What exactly is your problem? What are you trying to solve? (I can't read JFR files here, and from what

Re: Kafka and Flink integration

2017-06-20 Thread nragon
Attaching jfr flight_recording_10228245112.jfr

Re: Kafka and Flink integration

2017-06-20 Thread nragon
Do I need to use registerTypeWithKryoSerializer() in my execution environment? My serialization into Kafka is done with the following snippet:

    try (ByteArrayOutputStream byteArrayOutStream = new ByteArrayOutputStream();
         Output output = new Output(byteArrayOutStream)) {
        Kryo kryo = new Kryo();
        …
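The truncated snippet above can be completed as a hedged sketch of a Kryo round-trip outside Flink (the Event class is hypothetical; note that Kryo 5.x requires kryo.register() by default, while the Kryo 2.x bundled with Flink at the time did not):

```java
import java.io.ByteArrayOutputStream;

import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;

public class KryoRoundTrip {
    public static class Event {
        public long id;
        public String name;
    }

    public static void main(String[] args) throws Exception {
        Kryo kryo = new Kryo();

        Event event = new Event();
        event.id = 1L;
        event.name = "test";

        // Serialize, e.g. before handing the bytes to a KafkaProducer.
        byte[] bytes;
        try (ByteArrayOutputStream byteArrayOutStream = new ByteArrayOutputStream();
             Output output = new Output(byteArrayOutStream)) {
            kryo.writeObject(output, event);
            output.flush();
            bytes = byteArrayOutStream.toByteArray();
        }

        // Deserialize on the consumer side; the class must have a
        // no-argument constructor for readObject().
        try (Input input = new Input(bytes)) {
            Event back = kryo.readObject(input, Event.class);
            System.out.println(back.id + " " + back.name);
        }
    }
}
```

To answer the question as Nico does below: registerTypeWithKryoSerializer() is only needed for registering a custom serializer, not for this kind of plain round-trip.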

Re: Kafka and Flink integration

2017-06-16 Thread nragon
My custom object is used across the whole job, so it'll be part of checkpoints. Can you point me to some references with examples?

Re: Kafka and Flink integration

2017-06-16 Thread Tzu-Li (Gordon) Tai
Hi! It’s usually always recommended to register your classes with Kryo, to avoid the somewhat inefficient classname writing. Also, depending on the case, to decrease serialization overhead, nothing really beats specific custom serialization. So, you can also register specific serializers for Kr
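Gordon's two suggestions look roughly like this in code (MyEvent and MyEventSerializer are placeholder names; registerKryoType() and registerTypeWithKryoSerializer() are the relevant ExecutionConfig methods; the serializer follows the Kryo 2.x Serializer API used by Flink at the time):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.Serializer;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;

public class KryoRegistrationSketch {
    public static class MyEvent {
        public long id;
    }

    // A specific custom serializer avoids Kryo's generic
    // field-by-field reflection handling.
    public static class MyEventSerializer extends Serializer<MyEvent> {
        @Override
        public void write(Kryo kryo, Output output, MyEvent event) {
            output.writeLong(event.id);
        }

        @Override
        public MyEvent read(Kryo kryo, Input input, Class<MyEvent> type) {
            MyEvent event = new MyEvent();
            event.id = input.readLong();
            return event;
        }
    }

    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Option 1: register the class so Kryo writes a small integer tag
        // instead of the full class name.
        env.getConfig().registerKryoType(MyEvent.class);

        // Option 2: additionally register the custom serializer for the type.
        env.getConfig().registerTypeWithKryoSerializer(
                MyEvent.class, MyEventSerializer.class);
    }
}
```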

Re: kafka and flink integration issue

2016-02-29 Thread Stephan Ewen
Good to hear. Thanks for letting us know! On Mon, Feb 29, 2016 at 8:14 PM, Pankaj Kumar wrote: > yes versioning was issue . Job is working fine on flink 0.10.2. > > On Mon, Feb 29, 2016 at 3:15 PM, Stephan Ewen wrote: > >> Hi! >> >> A "NoSuchMethodError" is always a sign of a version mixup. Ple

Re: kafka and flink integration issue

2016-02-29 Thread Pankaj Kumar
Yes, versioning was the issue. The job is working fine on Flink 0.10.2. On Mon, Feb 29, 2016 at 3:15 PM, Stephan Ewen wrote: > Hi! > > A "NoSuchMethodError" is always a sign of a version mixup. Please make > sure both versions (cluster and client) are exactly the same. > > Stephan > > > On Sat, Feb 27,

Re: kafka and flink integration issue

2016-02-29 Thread Stephan Ewen
Hi! A "NoSuchMethodError" is always a sign of a version mixup. Please make sure both versions (cluster and client) are exactly the same. Stephan On Sat, Feb 27, 2016 at 11:05 AM, Pankaj Kumar wrote: > Yes Robert , > i was trying to start Flink on cluster 0.10.1. > > But after changing flink v

Re: kafka and flink integration issue

2016-02-27 Thread Pankaj Kumar
Yes Robert, I was trying to start Flink on a cluster with 0.10.1. But after changing the Flink version to 0.10.1, I am still getting the same error. On Sat, Feb 27, 2016 at 2:47 PM, Robert Metzger wrote: > Hi Pankaj, > > I suspect you are trying to start Flink on a cluster with Flink 0.10.1 > installed?

Re: kafka and flink integration issue

2016-02-27 Thread Robert Metzger
Hi Pankaj, I suspect you are trying to start Flink on a cluster with Flink 0.10.1 installed? On Sat, Feb 27, 2016 at 9:20 AM, Pankaj Kumar wrote: > I am trying to integrate Kafka and Flink. > My pom file is below, where ${flink.version} is 0.10.2: > > > org.apache.flink > flink-java > ${fli
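The version mixup Robert suspects usually comes from Flink artifacts with differing versions in the pom, or from a client version that differs from the cluster. A sketch of a consistent dependency section for that era (artifact names as of Flink 0.10; verify against your own setup):

```xml
<properties>
  <!-- must match the version installed on the cluster -->
  <flink.version>0.10.1</flink.version>
</properties>

<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-java</artifactId>
    <version>${flink.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java</artifactId>
    <version>${flink.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka</artifactId>
    <version>${flink.version}</version>
  </dependency>
</dependencies>
```

Keeping every Flink artifact on the single ${flink.version} property avoids the NoSuchMethodError discussed above.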