Re: Reconstruct object through partial select query

2019-05-13 Thread Shahar Cizer Kobrinsky
Hey Hequn & Fabian, It seems like I found a reasonable way using both Row and my own TypeInfo: - I started by just using my own TypeInfo following your example. So I'm using a serializer which is basically a compound of the original event type serializer as well as a string array serializer …
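For readers following along, here is a minimal sketch of that compound approach, assuming a Flink 1.8-era TypeSerializer API and a hypothetical TaggedEvent<T> wrapper (names are illustrative, not the poster's actual code): the serializer simply delegates to the wrapped event serializer and a String[] serializer, field by field.

    import java.io.IOException;
    import org.apache.flink.api.common.typeutils.TypeSerializer;
    import org.apache.flink.api.common.typeutils.TypeSerializerSnapshot;
    import org.apache.flink.core.memory.DataInputView;
    import org.apache.flink.core.memory.DataOutputView;

    public class TaggedEventSerializer<T> extends TypeSerializer<TaggedEvent<T>> {

        private final TypeSerializer<T> eventSerializer;       // serializer of the original event type
        private final TypeSerializer<String[]> tagsSerializer; // serializer of the tag array

        public TaggedEventSerializer(TypeSerializer<T> events, TypeSerializer<String[]> tags) {
            this.eventSerializer = events;
            this.tagsSerializer = tags;
        }

        @Override
        public void serialize(TaggedEvent<T> record, DataOutputView target) throws IOException {
            eventSerializer.serialize(record.event, target); // original payload first
            tagsSerializer.serialize(record.tags, target);   // then the tags
        }

        @Override
        public TaggedEvent<T> deserialize(DataInputView source) throws IOException {
            return new TaggedEvent<>(eventSerializer.deserialize(source),
                    tagsSerializer.deserialize(source));
        }

        // The rest of the TypeSerializer contract delegates the same way.
        @Override public boolean isImmutableType() { return false; }
        @Override public TypeSerializer<TaggedEvent<T>> duplicate() {
            return new TaggedEventSerializer<>(eventSerializer.duplicate(), tagsSerializer.duplicate());
        }
        @Override public TaggedEvent<T> createInstance() {
            return new TaggedEvent<>(eventSerializer.createInstance(), new String[0]);
        }
        @Override public TaggedEvent<T> copy(TaggedEvent<T> from) {
            return new TaggedEvent<>(eventSerializer.copy(from.event), tagsSerializer.copy(from.tags));
        }
        @Override public TaggedEvent<T> copy(TaggedEvent<T> from, TaggedEvent<T> reuse) { return copy(from); }
        @Override public TaggedEvent<T> deserialize(TaggedEvent<T> reuse, DataInputView source) throws IOException {
            return deserialize(source);
        }
        @Override public void copy(DataInputView source, DataOutputView target) throws IOException {
            eventSerializer.copy(source, target);
            tagsSerializer.copy(source, target);
        }
        @Override public int getLength() { return -1; } // variable-length records
        @Override public TypeSerializerSnapshot<TaggedEvent<T>> snapshotConfiguration() {
            // a real implementation needs a proper snapshot for savepoint compatibility
            throw new UnsupportedOperationException("omitted in this sketch");
        }
        @Override public boolean equals(Object o) {
            return o instanceof TaggedEventSerializer
                    && ((TaggedEventSerializer<?>) o).eventSerializer.equals(eventSerializer)
                    && ((TaggedEventSerializer<?>) o).tagsSerializer.equals(tagsSerializer);
        }
        @Override public int hashCode() { return 31 * eventSerializer.hashCode() + tagsSerializer.hashCode(); }
        public boolean canEqual(Object o) { return o instanceof TaggedEventSerializer; } // required pre-Flink-1.9
    }

    class TaggedEvent<T> { // hypothetical wrapper: original event plus its tags
        final T event;
        final String[] tags;
        TaggedEvent(T event, String[] tags) { this.event = event; this.tags = tags; }
    }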

AvroSerializer

2019-05-13 Thread Debasish Ghosh
Hello - I am using Avro-based encoding with Flink. I see that Flink has an AvroSerializer that gets used for serializing Avro. Is it possible to provide a custom implementation of the serializer, e.g. I want to use MyAvroSerializer instead of AvroSerializer in *all* places? Is there any way to register …
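For completeness, a minimal sketch of the registration hooks ExecutionConfig does expose (MyEvent and MyAvroKryoSerializer are hypothetical names). Note these route a type through a custom Kryo-based serializer rather than swapping out AvroSerializer itself; whether they cover *all* the places AvroSerializer is used is exactly the open question here.

    import com.esotericsoftware.kryo.Kryo;
    import com.esotericsoftware.kryo.Serializer;
    import com.esotericsoftware.kryo.io.Input;
    import com.esotericsoftware.kryo.io.Output;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CustomSerializerRegistration {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Route MyEvent through a custom Kryo serializer instead of Flink's default pick.
            env.getConfig().registerTypeWithKryoSerializer(MyEvent.class, MyAvroKryoSerializer.class);

            // Force the Kryo path for generic types so the registration takes effect.
            env.getConfig().enableForceKryo();
        }
    }

    class MyEvent { public String id; } // stand-in for an Avro-generated class

    class MyAvroKryoSerializer extends Serializer<MyEvent> {
        @Override public void write(Kryo kryo, Output output, MyEvent event) {
            output.writeString(event.id); // a real implementation would delegate to Avro encoders
        }
        @Override public MyEvent read(Kryo kryo, Input input, Class<MyEvent> type) {
            MyEvent e = new MyEvent();
            e.id = input.readString();
            return e;
        }
    }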

Re: Reconstruct object through partial select query

2019-05-13 Thread Shahar Cizer Kobrinsky
Thanks for looking into it Hequn! I do not have a requirement to use TaggedEvent vs Row. But correct me if I am wrong, creating a Row will require me to know the internal fields of the original event at compile time, is that correct? I do have a requirement to support a generic original event type …
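To make the Row side of the trade-off concrete, a small sketch (the field names and types here are made up): the Row itself has dynamic arity, but the RowTypeInfo that Flink needs for serialization must be built from concrete field types, so the layout has to be known when the job is assembled, which is what makes Row awkward for a fully generic original event type.

    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.api.java.typeutils.RowTypeInfo;
    import org.apache.flink.types.Row;

    public class RowLayoutSketch {
        public static void main(String[] args) {
            // Field types (and names) are fixed up front, when the job is built.
            RowTypeInfo typeInfo = new RowTypeInfo(
                    new TypeInformation<?>[] {Types.STRING, Types.LONG},
                    new String[] {"id", "count"});

            Row row = new Row(2);       // arity is dynamic at runtime...
            row.setField(0, "event-1"); // ...but the RowTypeInfo above is what
            row.setField(1, 42L);       // tells Flink how to serialize each field
            System.out.println(typeInfo + " -> " + row);
        }
    }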

assignTimestampsAndWatermarks not work after KeyedStream.process

2019-05-13 Thread an0
Thanks everyone, I learned a lot through this single thread! On 2019/05/13 07:19:30, Fabian Hueske wrote: > Hi, > > On Fri., May 10, 2019 at 16:55, an0 wrote: > > > > Q2: after a, map(A), and map(B) would work fine. Assigning watermarks > > > immediately after a keyBy() is not a good idea, …

Flink and Prometheus setup in K8s

2019-05-13 Thread Wouter Zorgdrager
Hey all, I'm working on a deployment setup with Flink and Prometheus on Kubernetes. I'm running into the following issues: 1) Is it possible to use the default Flink Docker image [1] and enable the Prometheus reporter? Modifying the flink-conf.yaml is easy, but somehow the Prometheus reporter …
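For reference, a sketch of the two pieces this usually takes (the image tag and jar path are assumptions based on the standard Flink distribution layout, where the Prometheus reporter ships in opt/ rather than lib/ and is therefore not on the classpath of the stock image):

    # Dockerfile: put the reporter jar on the classpath of the stock image
    FROM flink:1.8.0
    RUN cp /opt/flink/opt/flink-metrics-prometheus-*.jar /opt/flink/lib/

    # flink-conf.yaml: declare the reporter
    metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
    metrics.reporter.prom.port: 9249-9250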

Re: assignTimestampsAndWatermarks not work after KeyedStream.process

2019-05-13 Thread Fabian Hueske
Hi, On Fri., May 10, 2019 at 16:55, an0 wrote: > > Q2: after a, map(A), and map(B) would work fine. Assigning watermarks > > immediately after a keyBy() is not a good idea, because 1) the records are > > shuffled and it's hard to reason about ordering, and 2) you lose the > > KeyedStream …
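A sketch of the placement being recommended (MyEvent and its fields are hypothetical): timestamps and watermarks are assigned before the keyBy(), while per-partition ordering is still predictable, and the keyBy() comes afterwards so the KeyedStream is not lost.

    import org.apache.flink.api.java.functions.KeySelector;
    import org.apache.flink.streaming.api.TimeCharacteristic;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class WatermarkPlacement {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

            env.fromElements(new MyEvent("k", 1_000L), new MyEvent("k", 2_000L))
                // 1) assign BEFORE the shuffle, where ordering is still easy to reason about
                .assignTimestampsAndWatermarks(
                        new BoundedOutOfOrdernessTimestampExtractor<MyEvent>(Time.seconds(5)) {
                            @Override
                            public long extractTimestamp(MyEvent e) {
                                return e.timestamp;
                            }
                        })
                // 2) keyBy afterwards; assigning here instead would see shuffled
                //    records and turn the KeyedStream back into a plain DataStream
                .keyBy(new KeySelector<MyEvent, String>() {
                    @Override
                    public String getKey(MyEvent e) {
                        return e.key;
                    }
                })
                .print();

            env.execute("watermark placement sketch");
        }
    }

    class MyEvent { // hypothetical event with an event-time field
        public String key;
        public long timestamp;
        public MyEvent() {}
        public MyEvent(String key, long timestamp) { this.key = key; this.timestamp = timestamp; }
    }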

Re: how to count kafka sink number

2019-05-13 Thread Konstantin Knauf
Hi Chong, to my knowledge, neither Flink's built-in metrics nor the metrics of the Kafka producer itself gives you this number directly. If your sink is chained (no serialization, no network) to another Flink operator, you could take the numRecordsOut of this operator instead. It will tell you how …
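A sketch of that workaround (names are hypothetical): a pass-through operator chained directly in front of the Kafka sink, so its numRecordsOut, or a custom counter like the one below, approximates the number of records handed to the sink.

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.metrics.Counter;

    public class CountingPassThrough<T> extends RichMapFunction<T, T> {
        private transient Counter toKafka;

        @Override
        public void open(Configuration parameters) {
            // registered under this operator's metric group, visible in any configured reporter
            toKafka = getRuntimeContext().getMetricGroup().counter("recordsToKafkaSink");
        }

        @Override
        public T map(T value) {
            toKafka.inc(); // one record handed to the (chained) sink
            return value;
        }
    }

    // Usage: stream.map(new CountingPassThrough<>()).addSink(kafkaProducer);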