Re: [DISCUSS] Flink 1.6 features

2018-06-04 Thread Antoine Philippot
Hi Stephen, Is it planned to consider this ticket https://issues.apache.org/jira/browse/FLINK-7883 about an atomic cancel-with-savepoint? It is my main concern about Flink, and I have to maintain a fork myself as we can't afford duplicate events due to reprocessing of messages between a savepoint a

Re: RichAsyncFunction in scala

2017-12-28 Thread Antoine Philippot
it = ??? > } > > – Ufuk > > > > On Thu, Dec 28, 2017 at 4:37 PM, Antoine Philippot > wrote: > > Hi, > > > > The Scala API lacks a RichAsyncFunction class, as well as the > > possibility to handle a class which extends AbstractRichFunction and

RichAsyncFunction in scala

2017-12-28 Thread Antoine Philippot
Hi, The Scala API lacks a RichAsyncFunction class, as well as the possibility to handle a class which extends AbstractRichFunction and implements AsyncFunction (from the Scala API). I made a small change on our current Flink fork because we need to use the open method to add our custom metri
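A minimal sketch, not taken from the thread, of the kind of workaround being described: a class that extends AbstractRichFunction and implements the Scala AsyncFunction trait so that open can register custom metrics. The class and metric names are hypothetical, and the signatures follow the ResultFuture-based Scala async API of later Flink releases, so they may differ slightly on 1.3/1.4:

```scala
import org.apache.flink.api.common.functions.AbstractRichFunction
import org.apache.flink.configuration.Configuration
import org.apache.flink.metrics.Counter
import org.apache.flink.streaming.api.scala.async.{AsyncFunction, ResultFuture}

// Hypothetical "rich" async function for the Scala API: AbstractRichFunction
// provides open()/getRuntimeContext, the Scala AsyncFunction trait provides asyncInvoke.
class RichAsyncLookup extends AbstractRichFunction with AsyncFunction[String, (String, Long)] {

  @transient private var lookups: Counter = _

  override def open(parameters: Configuration): Unit = {
    // Register a custom metric through the runtime context, as described in the mail.
    lookups = getRuntimeContext.getMetricGroup.counter("asyncLookups")
  }

  override def asyncInvoke(input: String, resultFuture: ResultFuture[(String, Long)]): Unit = {
    lookups.inc()
    // Placeholder for the real asynchronous call (e.g. an external service lookup).
    resultFuture.complete(Iterable((input, System.currentTimeMillis())))
  }
}
```

An instance of such a class can then be handed to AsyncDataStream.unorderedWait or orderedWait like any other AsyncFunction.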

Re: Non-intrusive way to detect which type is using kryo ?

2017-11-28 Thread Antoine Philippot
Hi Kien, The only way I found is to add this line at the beginning of the application to detect Kryo serialization: `com.esotericsoftware.minlog.Log.set(Log.LEVEL_DEBUG)` Antoine On Tue, 28 Nov 2017 at 02:41, Kien Truong wrote: > Hi, > > Is there any way to only log when Kryo serializer i
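A minimal sketch of where that call sits, assuming a standard Scala job skeleton (the job itself is hypothetical). MinLog is the logging library used by Kryo, so raising its level to DEBUG prints a line for every type that ends up going through Kryo serialization:

```scala
import com.esotericsoftware.minlog.Log
import org.apache.flink.streaming.api.scala._

object KryoDebugJob {
  def main(args: Array[String]): Unit = {
    // Very verbose: logs every Kryo (de)serialization, so enable it only while investigating.
    Log.set(Log.LEVEL_DEBUG)

    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // Alternative check: fail fast instead of logging whenever a type would
    // fall back to Kryo / generic serialization.
    // env.getConfig.disableGenericTypes()

    env.fromElements(1, 2, 3).map(_ * 2).print()
    env.execute("kryo debug sketch")
  }
}
```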

Re: Avoid duplicate messages while restarting a job for an application upgrade

2017-10-20 Thread Antoine Philippot
l request out of it. > > Piotrek > > On 9 Oct 2017, at 15:56, Antoine Philippot > wrote: > > Thanks for your advice, Piotr. > > Firstly, yes, we are aware that even with clean shutdown we can end up > with duplicated messages after a crash and it is acceptable as is it

Regression for dataStream.rescale method from 1.2.1 to 1.3.2

2017-10-13 Thread Antoine Philippot
Hi, After migrating our project from Flink 1.2.1 to Flink 1.3.2, we noticed a big performance drop due to poor balancing of vertices across task managers. In our use case, we set the default parallelism to the number of task managers: val stream: DataStream[Array[Byte]] = env.addSource(new Flink
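A self-contained sketch of the pattern the mail describes, with hypothetical values (the real job reads from a Kafka 0.9 source, truncated above): parallelism is set to the number of task managers, and rescale distributes each subtask's output only to a local group of downstream subtasks, so subtask-to-TaskManager placement decides how evenly the work is spread:

```scala
import org.apache.flink.streaming.api.scala._

object RescaleSketch {
  def main(args: Array[String]): Unit = {
    val numberOfTaskManagers = 4 // hypothetical; set to the cluster size in the real job

    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setParallelism(numberOfTaskManagers)

    // Placeholder source; the thread uses a Kafka 0.9 consumer here.
    val stream: DataStream[Array[Byte]] =
      env.fromElements(Array[Byte](1, 2), Array[Byte](3, 4))

    // rescale forwards records only to a local subset of downstream subtasks
    // (unlike rebalance, which round-robins across the whole cluster), so an
    // unlucky vertex placement can leave some task managers underused.
    stream
      .rescale
      .map(_.length)
      .print()

    env.execute("rescale sketch")
  }
}
```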

Re: Avoid duplicate messages while restarting a job for an application upgrade

2017-10-09 Thread Antoine Philippot
ean shutdown you can end up with > duplicated messages after a crash and there is no way around this with > Kafka 0.9. > > Piotrek > > On Oct 2, 2017, at 5:30 PM, Antoine Philippot > wrote: > > Thanks Piotr for your answer; we sadly can't use Kafka 0.11 for now

Re: Avoid duplicate messages while restarting a job for an application upgrade

2017-10-02 Thread Antoine Philippot
d support Kafka 0.9), but it is not currently being actively > developed. > > Piotr Nowojski > > On Oct 2, 2017, at 3:35 PM, Antoine Philippot > wrote: > > Hi, > > I'm working on a Flink streaming app with a Kafka 0.9 to Kafka 0.9 use case > which handles around 100

Avoid duplicate messages while restarting a job for an application upgrade

2017-10-02 Thread Antoine Philippot
Hi, I'm working on a Flink streaming app with a Kafka 0.9 to Kafka 0.9 use case which handles around 100k messages per second. To upgrade our application, we usually run a flink cancel with savepoint command followed by a flink run with the previously saved savepoint and the new application fat jar as
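For reference, a sketch of that two-step procedure as CLI commands; the savepoint directory, job id, entry class, and jar name are placeholders, not taken from the thread:

```sh
# 1. Cancel the running job and take a savepoint in one command.
flink cancel -s hdfs:///flink/savepoints <jobId>

# 2. Start the new fat jar from the savepoint that was just written.
flink run -s hdfs:///flink/savepoints/savepoint-<id> -c com.example.StreamingJob new-app-assembly.jar
```

The duplicates discussed in the thread arise because this cancel-with-savepoint sequence is not atomic: messages processed between the savepoint and the actual cancellation are reprocessed after the restart, which is what FLINK-7883 (referenced in the 2018 message above) asks to fix.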