Hi Stephen,
Are there plans to address this ticket,
https://issues.apache.org/jira/browse/FLINK-7883, about an atomic
cancel-with-savepoint?
It is my main concern about Flink, and I have to maintain a fork myself, as
we can't afford duplicate events due to the reprocessing of messages between a
savepoint and the actual cancellation.
Hi,
The Scala API lacks a version of the RichAsyncFunction class, as well as the
possibility to handle a class which extends AbstractRichFunction and
implements AsyncFunction (from the Scala API).
I made a small patch on our current Flink fork because we need to use the
open method to add our custom metrics.
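For reference, a minimal sketch of that kind of workaround, assuming Flink 1.3-era APIs; the class name, metric name, and the exact Scala async collector type are assumptions and may differ between Flink versions:

```scala
// Hypothetical sketch: an async function that also gets rich-function
// lifecycle hooks. Package and trait names assume Flink 1.3-era APIs.
import org.apache.flink.api.common.functions.AbstractRichFunction
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.scala.async.{AsyncCollector, AsyncFunction}

class MyRichAsyncFunction extends AbstractRichFunction
    with AsyncFunction[Array[Byte], String] {

  // open() gives access to the runtime context, e.g. to register a
  // custom metric ("myCustomCounter" is a made-up name).
  override def open(parameters: Configuration): Unit = {
    getRuntimeContext.getMetricGroup.counter("myCustomCounter")
  }

  override def asyncInvoke(input: Array[Byte],
                           collector: AsyncCollector[String]): Unit = {
    // ... perform the asynchronous call, then complete the collector ...
    collector.collect(Iterable(new String(input)))
  }
}
```

The point of the patch described above is that plain AsyncFunction instances get no open()/close() callbacks, so there is no hook for metric registration without something like this.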
Hi Kien,
The only way I found is to add this line at the beginning of the
application to detect Kryo serialization:
`com.esotericsoftware.minlog.Log.set(Log.LEVEL_DEBUG)`
Antoine
On Tue, Nov 28, 2017 at 02:41, Kien Truong
wrote:
> Hi,
>
> Is there any way to only log when the Kryo serializer is used?
> …a pull request out of it.
>
> Piotrek
>
> On 9 Oct 2017, at 15:56, Antoine Philippot
> wrote:
>
> Thanks for your advice, Piotr.
>
> Firstly, yes, we are aware that even with a clean shutdown we can end up
> with duplicated messages after a crash, and it is acceptable as it is
Hi,
After migrating our project from Flink 1.2.1 to Flink 1.3.2, we noticed a
big performance drop due to bad vertex balancing between task managers.
In our use case, we set the default parallelism to the number of task
managers:
val stream: DataStream[Array[Byte]] = env.addSource(new Flink
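(The snippet above is cut off; for context, a minimal sketch of that setup, assuming the Flink 1.3 Scala API — `numTaskManagers` is a stand-in for a value supplied by the deployment:)

```scala
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Example value; in practice this would come from the deployment config.
val numTaskManagers = 4

// Default parallelism = number of task managers, so each vertex ideally
// runs one subtask per task manager.
env.setParallelism(numTaskManagers)
```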
> …clean shutdown you can end up with
> duplicated messages after a crash and there is no way around this with
> Kafka 0.9.
>
> Piotrek
>
> On Oct 2, 2017, at 5:30 PM, Antoine Philippot
> wrote:
>
> Thanks for your answer, Piotr; sadly we can't use Kafka 0.11 for now
> …d support Kafka 0.9), but it is not currently being actively
> developed.
>
> Piotr Nowojski
>
> On Oct 2, 2017, at 3:35 PM, Antoine Philippot
> wrote:
>
Hi,
I'm working on a Flink streaming app with a Kafka 0.9 to Kafka 0.9 use case
which handles around 100k messages per second.
To upgrade our application, we used to run a flink cancel-with-savepoint
command followed by a flink run with the previously saved savepoint and the
new application fat jar as arguments.
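The upgrade procedure described above corresponds roughly to the following CLI steps (a sketch: the job ID, savepoint directory, and jar name are placeholders):

```shell
# Cancel the job while taking a savepoint. As FLINK-7883 discusses, these
# two steps are not atomic on the cluster side, which is the source of the
# duplicate-event risk described in this thread.
flink cancel -s hdfs:///flink/savepoints <job-id>

# Start the new version of the application from the saved savepoint.
flink run -s hdfs:///flink/savepoints/<savepoint-path> new-application-fat.jar
```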