Thanks Till. I will take a look at your pointers. Mans
On Monday, August 1, 2016 6:27 AM, Till Rohrmann wrote:
Hi Mans,
Milind is right that, in general, external systems have to play along if you
want to achieve exactly-once processing guarantees while writing to these
systems: either by supporting idempotent operations or by allowing their
state to be rolled back.
In the batch world, this usually means overwriting the previously written
output when a job is re-executed.
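As a minimal sketch of the idempotent-write option: a sink that writes an
absolute value under a deterministic key, so a replayed record overwrites
the row instead of double-counting. The PostgreSQL-style upsert, connection
details and Tuple2<counterId, absoluteValue> layout are illustrative
assumptions, not anything from this thread.

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Writes an absolute counter value keyed by a deterministic id. If Flink
// replays the same record after a failure, the upsert simply overwrites
// the row instead of incrementing it a second time.
public class IdempotentUpsertSink extends RichSinkFunction<Tuple2<String, Long>> {

    private transient Connection conn;
    private transient PreparedStatement upsert;

    @Override
    public void open(Configuration parameters) throws Exception {
        conn = DriverManager.getConnection("jdbc:postgresql://localhost/mydb", "user", "secret");
        upsert = conn.prepareStatement(
                "INSERT INTO counters (id, value) VALUES (?, ?) "
                + "ON CONFLICT (id) DO UPDATE SET value = EXCLUDED.value");
    }

    @Override
    public void invoke(Tuple2<String, Long> record) throws Exception {
        upsert.setString(1, record.f0);
        upsert.setLong(2, record.f1); // absolute value, not a delta
        upsert.executeUpdate();
    }

    @Override
    public void close() throws Exception {
        if (upsert != null) upsert.close();
        if (conn != null) conn.close();
    }
}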
Flink operates in conjunction with sources and sinks. So, yes, there are
things that an underlying sink (or a source) must support in conjunction
with Flink to enable particular semantics.
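On the Flink side, the relevant configuration is checkpointing in
exactly-once mode; a minimal sketch (the interval and job body are
placeholders), bearing in mind that this only covers Flink's internal
state, not what the sink does with replayed records.

import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointedJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Exactly-once here refers to Flink's own state. On recovery, records
        // since the last checkpoint are replayed, so a sink can still see the
        // same record more than once unless it cooperates (idempotence,
        // transactions, rollback).
        env.enableCheckpointing(10_000L, CheckpointingMode.EXACTLY_ONCE);

        // ... sources, transformations and sinks go here ...
        // env.execute("checkpointed-job");
    }
}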
On Jul 30, 2016 11:46 AM, "M Singh" wrote:
Thanks Konstantin.
Just to clarify - unless the target database is resilient to duplicates,
Flink's once-only configuration will not avoid duplicate updates.
Mans
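For illustration, the kind of update that is not resilient to duplicates is
a blind increment: if Flink replays a record after a failure, the same delta
is applied again. Table and column names are made up.

import java.sql.Connection;
import java.sql.PreparedStatement;

public class BlindIncrement {

    // NOT idempotent: calling this twice for the same logical record adds
    // the delta twice, which is exactly the duplicate-update concern above.
    static void apply(Connection conn, String counterId, long delta) throws Exception {
        try (PreparedStatement stmt = conn.prepareStatement(
                "UPDATE counters SET value = value + ? WHERE id = ?")) {
            stmt.setLong(1, delta);
            stmt.setString(2, counterId);
            stmt.executeUpdate();
        }
    }
}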
On Saturday, July 30, 2016 7:40 AM, Konstantin Knauf wrote:
Hi Mans,
Depending on the number of operations and the particular database, you
might be able to use transactions.
Maybe you can also find a data model that is more resilient to these
kinds of failures.
Cheers,
Konstantin
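A rough sketch of the transactional idea, using plain JDBC with illustrative
names: all updates of one batch are committed atomically, so a failure
mid-batch rolls everything back and a retry never sees a partially applied
write.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Map;

public class TransactionalBatchWriter {

    // Applies all counter deltas of one batch in a single transaction:
    // either every row is updated or none is.
    public static void writeBatch(Map<String, Long> deltasByCounterId) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret")) {
            conn.setAutoCommit(false);
            try (PreparedStatement stmt = conn.prepareStatement(
                    "UPDATE counters SET value = value + ? WHERE id = ?")) {
                for (Map.Entry<String, Long> entry : deltasByCounterId.entrySet()) {
                    stmt.setLong(1, entry.getValue());
                    stmt.setString(2, entry.getKey());
                    stmt.addBatch();
                }
                stmt.executeBatch();
                conn.commit();   // all updates become visible together
            } catch (Exception e) {
                conn.rollback(); // a partial batch never sticks
                throw e;
            }
        }
    }
}

Note that this only removes the partially applied batch; a batch that was
fully committed and is then retried anyway would still be applied twice,
which is where idempotent writes or deduplication come in.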
On 29.07.2016 19:26, M Singh wrote:
Hi:
I have a use case where we need to update a counter in a db, and for this we
need to guarantee once-only processing. If we have some entries in a batch and
it partially updates the counters and then fails, and Flink then retries the
processing for that batch, some of the counters will be updated twice.
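One data model along the lines suggested above is to record every update as
its own row under a deterministic update id and let the primary key reject
replays; the counter value is then the sum of the accepted rows. The schema
and names below are illustrative assumptions.

import java.sql.Connection;
import java.sql.PreparedStatement;

public class DeduplicatedCounterUpdate {

    // The updateId must be deterministic for a given source record (e.g.
    // derived from its key or offset), so a retried batch re-inserts the
    // same ids and the database silently drops the duplicates.
    static void apply(Connection conn, String updateId, String counterId, long delta) throws Exception {
        try (PreparedStatement stmt = conn.prepareStatement(
                "INSERT INTO counter_updates (update_id, counter_id, delta) VALUES (?, ?, ?) "
                + "ON CONFLICT (update_id) DO NOTHING")) {
            stmt.setString(1, updateId);
            stmt.setString(2, counterId);
            stmt.setLong(3, delta);
            stmt.executeUpdate();
        }
    }

    // The current counter value is then:
    //   SELECT counter_id, SUM(delta) FROM counter_updates GROUP BY counter_id
}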