Flink operates in conjunction with sources and sinks. So, yes, there are
things that an underlying sink (or a source) must support in conjunction
with Flink to enable a particular semantic.
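As one illustration of what a sink can support to make retries safe, here is a plain-Java sketch of an idempotent sink that deduplicates by record id, so replaying a batch after a partial failure cannot increment a counter twice. The class and method names are invented for illustration; this is not Flink's actual sink API, and in a real sink the set of seen ids would live in the database itself (e.g. as a primary-key constraint).

```java
import java.util.HashSet;
import java.util.Set;

/** Hypothetical sketch (not Flink's API): a sink that deduplicates by
 *  record id, so a replayed batch after a failure is a no-op for the
 *  records that already succeeded. */
class IdempotentCounterSink {
    // In a real system this set would be the DB's primary-key index.
    private final Set<String> seenIds = new HashSet<>();
    private long counter = 0;

    void write(String recordId) {
        if (seenIds.add(recordId)) { // add() returns false for duplicates
            counter++;
        }
    }

    long counter() { return counter; }
}

public class Demo {
    public static void main(String[] args) {
        IdempotentCounterSink sink = new IdempotentCounterSink();
        // First (partially successful) attempt:
        sink.write("evt-1");
        sink.write("evt-2");
        // Retry replays the whole batch plus the remaining record:
        sink.write("evt-1");
        sink.write("evt-2");
        sink.write("evt-3");
        System.out.println(sink.counter()); // prints 3, not 5
    }
}
```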
On Jul 30, 2016 11:46 AM, "M Singh" <mans2si...@yahoo.com> wrote:

> Thanks Konstantin.
>
> Just to clarify - unless the target database is resilient to duplicates,
> Flink's exactly-once configuration alone will not avoid duplicate updates.
>
> Mans
>
>
> On Saturday, July 30, 2016 7:40 AM, Konstantin Knauf <
> konstantin.kn...@tngtech.com> wrote:
>
>
> Hi Mans,
>
> depending on the number of operations and the particular database, you
> might be able to use transactions.
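The transactional approach can be sketched as follows, with an in-memory stand-in for a real database transaction (the class is invented for illustration; with a real DB this would be JDBC autocommit=false plus commit/rollback). The point is that a batch of counter increments is applied atomically: a mid-batch failure rolls everything back, so a retry cannot double-count the entries that had already succeeded.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Illustrative in-memory stand-in for a transactional DB:
 *  apply a whole batch of counter increments, or none of them. */
class TransactionalCounters {
    private final Map<String, Long> committed = new HashMap<>();

    /** Increment every key in the batch atomically. */
    void applyBatch(List<String> keys, boolean failMidway) {
        Map<String, Long> staged = new HashMap<>(committed); // work on a copy
        int i = 0;
        for (String k : keys) {
            if (failMidway && i++ == keys.size() / 2) {
                // Nothing below the commit point has happened yet,
                // so this is an automatic "rollback".
                throw new RuntimeException("simulated mid-batch failure");
            }
            staged.merge(k, 1L, Long::sum);
        }
        committed.clear();     // commit: swap in the staged state
        committed.putAll(staged);
    }

    long get(String k) { return committed.getOrDefault(k, 0L); }
}

public class TxDemo {
    public static void main(String[] args) {
        TransactionalCounters db = new TransactionalCounters();
        try {
            db.applyBatch(Arrays.asList("a", "b", "c", "d"), true); // fails midway
        } catch (RuntimeException e) {
            // nothing was committed
        }
        db.applyBatch(Arrays.asList("a", "b", "c", "d"), false);    // retry
        System.out.println(db.get("a")); // prints 1, not 2
    }
}
```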
>
> Maybe you can also find a data model that is more resilient to this
> kind of failure.
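One such resilient data model, sketched in plain Java (names invented for illustration): instead of maintaining a mutable counter, store one row per event keyed by event id and derive the count on read. Re-writing an event during a retry is then a natural no-op, much like an `INSERT ... ON CONFLICT DO NOTHING` in SQL, so partial-failure replays cannot inflate the count.

```java
import java.util.HashMap;
import java.util.Map;

/** Derive the counter from raw event rows instead of mutating it.
 *  Upserting the same event id twice has no effect on the count. */
class EventTable {
    private final Map<String, String> rows = new HashMap<>(); // eventId -> payload

    void upsert(String eventId, String payload) {
        rows.put(eventId, payload); // idempotent by key
    }

    long count() { return rows.size(); }
}

public class ModelDemo {
    public static void main(String[] args) {
        EventTable t = new EventTable();
        t.upsert("evt-1", "click");
        t.upsert("evt-1", "click"); // replay after a failed batch: no effect
        t.upsert("evt-2", "click");
        System.out.println(t.count()); // prints 2
    }
}
```

The trade-off is read-side cost: the count becomes a query (or a materialized view) over the event rows rather than a single cell.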
>
> Cheers,
>
> Konstantin
>
> On 29.07.2016 19:26, M Singh wrote:
> > Hi:
> >
> > I have a use case where we need to update a counter in a db, and for
> > this we need to guarantee exactly-once processing.  If a batch
> > partially updates the counters and then fails, and Flink retries the
> > processing for that batch, some of the counters will be updated twice
> > (the ones which succeeded in the first attempt).
> >
> > I think in order to guarantee exactly-once processing, I will have to
> > set the buffer size to zero (i.e., send one item at a time).
> >
> > Is there any alternative configuration or suggestion on how I can
> > achieve exactly-once updates using a batch mode with partial failures?
> >
> > Thanks
> >
> > Mans
>
>
> --
> Konstantin Knauf * konstantin.kn...@tngtech.com * +49-174-3413182
> TNG Technology Consulting GmbH, Betastr. 13a, 85774 Unterföhring
> Geschäftsführer: Henrik Klagges, Christoph Stock, Dr. Robert Dahlke
> Sitz: Unterföhring * Amtsgericht München * HRB 135082
