Least restrictive settings in Kafka to achieve at least once delivery

2019-10-08 Thread Isuru Boyagane
Hi All, We are implementing a use case that needs strict at-least-once delivery. Even in the case of node failures, no messages must be lost. I am trying to find out the least restrictive configurations that can give us at-least-once delivery. What I found was: - By setting the …
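The preview above cuts off before listing the settings, so here is an editor's sketch of the configuration baseline most commonly cited for at-least-once delivery (not the thread's own conclusion; the keys below span topic/broker, producer, and consumer configuration, shown together for brevity):

```properties
# Topic / broker level: keep a quorum of replicas in sync, and never
# elect an out-of-sync replica as leader (which would silently drop records).
default.replication.factor=3
min.insync.replicas=2
unclean.leader.election.enable=false

# Producer: wait for acknowledgement from all in-sync replicas and retry
# transient failures. Retries can create duplicates, which at-least-once permits.
acks=all
retries=2147483647

# Consumer: turn off auto-commit and commit offsets only after processing,
# so a crash replays records instead of skipping them.
enable.auto.commit=false
```

Note that `acks=all` only guarantees durability when paired with `min.insync.replicas` of at least 2; with a single in-sync replica, `acks=all` behaves like `acks=1`.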

Re: Achieving at least once Delivery on top of Kafka

2019-09-30 Thread Isuru Boyagane
Hi, This is not the complete set of requirements we need. Sorry for any inconvenience caused. Thank you. Regards. On Sun, 29 Sep 2019 at 18:25, Isuru Boyagane wrote: > Hi, > > We are implementing a use case that needs strict at-least-once delivery. > Even in the case of failures …

Achieving at least once Delivery on top of Kafka

2019-09-29 Thread Isuru Boyagane
Hi, We are implementing a use case that needs strict at-least-once delivery. Even in the case of node failures, no messages must be lost. We are trying to find out the least restrictive configurations that can give us at-least-once delivery. Following is what we found: - If we use …

Re: Kafka Streams: Possible to achieve at-least-once delivery with Streams?

2016-02-19 Thread Avi Flax
On Thu, Feb 18, 2016 at 8:03 PM, Jason Gustafson wrote: > The consumer is single-threaded, so we only trigger commits in the call to > poll(). As long as you consume all the records returned from each poll > call, the committed offset will never get ahead of the consumed offset, and > you'll have

Re: Kafka Streams: Possible to achieve at-least-once delivery with Streams?

2016-02-18 Thread Jay Kreps
…wrote: > > The default semantics of the new consumer with auto commit are > > at-least-once delivery. Basically during the poll() call the commit will > > be triggered and will commit the offset for the messages consumed during > > the previous poll call. This is …

Re: Kafka Streams: Possible to achieve at-least-once delivery with Streams?

2016-02-18 Thread Jason Gustafson
…the new consumer with auto commit are > > at-least-once delivery. Basically during the poll() call the commit will > > be triggered and will commit the offset for the messages consumed during > > the previous poll call. This is an advantage over the older Scala consumer …

Re: Kafka Streams: Possible to achieve at-least-once delivery with Streams?

2016-02-18 Thread Avi Flax
On Thu, Feb 18, 2016 at 4:26 PM, Jay Kreps wrote: > The default semantics of the new consumer with auto commit are > at-least-once delivery. Basically during the poll() call the commit will be > triggered and will commit the offset for the messages consumed during the > previous poll …

Re: Kafka Streams: Possible to achieve at-least-once delivery with Streams?

2016-02-18 Thread Jay Kreps
The default semantics of the new consumer with auto commit are at-least-once delivery. Basically, during the poll() call the commit will be triggered and will commit the offset for the messages consumed during the previous poll call. This is an advantage over the older Scala consumer, where the …
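Jay's description of commit-on-poll can be made concrete with a small in-memory model (an editor's simulation of the behaviour he describes, not the real consumer API): each poll() commits only the offsets consumed by the *previous* poll, so a crash can replay records but never skip them.

```python
# Toy model of auto-commit firing inside poll(): the commit covers the
# previous poll's records, so the committed offset never runs ahead of
# records already handed to the application. (Simulation only.)

class AutoCommitConsumer:
    def __init__(self, log):
        self.log = log            # the partition's records
        self.committed = 0        # last committed offset
        self.position = 0         # next offset to fetch

    def poll(self, max_records=2):
        # Auto-commit fires here: it commits everything consumed by the
        # previous poll call, never records not yet returned.
        self.committed = self.position
        batch = self.log[self.position:self.position + max_records]
        self.position += len(batch)
        return batch

    def restart(self):
        # After a crash, consumption resumes from the committed offset.
        self.position = self.committed


consumer = AutoCommitConsumer(["a", "b", "c", "d"])
first = consumer.poll()       # ["a", "b"]; nothing consumed before, commits 0
second = consumer.poll()      # ["c", "d"]; commits offset 2 (the first batch)
consumer.restart()            # crash before a third poll commits offset 4
replayed = consumer.poll()    # ["c", "d"] delivered again -- duplicated, not lost
```

Because the committed offset trails the consumed one, the failure mode is duplication rather than loss, which is exactly at-least-once.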

Kafka Streams: Possible to achieve at-least-once delivery with Streams?

2016-02-18 Thread Avi Flax
…concepts. From reading the docs on the new consumer API, I have the impression that letting the consumer auto-commit is roughly akin to at-most-once delivery, because a commit could occur past a record that wasn’t actually processed. So in order to achieve at-least-once delivery, one needs to employ …
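The manual-commit pattern Avi is reaching for here, commit only after a record is fully processed, can be sketched with a toy model (hypothetical helper names; not the Kafka client API):

```python
# Toy model of "process first, commit second". A crash between processing
# and committing replays the record on restart: duplicates are possible,
# loss is not (at-least-once).

def consume_at_least_once(log, committed, process, crash_before_commit=None):
    """Process records from `committed` onward, committing an offset only
    after its record is fully processed. Returns the new committed offset."""
    for offset in range(committed, len(log)):
        process(log[offset])                 # handle the record first
        if offset == crash_before_commit:
            return committed                 # crash: processed but uncommitted
        committed = offset + 1               # then advance the committed offset
    return committed


log = ["a", "b", "c"]
seen = []
# Crash after processing offset 1 but before committing it.
committed = consume_at_least_once(log, 0, seen.append, crash_before_commit=1)
# Restart resumes from the committed offset, so "b" is processed twice.
committed = consume_at_least_once(log, committed, seen.append)
```

After both runs, `seen` is `["a", "b", "b", "c"]`: the crashed record is duplicated but nothing is dropped. Committing *before* processing would invert this into at-most-once.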

Re: at-least-once delivery

2016-02-02 Thread Gwen Shapira
…> > Also, you can avoid the message reordering issue in that description by > > setting max.in.flight.requests.per.connection to 1. > > > > This slide deck has good guidelines on the types of things you are talking > > about: > > http://www.slideshare.net/JiangjieQin/no-data-loss-pipeline-with-apache-kafka-4975…

Re: at-least-once delivery

2016-02-02 Thread Franco Giacosa
…> Also, you can avoid the message reordering issue in that description by > setting max.in.flight.requests.per.connection to 1. > > This slide deck has good guidelines on the types of things you are talking > about: > http://www.slideshare.net/JiangjieQin/no-data-loss-pipeline-with-apache-kafka-4975…

Re: at-least-once delivery

2016-01-30 Thread James Cheng
…53844 -James > 2016-01-30 13:18 GMT+01:00 Franco Giacosa: >> Hi, >> >> At-least-once delivery comes in part from network failures and >> the retries (which may generate duplicates), right? >> >> In the event of a duplicate (there was …

Re: at-least-once delivery

2016-01-30 Thread Franco Giacosa
…[retries may] potentially change the ordering of records because if two records are sent to a single partition, and the first fails and is retried but the second succeeds, then the second record may appear first." 2016-01-30 13:18 GMT+01:00 Franco Giacosa: > Hi, > > At-least-once delivery comes in part from network failures and the retries (which may generate …
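The documentation passage quoted above points at the usual fix: allow only one in-flight request per broker connection, so a retried batch cannot be overtaken by a later one. As an editor's sketch of the producer settings involved (the configuration key is `max.in.flight.requests.per.connection`):

```properties
# Producer settings that keep retries from reordering records.
retries=2147483647
max.in.flight.requests.per.connection=1

# On clients/brokers 0.11 and later (after this 2016 thread), the
# idempotent producer deduplicates retries and preserves ordering
# while still allowing up to 5 in-flight requests:
# enable.idempotence=true
```

The trade-off of `max.in.flight.requests.per.connection=1` is lower throughput, since the producer waits for each request to complete before sending the next.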

at-least-once delivery

2016-01-30 Thread Franco Giacosa
Hi, At-least-once delivery comes in part from network failures and retries (which may generate duplicates), right? In the event of a duplicate (there was an error but the first message landed OK on partition P1), will the producer recalculate the partition on the retry? Is this …
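On the last question above, whether a retry recomputes the partition: it does not. The partitioner runs once, when send() appends the record to a batch, and a retry resends that same batch to the same partition. A toy illustration (simulation, not the real producer internals):

```python
# The partition is fixed when the record is batched; a retry resends the
# existing batch, so a duplicate cannot land on a different partition.

def partition_for(key, num_partitions):
    # Stand-in for the default partitioner's hash-based choice.
    return hash(key) % num_partitions

class ToyProducer:
    def __init__(self, num_partitions):
        self.num_partitions = num_partitions

    def send(self, key, value):
        # The partition is decided here, before any network attempt is made.
        partition = partition_for(key, self.num_partitions)
        return {"partition": partition, "key": key, "value": value, "attempts": 1}

    def retry(self, batch):
        # A retry bumps the attempt counter and resends the same batch;
        # partition_for() is never called again.
        batch["attempts"] += 1
        return batch["partition"]


producer = ToyProducer(num_partitions=4)
batch = producer.send("order-42", "payload")
first_attempt = batch["partition"]
second_attempt = producer.retry(batch)   # same partition on the retry
```

Since a duplicate from a retry always lands on the same partition as the original, downstream deduplication by key within a partition remains possible.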