Why do we think atomicity is expected, if the old API we are emulating here
lacks atomicity?
I don't remember emails to the mailing list saying: "I expected this batch
to be atomic, but instead I got duplicates when retrying after a failed
batch send".
Maybe atomicity isn't as strong a requirement as we believe? That is,
everyone expects some duplicates during failure events and handles them
downstream?
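To make that concrete, here is a minimal sketch (not any Kafka API; the
function and field names are illustrative) of what "handling duplicates
downstream" could look like: the consumer side deduplicates on an ID
carried in each record, so a retried batch send that redelivers some
messages is harmless.

```python
def dedupe(messages, seen=None):
    """Yield each message at most once, keyed by its 'id' field.

    Duplicates produced by a retried batch send are silently dropped.
    The 'seen' set can be passed in to persist state across calls.
    """
    if seen is None:
        seen = set()
    for msg in messages:
        if msg["id"] in seen:
            continue  # redelivered on retry; drop it
        seen.add(msg["id"])
        yield msg

# A failed-then-retried batch might redeliver message 2:
batch = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"},
         {"id": 2, "v": "b"}, {"id": 3, "v": "c"}]
print([m["id"] for m in dedupe(batch)])  # [1, 2, 3]
```

In practice the "seen" state would need to be bounded (e.g. per-partition
high-water sequence numbers rather than an unbounded set), but the shape
of the solution is the same.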



On Thu, Apr 30, 2015 at 2:02 PM, Ivan Balashov <ibalas...@gmail.com> wrote:

> 2015-04-30 8:50 GMT+03:00 Ewen Cheslack-Postava <e...@confluent.io>:
>
> > They aren't going to get this anyway (as Jay pointed out) given the
> current
> > broker implementation
> >
>
> Is it also incorrect to assume atomicity even if all messages in the batch
> go to the same partition?
>
