Hello Aruna,

If the duplication you are referring to is duplication of the
events/records produced to and consumed from Kafka, then exactly-once
semantics and transactions are what you are looking for.
Kafka has supported exactly-once semantics since version 0.11: events
are sent by the Producer to the Broker exactly once, and are
consumed/processed by the Consumer exactly once, as soon as they are
ready to be consumed.
To guarantee that records are produced and consumed exactly once, both
the Producer and the Consumer need to be configured correctly.
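
For illustration, here is a minimal sketch of a transactional
consume-transform-produce loop in Java. The broker address, topic
names, group id, and transactional.id below are placeholders, and it
assumes Kafka clients 2.5+ (for consumer.groupMetadata()); it is one
common pattern, not the only way to set this up:

import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ExactlyOnceSketch {
    public static void main(String[] args) {
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092"); // placeholder
        producerProps.put("key.serializer", StringSerializer.class.getName());
        producerProps.put("value.serializer", StringSerializer.class.getName());
        producerProps.put("enable.idempotence", "true"); // no duplicates on retry
        producerProps.put("transactional.id", "my-txn-id"); // placeholder, unique per producer

        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092"); // placeholder
        consumerProps.put("group.id", "my-group"); // placeholder
        consumerProps.put("key.deserializer", StringDeserializer.class.getName());
        consumerProps.put("value.deserializer", StringDeserializer.class.getName());
        consumerProps.put("isolation.level", "read_committed"); // skip aborted transactions
        consumerProps.put("enable.auto.commit", "false"); // offsets committed in the transaction

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps);
             KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {

            producer.initTransactions();
            consumer.subscribe(Collections.singletonList("input-topic")); // placeholder

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                if (records.isEmpty()) {
                    continue;
                }
                producer.beginTransaction();
                try {
                    Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                    for (ConsumerRecord<String, String> record : records) {
                        // Process/transform here; the write below is part of the transaction.
                        producer.send(new ProducerRecord<>("output-topic", record.key(), record.value()));
                        offsets.put(new TopicPartition(record.topic(), record.partition()),
                                new OffsetAndMetadata(record.offset() + 1));
                    }
                    // Commit the consumed offsets atomically with the produced records,
                    // so the whole read-process-write step happens exactly once.
                    producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
                    producer.commitTransaction();
                } catch (KafkaException e) {
                    // Abort so the batch is re-read; fatal errors
                    // (e.g. ProducerFencedException) require closing this producer instead.
                    producer.abortTransaction();
                }
            }
        }
    }
}

The key points are enable.idempotence on the producer,
isolation.level=read_committed on the consumer, and committing the
consumed offsets inside the same transaction as the produced records.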

Take a look at the documentation about that; also, these posts explain
nicely how it works:
https://www.confluent.io/blog/transactions-apache-kafka/
https://www.confluent.io/blog/exactly-once-semantics-are-possible-heres-how-apache-kafka-does-it/

Hope that helps, cheers!
--
Jonathan




On Tue, Jul 16, 2019 at 5:46 AM aruna ramachandran <arunaeie...@gmail.com>
wrote:

> I want to process a single message at a time to avoid duplication.
>
>
> On Mon, Jul 15, 2019 at 9:45 PM Pere Urbón Bayes <pere.ur...@gmail.com>
> wrote:
>
> > The good question, Aruna, is why would you like to do that?
> >
> > -- Pere
> >
> > Message from aruna ramachandran <arunaeie...@gmail.com> on Mon, Jul 15,
> > 2019 at 14:17:
> >
> > > Is there a way to get the Consumer to only read one message at a time
> > > and commit the offset after processing a single message?
> > >
> >
> >
> > --
> > Pere Urbon-Bayes
> > Software Architect
> > http://www.purbon.com
> > https://twitter.com/purbon
> > https://www.linkedin.com/in/purbon/
> >
>


-- 
Santilli Jonathan
