Hi Jun,

I'm using the default configuration (acks=1).
Changing it to all or 2 will not help, as the producer queue will be
exhausted if any kafka broker goes down for a long time.
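To make the settings in question concrete, here is a rough sketch of the kind
of librdkafka producer setup being discussed (property names are from
librdkafka's CONFIGURATION.md; the broker address, topic name and queue size
below are placeholders, and error checks are omitted for brevity):

    /* Sketch only: return values of conf_set/new calls are not checked. */
    #include <librdkafka/rdkafka.h>

    int main(void) {
        char errstr[512];

        /* Global config: bound the producer's internal queue and retries. */
        rd_kafka_conf_t *conf = rd_kafka_conf_new();
        rd_kafka_conf_set(conf, "queue.buffering.max.messages", "100000",
                          errstr, sizeof(errstr));      /* placeholder size */
        rd_kafka_conf_set(conf, "message.send.max.retries", "3",
                          errstr, sizeof(errstr));

        /* Topic config: "1" is the default (leader-only ack) I run today;
         * "-1" waits for all in-sync replicas, i.e. the acks=all case. */
        rd_kafka_topic_conf_t *tconf = rd_kafka_topic_conf_new();
        rd_kafka_topic_conf_set(tconf, "request.required.acks", "1",
                                errstr, sizeof(errstr));

        rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf,
                                      errstr, sizeof(errstr));
        if (!rk)
            return 1;
        rd_kafka_brokers_add(rk, "broker1:9092");        /* placeholder */
        rd_kafka_topic_t *rkt = rd_kafka_topic_new(rk, "test-topic", tconf);

        /* ... rd_kafka_produce() + rd_kafka_poll() loop collecting the
         * delivery reports would go here ... */

        rd_kafka_topic_destroy(rkt);
        rd_kafka_destroy(rk);
        return 0;
    }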


Thanks.

Regards,
Mazhar Shaikh.


On Wed, Aug 17, 2016 at 8:11 PM, Jun Rao <j...@confluent.io> wrote:

> Are you using acks=1 or acks=all in the producer? Only the latter
> guarantees acked messages won't be lost after leader failure.
>
> Thanks,
>
> Jun
>
> On Wed, Aug 10, 2016 at 11:41 PM, Mazhar Shaikh <
> mazhar.shaikh...@gmail.com>
> wrote:
>
> > Hi Kafka Team,
> >
> > I'm using kafka (kafka_2.11-0.9.0.1) with the librdkafka (0.8.1) API for
> > the producer.
> > During a 2-hour run, I notice that the total number of messages ack'd by
> > the librdkafka delivery report is greater than the max offset of the
> > partition on the kafka broker.
> > I'm running the kafka broker with a replication factor of 2.
> >
> > Here, messages have been lost between librdkafka and the kafka broker,
> > even though librdkafka is providing a success delivery report for all of
> > them.
> >
> > It looks like the kafka broker is dropping messages after acknowledging
> > them to librdkafka.
> >
> > Requesting your help in solving this issue.
> >
> > Thank you.
> >
> >
> > Regards
> > Mazhar Shaikh
> >
>
