Hi Pushkar,

I tried configuring "message.send.max.retries" to 10 (the default value
is 3).

But I am still facing data loss.
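
For reference, the settings discussed in this thread can be sketched as a
producer properties fragment (config names are from the 0.8 producer
documentation linked below; the broker addresses are placeholders, and
producer.type=sync is an extra assumption — with the async producer,
messages still buffered in the client would also be lost on a crash):

```
# Sketch of an 0.8 producer config for the settings in this thread
metadata.broker.list=broker1:9092,broker2:9092

# Wait for acks from all in-sync replicas before considering a send successful
request.required.acks=-1

# Retry a failed send up to 10 times (default is 3)
message.send.max.retries=10

# Assumption: synchronous send, so failures surface to the caller
producer.type=sync
```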


On Wed, Dec 18, 2013 at 12:44 PM, pushkar priyadarshi <
priyadarshi.push...@gmail.com> wrote:

> You can try setting a higher value for "message.send.max.retries" in
> producer config.
>
> Regards,
> Pushkar
>
>
> On Wed, Dec 18, 2013 at 5:34 PM, Hanish Bansal <
> hanish.bansal.agar...@gmail.com> wrote:
>
> > Hi All,
> >
> > We have a Kafka cluster of 2 nodes. (using the 0.8.0 final release)
> > Replication Factor: 2
> > Number of partitions: 2
> >
> >
> > I have configured request.required.acks in producer configuration to -1.
> >
> > As mentioned in the documentation at
> > http://kafka.apache.org/documentation.html#producerconfigs, setting this
> > value to -1 provides a guarantee that no messages will be lost.
> >
> > I am seeing the following behaviour:
> >
> > If Kafka is running as a foreground process and I shut down the Kafka
> > leader node using "ctrl+C", then no data is lost.
> >
> > But if I abnormally terminate Kafka using "kill -9 <pid>", then I am
> > still facing data loss, even after configuring request.required.acks
> > to -1.
> >
> > Any suggestions?
> > --
> > *Thanks & Regards*
> > *Hanish Bansal*
> >
>



-- 
*Thanks & Regards*
*Hanish Bansal*
