Sunil,
Business needs... Anyway, if it were 2, we would face the same problem. For 
example, if the partition leader was the last one to be rebooted and then got 
its disk corrupted, the erase would happen the same way.
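
For reference, the settings being discussed in this thread map to standard Kafka configuration names. A minimal sketch (assuming Kafka's stock topic/broker and producer configs; adjust names and values to your deployment):

```properties
# Topic or broker level: never let an out-of-sync (possibly wiped) replica
# take over leadership automatically
unclean.leader.election.enable=false

# Topic or broker level: require at least 2 in-sync replicas
# before a write is acknowledged
min.insync.replicas=2

# Producer level: wait for all in-sync replicas to acknowledge
acks=all
```

Note that `min.insync.replicas` only protects acknowledged writes; it does not by itself stop a restarted leader with an empty disk from being elected, which is the scenario under discussion.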

Regards,

On 2021/06/17 21:23:40, Sunil Unnithan <sunilu...@gmail.com> wrote: 
> Why isr=all? Why not use min.isr=2 in this case?
> 
> On Thu, Jun 17, 2021 at 5:11 PM Jhanssen Fávaro <jhanssenfav...@gmail.com>
> wrote:
> 
> > Basically, if we have 3 brokers and ISR == all, and the partition leader
> > broker was the last server that was restarted/rebooted and got a disk
> > corruption during its startup, all the followers will mark the topic as
> > offline.
> > So, if that last broker with the corrupted disk starts, it will take back
> > the partition leadership and then erase all the other followers/brokers
> > in the cluster.
> >
> > It should at least "ask" the other 2 brokers whether they are not zeroed.
> > Any way to avoid this data being truncated on the followers?
> >
> > Best Regards,
> > Jhanssen
> > On 2021/06/17 20:54:50, Jhanssen Fávaro <jhanssenfav...@gmail.com>
> > wrote:
> > > Hi all, we were testing Kafka disaster/recovery in our sites.
> > >
> > > Any way to avoid the scenario in this post?
> > > https://blog.softwaremill.com/help-kafka-ate-my-data-ae2e5d3e6576
> > >
> > > But unclean leader election is not an option in our case.
> > > FYI..
> > > We needed to disable the systemctl unit for our Kafka brokers to avoid a
> > service startup with a corrupted leader disk.
> > >
> > > Best Regards!
> > >
> > >
> >
> 
