Could you try starting from scratch again? The recent fix may not be able
to recover a cluster that is already in an inconsistent state.

Thanks,

Jun


On Thu, Dec 12, 2013 at 8:45 PM, David Birdsong <david.birds...@gmail.com> wrote:

> I was running a 2-node kafka cluster off github trunk at:
> eedbea6526986783257ad0e025c451a8ee3d9095
>
> ...for a few weeks with no issues. I recently downloaded the 0.8 stable
> version, configured/started two new brokers with 0.8.
>
> I successfully reassigned all but 1 partition from the older pair to the
> newer pair, but have 1 partition seemingly stuck on the old leader. The
> replicas, ISR, and leader are all the same--no extra nodes are replicating
> this last partition--this was true before any changes.
>
> I came across this thread:
>
> http://mail-archives.apache.org/mod_mbox/kafka-users/201312.mbox/%3ccacnty1ddbjse1bxrj1ertrxi+zbz3wawyvjdevvjpootnyo...@mail.gmail.com%3E
>
> ...and unlike the poster, I'm free to play fast and loose, so I built off of
> trunk at: dd58d753ce3ffb41776a6fa6322cb822f2222500
>
> I first upgraded one of the desired target ISRs and, after a few minutes,
> upgraded the existing leader and bounced it, temporarily breaking that
> partition--no luck.
>
> I'm at a loss as to how to recover this partition's data, or, failing
> that, how to even regain use of the partition. The data isn't critical;
> this was just an exercise in gaining operational familiarity with
> kafka.
>
> I can't find any docs on how to get out of this situation.
>
