Additional information we found in Kafka's application log around the time in
question:

2014-12-04 09:59:36,726 [kafka-scheduler-2] INFO
kafka.cluster.Partition  - Partition [a.s.3,26] on broker 5: Shrinking
ISR for partition [a.s.3,26] from 5,4 to 5
2014-12-04 09:59:36,728 [kafka-scheduler-2] ERROR kafka.utils.ZkUtils$
 - Conditional update of path
/brokers/topics/a.s.3/partitions/26/state with data
{"controller_epoch":2,"leader":5,"version":1,"leader_epoch":4,"isr":[5]}
and expected version 675 failed due to
org.apache.zookeeper.KeeperException$BadVersionException:
KeeperErrorCode = BadVersion for
/brokers/topics/a.s.3/partitions/26/state
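
For reference, this is roughly how we fetch each partition's latest offset
(the getOffsetsBefore call mentioned in the quoted mail below). It is only a
sketch against the 0.8.1.1 SimpleConsumer API; the broker host/port, client
id, and the hard-coded topic/partition are placeholders, and in the real
application we iterate over all partitions and ask each partition's leader:

import java.util.HashMap;
import java.util.Map;

import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class LatestOffsetCheck {
    public static void main(String[] args) {
        // Placeholder broker address and client id.
        SimpleConsumer consumer =
            new SimpleConsumer("broker-host", 9092, 100000, 64 * 1024, "offset-check");
        TopicAndPartition tp = new TopicAndPartition("a.s.3", 26);

        // LatestTime() (-1) asks the leader for the log-end offset;
        // maxNumOffsets = 1 returns just the newest one.
        Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo =
            new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
        requestInfo.put(tp,
            new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.LatestTime(), 1));

        kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
            requestInfo, kafka.api.OffsetRequest.CurrentVersion(), "offset-check");
        OffsetResponse response = consumer.getOffsetsBefore(request);

        long[] offsets = response.offsets("a.s.3", 26);
        System.out.println("latest offset = " + offsets[0]);
        consumer.close();
    }
}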


On Mon, Dec 8, 2014 at 6:59 PM, Helin Xiang <xkee...@gmail.com> wrote:

> Hi,
>
> We recently upgraded our Kafka cluster from 0.7.2 to 0.8.1.1.
>
> In one of our applications, we want to get every partition's latest offset,
> so we use the getOffsetsBefore Java API (with the latest-time setting).
>
> We believe that at some point, the latest offset we got for one of the
> partitions was much smaller than its real latest offset (we saw in the
> application's log that this partition's offset was much smaller than the
> other partitions'). Since the data file of that partition has already been
> deleted, we cannot prove this directly, but we found clues in Kafka's
> application log which lead us to conclude that the partition's latest
> offset at that moment was in fact much larger.
>
> Some additional useful information: the partition has one additional
> replica (a follower), and at that time it was not in sync with the leader
> (it had fallen far behind).
>
> Has anyone seen the same issue? What conditions could lead to this
> situation?
>
> Thanks.
>
> --
>
> Best Regards
> 向河林



--

Best Regards
向河林
