This commit does include the patch for KAFKA-1112, which means the issue is
not fixed.

Drew, could you comment on KAFKA-1112 with details on how you reproduce this
issue so we can re-open it?

Guozhang


On Mon, Dec 23, 2013 at 11:03 AM, Drew Goya <d...@gradientx.com> wrote:

> I'm not sure if I had hard-killed the broker, but I do have the fix for
> that case.
>
> I currently have this commit deployed:
>
> commit 87efda7f818218e0868be7032c73c994d75931fd
> Author: Guozhang Wang <guw...@linkedin.com>
> Date:   Fri Nov 22 09:16:39 2013 -0800
>
>     kafka-1103; Consumer uses two zkclients; patched by Guozhang Wang;
>     reviewed by Joel Koshy and Jun Rao
>
>
> On Mon, Dec 23, 2013 at 9:59 AM, Jun Rao <jun...@gmail.com> wrote:
>
> > Did you hard-kill the broker?  If so, do you have the fix for KAFKA-1112?
> >
> > Thanks,
> >
> > Jun
> >
> >
> > On Fri, Dec 20, 2013 at 4:05 PM, Drew Goya <d...@gradientx.com> wrote:
> >
> > > This is the exception I ran into; I was able to fix it by deleting the
> > > /data/kafka/logs/Events2-124/ directory.  That directory contained a
> > > non-zero-size index file and a zero-size log file.  I had a bunch of
> > > these directories scattered around the cluster.
> > >
> > > [2013-12-18 02:40:37,163] FATAL Fatal error during KafkaServerStable
> > > startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> > > java.lang.IllegalArgumentException: requirement failed: Corrupt index
> > > found, index file (/data/kafka/logs/Events2-124/00000000000000000000.index)
> > > has non-zero size but the last offset is 0 and the base offset is 0
> > >     at scala.Predef$.require(Predef.scala:145)
> > >     at kafka.log.Log$$anonfun$loadSegments$5.apply(Log.scala:160)
> > >     at kafka.log.Log$$anonfun$loadSegments$5.apply(Log.scala:159)
> > >     at scala.collection.Iterator$class.foreach(Iterator.scala:631)
> > >     at scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:474)
> > >     at scala.collection.IterableLike$class.foreach(IterableLike.scala:79)
> > >     at scala.collection.JavaConversions$JCollectionWrapper.foreach(JavaConversions.scala:495)
> > >     at kafka.log.Log.loadSegments(Log.scala:159)
> > >     at kafka.log.Log.<init>(Log.scala:64)
> > >     at kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$3.apply(LogManager.scala:120)
> > >     at kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$3.apply(LogManager.scala:115)
> > >     at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> > >     at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
> > >     at kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:115)
> > >     at kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:107)
> > >     at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> > >     at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32)
> > >     at kafka.log.LogManager.loadLogs(LogManager.scala:107)
> > >     at kafka.log.LogManager.<init>(LogManager.scala:59)
> > >
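> > > In case it helps anyone else, here is a rough sketch (Python) of how
> > > one could scan a broker's data directory for the pattern I described:
> > > a non-zero-size .index file next to a zero-size .log segment.  The
> > > /data/kafka/logs path is just our layout; point it at your log.dirs.
> > >
> > > #!/usr/bin/env python
> > > # Sketch: flag partition directories where an .index file is
> > > # non-empty but its matching .log segment is zero bytes.  LOG_DIR
> > > # is an assumption about the broker's data directory layout.
> > > import os
> > >
> > > LOG_DIR = "/data/kafka/logs"  # adjust to your log.dirs setting
> > >
> > > for partition in sorted(os.listdir(LOG_DIR)):
> > >     part_dir = os.path.join(LOG_DIR, partition)
> > >     if not os.path.isdir(part_dir):
> > >         continue
> > >     for name in os.listdir(part_dir):
> > >         if not name.endswith(".index"):
> > >             continue
> > >         index_file = os.path.join(part_dir, name)
> > >         log_file = index_file[:-len(".index")] + ".log"
> > >         if (os.path.getsize(index_file) > 0
> > >                 and os.path.exists(log_file)
> > >                 and os.path.getsize(log_file) == 0):
> > >             print("suspect partition dir: %s" % part_dir)
> > >
> > > Obviously double-check the list by hand before deleting anything.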
> > >
> > > On Fri, Dec 20, 2013 at 9:06 AM, Jun Rao <jun...@gmail.com> wrote:
> > >
> > > > Drew,
> > > >
> > > > Even without KAFKA-1074, brokers shouldn't hit exceptions during
> > > > startup.  What exceptions do you see?
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > >
> > > > On Thu, Dec 19, 2013 at 9:36 PM, Drew Goya <d...@gradientx.com>
> > > > wrote:
> > > >
> > > > > We migrated from 0.8.0 to 0.8.1 last week.  We have a 15-broker
> > > > > cluster, so it took a while to roll through them one by one.  Once
> > > > > I finished, I was finally able to complete a partition
> > > > > reassignment.  I also had to do some manual cleanup, but Neha says
> > > > > it will be fixed soon:
> > > > >
> > > > > https://issues.apache.org/jira/browse/KAFKA-1074
> > > > >
> > > > > Until then, if you have done any partition reassignment, you will
> > > > > have to watch your brokers as they come up.  They may fail, and you
> > > > > will have to go delete the empty partition directories.
> > > > >
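> > > > > A rough sketch (Python, assuming the broker's data directory is
> > > > > /data/kafka/logs; adjust to your log.dirs) can flag those leftover
> > > > > empty partition directories before you restart a broker:
> > > > >
> > > > > #!/usr/bin/env python
> > > > > # Sketch: list partition directories with no data at all (no
> > > > > # files, or only zero-byte files), i.e. the empty leftovers a
> > > > > # partition reassignment can leave behind.  LOG_DIR is an
> > > > > # assumption about the data directory layout.
> > > > > import os
> > > > >
> > > > > LOG_DIR = "/data/kafka/logs"
> > > > >
> > > > > for partition in sorted(os.listdir(LOG_DIR)):
> > > > >     part_dir = os.path.join(LOG_DIR, partition)
> > > > >     if not os.path.isdir(part_dir):
> > > > >         continue
> > > > >     names = os.listdir(part_dir)
> > > > >     if all(os.path.getsize(os.path.join(part_dir, n)) == 0
> > > > >            for n in names):
> > > > >         print("empty partition dir: %s" % part_dir)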
> > > > >
> > > > > On Thu, Dec 19, 2013 at 11:07 AM, Guozhang Wang
> > > > > <wangg...@gmail.com> wrote:
> > > > >
> > > > > > 0.8.1 is running stably at LinkedIn now.
> > > > > >
> > > > > > Guozhang
> > > > > >
> > > > > >
> > > > > > On Thu, Dec 19, 2013 at 10:52 AM, Yu, Libo <libo...@citi.com>
> > > > > > wrote:
> > > > > >
> > > > > > > I also want to know how stable 0.8.1 will be, compared with 0.8
> > > > > > > or 0.8-beta1.
> > > > > > >
> > > > > > > Regards,
> > > > > > >
> > > > > > > Libo
> > > > > > >
> > > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Jason Rosenberg [mailto:j...@squareup.com]
> > > > > > > Sent: Thursday, December 19, 2013 12:54 PM
> > > > > > > To: users@kafka.apache.org
> > > > > > > Subject: Re: upgrade from beta1 to 0.81
> > > > > > >
> > > > > > > How stable is 0.8.1?  Will there be a 'release' of this soon,
> > > > > > > or are there still significant open issues?
> > > > > > >
> > > > > > > Thanks,
> > > > > > >
> > > > > > > Jason
> > > > > > >
> > > > > > >
> > > > > > > On Thu, Dec 19, 2013 at 12:17 PM, Guozhang Wang
> > > > > > > <wangg...@gmail.com> wrote:
> > > > > > >
> > > > > > > > Libo, yes, the upgrade from 0.8 to 0.8.1 can be done in place.
> > > > > > > >
> > > > > > > > Guozhang
> > > > > > > >
> > > > > > > >
> > > > > > > > On Thu, Dec 19, 2013 at 8:57 AM, Yu, Libo <libo...@citi.com>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > Hi folks,
> > > > > > > > >
> > > > > > > > > As the tools in 0.8 are not stable and we don't want to
> > > > > > > > > take the risk, we want to skip 0.8 and upgrade from beta1 to
> > > > > > > > > 0.8.1 directly.  So my question is whether we can do an
> > > > > > > > > in-place upgrade and let 0.8.1 use beta1's ZooKeeper and
> > > > > > > > > Kafka data.  Assume that we will disable log compaction.
> > > > > > > > > Thanks.
> > > > > > > > >
> > > > > > > > > Regards,
> > > > > > > > >
> > > > > > > > > Libo
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > > -- Guozhang
> > > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > -- Guozhang
> > > > > >
> > > > >
> > > >
> > >
> >
>



-- 
-- Guozhang
