Thanks Guozhang. Yes, we had it set to 'largest', and changing it to
'smallest' resolved the issue. So it was due to the JIRA
https://issues.apache.org/jira/browse/KAFKA-1006
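
For anyone who hits the same thing: auto.offset.reset is a consumer-side
setting, so the change went into the consumer config we pass to MirrorMaker
via --consumer.config, roughly:

  # consumer.properties used by the mirror maker consumer
  auto.offset.reset=smallest

With auto.offset.reset=largest, the consumer can miss the first messages
produced to a newly created topic (KAFKA-1006), which matches what we saw.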

Thanks,
Raja.


On Tue, Sep 10, 2013 at 1:18 PM, Guozhang Wang <wangg...@gmail.com> wrote:

> Oh, got it. Did you set auto.offset.reset to smallest or largest? If it is
> largest, it could be due to this bug:
>
> https://issues.apache.org/jira/browse/KAFKA-1006
>
> Guozhang
>
>
>
> On Tue, Sep 10, 2013 at 10:09 AM, Rajasekar Elango
> <rela...@salesforce.com>wrote:
>
> > Hi Guozhang ,
> >
> > 1) When I say "I send messages to a new topic" -> yes, I am sending new
> > messages to the source cluster via the console producer.
> > 2) The log message "Handling 0 events" doesn't include the topic name, but
> > I believe it applies to both old and new topics, because no other app is
> > sending messages to the source cluster other than me testing with the
> > console producer.
> >
> > Thanks,
> > Raja.
> >
> >
> > On Tue, Sep 10, 2013 at 1:03 PM, Guozhang Wang <wangg...@gmail.com>
> wrote:
> >
> > > Hi Raja,
> > >
> > > When you say "I send messages to a new topic", I guess you mean that you
> > > send messages to the source cluster, right? It may be that the mirror
> > > maker's producer has not yet caught up with the mirror maker's consumer.
> > >
> > > When you say "I always see Handling 0 events", do you mean that you see
> > > this for messages of both the new topic and the old topics, or does it
> > > only show this log for the new topic?
> > >
> > > Guozhang
> > >
> > >
> > > On Tue, Sep 10, 2013 at 7:47 AM, Rajasekar Elango <
> > rela...@salesforce.com
> > > >wrote:
> > >
> > > > Thanks Guozhang,
> > > >
> > > > 1), 2), and 3) are all true. We are using the default values of 200 for
> > > > batch.num.messages and 5000 ms for queue.buffering.max.ms. I believe the
> > > > producer should send a batch when either batch.num.messages or
> > > > queue.buffering.max.ms is reached.
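> > > >
> > > > For reference, the effective producer settings (passed to the mirror
> > > > maker via --producer.config, or left at their defaults) are roughly:
> > > >
> > > >   producer.type=async          # the ProducerSendThread path
> > > >   batch.num.messages=200       # default
> > > >   queue.buffering.max.ms=5000  # default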
> > > >
> > > > I see the log message "5000ms elapsed , Queue time reached. Sending." at
> > > > regular intervals. But when I send messages to a new topic, I always see
> > > > "Handling 0 events" and nothing is produced to the target cluster. When
> > > > I resend them a second time, I see "Handling x events" and it starts
> > > > producing. Any clues on how to debug this further?
> > > >
> > > > Thanks,
> > > >
> > > > Raja.
> > > >
> > > >
> > > > On Mon, Sep 9, 2013 at 6:02 PM, Guozhang Wang <wangg...@gmail.com>
> > > wrote:
> > > >
> > > > > Hi Raja,
> > > > >
> > > > > So just to summarize the scenario:
> > > > >
> > > > > 1) The consumer of the mirror maker is successfully consuming all
> > > > > partitions of the newly created topic.
> > > > > 2) The producer of the mirror maker is not producing the new messages
> > > > > immediately when the topic is created (observed from
> > > > > ProducerSendThread's log).
> > > > > 3) The producer of the mirror maker will start producing the new
> > > > > messages when more messages are sent to the source cluster.
> > > > >
> > > > > If 1) is true, then KAFKA-1030 is excluded, since the consumer
> > > > > successfully recognizes all the partitions and starts consuming.
> > > > >
> > > > > If both 2) and 3) are true, I would wonder whether the batch size of
> > > > > the mirror maker producer is large, so that it will not send until
> > > > > enough messages have accumulated in the producer queue.
> > > > >
> > > > > Guozhang
> > > > >
> > > > >
> > > > > On Mon, Sep 9, 2013 at 2:36 PM, Rajasekar Elango <
> > > rela...@salesforce.com
> > > > > >wrote:
> > > > >
> > > > > > Yes, the data exists in the source cluster, but not in the target
> > > > > > cluster. I can't reproduce this problem in the dev environment; it
> > > > > > happens only in the prod environment. I turned on debug logging but
> > > > > > was not able to identify the problem. Basically, whenever I send
> > > > > > data to a new topic, I don't see any log messages from
> > > > > > ProducerSendThread in the mirror maker log, so the messages are not
> > > > > > produced to the target cluster. If I send more messages to the same
> > > > > > topic, the producer send thread kicks off and replicates them, but
> > > > > > whatever messages were sent the first time are lost. How can I
> > > > > > troubleshoot this problem further? Even if this could be due to the
> > > > > > known issue https://issues.apache.org/jira/browse/KAFKA-1030, how
> > > > > > can I confirm that? Is there any config tweaking I can make to work
> > > > > > around this? ConsumerOffsetChecker helps to track consumers; is
> > > > > > there any other tool we can use to track producers in the mirror
> > > > > > maker?
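> > > > > >
> > > > > > (For reference, the consumer-side check is roughly:
> > > > > >
> > > > > >   bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
> > > > > >     --zkconnect <source-zookeeper:2181> --group <mirror-maker-group>
> > > > > >
> > > > > > with placeholders for our ZooKeeper connect string and consumer
> > > > > > group; I haven't found an equivalent tool for the producer side.)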
> > > > > >
> > > > > > Thanks in advance for help.
> > > > > >
> > > > > > Thanks,
> > > > > > Raja.
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Fri, Sep 6, 2013 at 3:50 AM, Swapnil Ghike <
> sgh...@linkedin.com
> > >
> > > > > wrote:
> > > > > >
> > > > > > > Hi Rajasekar,
> > > > > > >
> > > > > > > You said that ConsumerOffsetChecker shows that new topics are
> > > > > > successfully
> > > > > > > consumed and the lag is 0. If that's the case, can you verify
> > that
> > > > > there
> > > > > > > is data on the source cluster for these new topics? If there is
> > no
> > > > data
> > > > > > at
> > > > > > > the source, MirrorMaker will only assign consumer streams to
> the
> > > new
> > > > > > > topic, but the lag will be 0.
> > > > > > >
> > > > > > > This could otherwise be related to
> > > > > > > https://issues.apache.org/jira/browse/KAFKA-1030.
> > > > > > >
> > > > > > > Swapnil
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On 9/5/13 8:38 PM, "Guozhang Wang" <wangg...@gmail.com> wrote:
> > > > > > >
> > > > > > > >Could you let me know the process of reproducing this issue?
> > > > > > > >
> > > > > > > >Guozhang
> > > > > > > >
> > > > > > > >
> > > > > > > >On Thu, Sep 5, 2013 at 5:04 PM, Rajasekar Elango
> > > > > > > ><rela...@salesforce.com>wrote:
> > > > > > > >
> > > > > > > >> Yes guozhang
> > > > > > > >>
> > > > > > > >> Sent from my iPhone
> > > > > > > >>
> > > > > > > >> On Sep 5, 2013, at 7:53 PM, Guozhang Wang <
> wangg...@gmail.com
> > >
> > > > > wrote:
> > > > > > > >>
> > > > > > > >> > Hi Rajasekar,
> > > > > > > >> >
> > > > > > > >> > Is auto.create.topics.enable set to true in your target
> > > > > > > >> > cluster?
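> > > > > > > >> > (For reference, that is a broker-side setting; it would look
> > > > > > > >> > roughly like auto.create.topics.enable=true in each target
> > > > > > > >> > broker's server.properties.)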
> > > > > > > >> >
> > > > > > > >> > Guozhang
> > > > > > > >> >
> > > > > > > >> >
> > > > > > > >> > On Thu, Sep 5, 2013 at 4:39 PM, Rajasekar Elango
> > > > > > > >><rela...@salesforce.com
> > > > > > > >> >wrote:
> > > > > > > >> >
> > > > > > > >> >> We are having an issue where the mirror maker no longer
> > > > > > > >> >> replicates newly created topics. It continues to replicate
> > > > > > > >> >> data for existing topics, but new topics don't get created
> > > > > > > >> >> on the target cluster. ConsumerOffsetChecker shows that the
> > > > > > > >> >> new topics are successfully consumed and the lag is 0, but
> > > > > > > >> >> those topics don't get created in the target cluster. I also
> > > > > > > >> >> don't see MBeans for the new topics under
> > > > > > > >> >> kafka.producer.ProducerTopicMetrics.<topic name> metrics. In
> > > > > > > >> >> the logs I see warnings for NotLeaderForPartition, but no
> > > > > > > >> >> major errors. What else can we look at to troubleshoot this
> > > > > > > >> >> further?
> > > > > > > >> >>
> > > > > > > >> >> --
> > > > > > > >> >> Thanks,
> > > > > > > >> >> Raja.
> > > > > > > >> >
> > > > > > > >> >
> > > > > > > >> >
> > > > > > > >> > --
> > > > > > > >> > -- Guozhang
> > > > > > > >>
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >--
> > > > > > > >-- Guozhang
> > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Thanks,
> > > > > > Raja.
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > -- Guozhang
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Thanks,
> > > > Raja.
> > > >
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
> >
> >
> > --
> > Thanks,
> > Raja.
> >
>
>
>
> --
> -- Guozhang
>



-- 
Thanks,
Raja.
