Yes, I have set it to TRACE as it will help me debug things.
Have you found any issue in it?
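
For reference, this is roughly what I changed in config/log4j.properties to get
the request log at TRACE (a sketch based on the stock 0.8 file; the logger and
appender names there may differ slightly):

log4j.logger.kafka.request.logger=TRACE, requestAppender
log4j.additivity.kafka.request.logger=false
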
On Feb 13, 2014 9:12 PM, "Jun Rao" <jun...@gmail.com> wrote:

> The request log is logged at TRACE. Take a look at the log4j properties
> file in config/.
>
> Thanks,
>
> Jun
>
>
> On Wed, Feb 12, 2014 at 9:45 PM, Arjun <ar...@socialtwist.com> wrote:
>
> > I am sorry, but I could not locate the offset in the request log. I have
> > turned on DEBUG for the logs but still couldn't find it. Do you know any
> > pattern I can search for?
> >
> > Thanks
> > Arjun Narasimha Kota
> >
> >
> > On Wednesday 12 February 2014 09:26 PM, Jun Rao wrote:
> >
> >> Interesting. So you have 4 messages in the broker. The checkpointed
> >> offset for the consumer is at the 3rd message. Did you change the
> >> default setting of auto.commit.enable? Also, if you look at the
> >> request log, what's the offset in the fetch request from this consumer?
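> >>
> >> (For reference, a sketch of the auto-commit settings and what I believe
> >> are their 0.8 defaults; if they were not overridden, offsets are
> >> committed automatically about once a minute:)
> >>
> >> auto.commit.enable=true
> >> auto.commit.interval.ms=60000
> >>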
> >> Thanks,
> >> Jun
> >>
> >>
> >> On Tue, Feb 11, 2014 at 10:07 PM, Arjun <ar...@socialtwist.com> wrote:
> >>
> >>> The topic name is correct. The output of ConsumerOffsetChecker is:
> >>> arjunn@arjunn-lt:~/Downloads/Kafka0.8/new/kafka_2.8.0-0.8.0$
> >>> bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group group1
> >>> --zkconnect 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183 --topic
> >>> taf.referral.emails.service
> >>> Group   Topic                        Pid  Offset  logSize  Lag  Owner
> >>> group1  taf.referral.emails.service  0    2       4        2    group1_arjunn-lt-1392133080519-e24b249b-0
> >>> group1  taf.referral.emails.service  1    2       4        2    group1_arjunn-lt-1392133080519-e24b249b-0
> >>>
> >>> thanks
> >>> Arjun Narasimha Kota
> >>>
> >>>
> >>>
> >>>
> >>> On Wednesday 12 February 2014 10:21 AM, Jun Rao wrote:
> >>>
> >>>> Could you double check that you used the correct topic name? If so,
> >>>> could you run ConsumerOffsetChecker as described in
> >>>> https://cwiki.apache.org/confluence/display/KAFKA/FAQ and see if there
> >>>> is any lag?
> >>>>
> >>>> Thanks,
> >>>>
> >>>> Jun
> >>>>
> >>>>
> >>>> On Tue, Feb 11, 2014 at 8:45 AM, Arjun Kota <ar...@socialtwist.com>
> >>>> wrote:
> >>>>
> >>>>> fetch.wait.max.ms=10000
> >>>>> fetch.min.bytes=128
> >>>>>
> >>>>> My message size is much more than that.
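> >>>>>
> >>>>> As I understand it, with fetch.min.bytes=128 the broker should answer a
> >>>>> fetch as soon as at least 128 bytes (here, a single message) are
> >>>>> available, and only wait the full fetch.wait.max.ms otherwise. For
> >>>>> completeness, the rest of the consumer config is roughly the following
> >>>>> (a sketch; everything not listed is left at the defaults):
> >>>>>
> >>>>> group.id=group1
> >>>>> zookeeper.connect=127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183
> >>>>>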
> >>>>> On Feb 11, 2014 9:21 PM, "Jun Rao" <jun...@gmail.com> wrote:
> >>>>>
> >>>>>> What's the fetch.wait.max.ms and fetch.min.bytes you used?
> >>>>>
> >>>>>> Thanks,
> >>>>>>
> >>>>>> Jun
> >>>>>>
> >>>>>>
> >>>>>> On Tue, Feb 11, 2014 at 12:54 AM, Arjun <ar...@socialtwist.com>
> >>>>>> wrote:
> >>>>>>
> >>>>>>> With the same group id from the console consumer it's working fine.
> >>>>>>>
> >>>>>>> On Tuesday 11 February 2014 01:59 PM, Guozhang Wang wrote:
> >>>>>>>
> >>>>>>>> Arjun,
> >>>>>>>>
> >>>>>>>> Are you using the same group name for the console consumer and the
> >>>>>>>> java consumer?
> >>>>>>>>
> >>>>>>>> Guozhang
> >>>>>>>>
> >>>>>>>> On Mon, Feb 10, 2014 at 11:38 PM, Arjun <ar...@socialtwist.com> wrote:
> >>>>>>>>
> >>>>>>>>> Hi Jun,
> >>>>>>>>>
> >>>>>>>>> No, it's not that problem. I am not able to figure out what the
> >>>>>>>>> problem is; can you please help?
> >>>>>>>>>
> >>>>>>>>> thanks
> >>>>>>>>> Arjun Narasimha Kota
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> On Monday 10 February 2014 09:10 PM, Jun Rao wrote:
> >>>>>>>>>
> >>>>>>>>>> Does
> >>>>>>>>>> https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whydoesmyconsumernevergetanydata?
> >>>>>>>>>> apply?
> >>>>>>>>>>
> >>>>>>>>>> Thanks,
> >>>>>>>>>>
> >>>>>>>>>> Jun
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> On Sun, Feb 9, 2014 at 10:27 PM, Arjun <ar...@socialtwist.com> wrote:
> >>>>>>>>>>
> >>>>>>>>>>> Hi,
> >>>>>>>>>>>
> >>>>>>>>>>> I started using kafka some time back. I was experimenting with 0.8.
> >>>>>>>>>>> My problem is that kafka is unable to consume the messages. My
> >>>>>>>>>>> configuration is a kafka broker on the local host and zookeeper on
> >>>>>>>>>>> the local host. I have only one broker and one consumer at present.
> >>>>>>>>>>>
> >>>>>>>>>>> What have I done:
> >>>>>>>>>>>     1) I used the java examples in the kafka src and pushed some 600
> >>>>>>>>>>>        messages to the broker
> >>>>>>>>>>>     2) I used the console consumer to check whether the messages are
> >>>>>>>>>>>        there in the broker or not. The console consumer printed all
> >>>>>>>>>>>        600 messages
> >>>>>>>>>>>     3) Now I used the java consumer code and tried to get those
> >>>>>>>>>>>        messages. It is not printing any messages; it just got stuck.
> >>>>>>>>>>>        (A rough sketch of the consumer code is at the end of this
> >>>>>>>>>>>        mail.)
> >>>>>>>>>>>
> >>>>>>>>>>> When was it working earlier:
> >>>>>>>>>>>     - When I tried with three brokers and three consumers on the
> >>>>>>>>>>>       same machine, with the same configuration, it worked fine.
> >>>>>>>>>>>     - I changed the properties accordingly when I tried to make it
> >>>>>>>>>>>       work with one broker and one consumer.
> >>>>>>>>>>>
> >>>>>>>>>>> What does the log say:
> >>>>>>>>>>>     - attaching the logs as well
> >>>>>>>>>>>
> >>>>>>>>>>> If someone points out where I am going wrong, it would be helpful.
> >>>>>>>>>>>
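> >>>>>>>>>>> For reference, the consumer code is essentially the standard
> >>>>>>>>>>> high-level consumer example; a rough sketch of it is below (the
> >>>>>>>>>>> class name is made up and it is not the exact code). With my setup
> >>>>>>>>>>> it just blocks in it.hasNext() and never prints anything.
> >>>>>>>>>>>
> >>>>>>>>>>> import java.util.HashMap;
> >>>>>>>>>>> import java.util.List;
> >>>>>>>>>>> import java.util.Map;
> >>>>>>>>>>> import java.util.Properties;
> >>>>>>>>>>>
> >>>>>>>>>>> import kafka.consumer.Consumer;
> >>>>>>>>>>> import kafka.consumer.ConsumerConfig;
> >>>>>>>>>>> import kafka.consumer.ConsumerIterator;
> >>>>>>>>>>> import kafka.consumer.KafkaStream;
> >>>>>>>>>>> import kafka.javaapi.consumer.ConsumerConnector;
> >>>>>>>>>>>
> >>>>>>>>>>> public class SimpleHighLevelConsumer {
> >>>>>>>>>>>     public static void main(String[] args) {
> >>>>>>>>>>>         Properties props = new Properties();
> >>>>>>>>>>>         props.put("zookeeper.connect",
> >>>>>>>>>>>                   "127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183");
> >>>>>>>>>>>         props.put("group.id", "group1");
> >>>>>>>>>>>         props.put("fetch.wait.max.ms", "10000");
> >>>>>>>>>>>         props.put("fetch.min.bytes", "128");
> >>>>>>>>>>>
> >>>>>>>>>>>         ConsumerConnector connector =
> >>>>>>>>>>>             Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
> >>>>>>>>>>>
> >>>>>>>>>>>         // one stream for the topic
> >>>>>>>>>>>         Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
> >>>>>>>>>>>         topicCountMap.put("taf.referral.emails.service", 1);
> >>>>>>>>>>>         Map<String, List<KafkaStream<byte[], byte[]>>> streams =
> >>>>>>>>>>>             connector.createMessageStreams(topicCountMap);
> >>>>>>>>>>>         KafkaStream<byte[], byte[]> stream =
> >>>>>>>>>>>             streams.get("taf.referral.emails.service").get(0);
> >>>>>>>>>>>
> >>>>>>>>>>>         // hasNext() blocks until a message arrives
> >>>>>>>>>>>         // (consumer.timeout.ms is left at its default of -1)
> >>>>>>>>>>>         ConsumerIterator<byte[], byte[]> it = stream.iterator();
> >>>>>>>>>>>         while (it.hasNext()) {
> >>>>>>>>>>>             System.out.println(new String(it.next().message()));
> >>>>>>>>>>>         }
> >>>>>>>>>>>         connector.shutdown();
> >>>>>>>>>>>     }
> >>>>>>>>>>> }
> >>>>>>>>>>>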
> >>>>>>>>>>> Thanks
> >>>>>>>>>>> Arjun Narasimha Kota
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >
>
