Hi Gerard,

Why should the fetch size be correlated with the consumer stalling after x
messages?

One can set the fetch size on a Cassandra query, and yet there's no
"stalling"; it's more or less just another "page".



Cheers,
-- Brice

On Tue, Jan 12, 2016 at 12:10 PM, Gerard Klijs <gerard.kl...@dizzit.com>
wrote:

> Hi Suyog,
> It's working as intended. You could set the property fetch.min.bytes to a
> small value to get fewer messages in each batch. Setting it to zero will
> probably mean you get one message with each batch; at least that was the
> case when I tried, but I was producing and consuming at the same time.
>
> On Tue, Jan 12, 2016 at 3:47 AM Suyog Rao <suyog....@gmail.com> wrote:
>
> > Hi, I started with a clean install of the 0.9 Kafka broker and populated
> > a test topic with 1 million messages. I then used the console consumer to
> > read from the beginning offset. Using --new-consumer reads the messages,
> > but it stalls after every x number of messages or so, and then continues
> > again. It is very batchy in its behaviour. If I go back to the old
> > consumer, I am able to stream the messages continuously. Am I missing a
> > timeout setting or something?
> >
> > I created my own consumer in Java and call poll(0) in a loop, but I still
> > get the same behaviour. This is on Mac OS X (yosemite) with java version
> > "1.8.0_65".
> >
> > Any ideas?
> >
> > bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic
> > apache_logs --from-beginning --new-consumer
> >
> > bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic
> > apache_logs --from-beginning --zookeeper localhost:2181
> >
>
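For reference, here's a minimal sketch of the new-consumer setup Gerard is
describing. The property names (fetch.min.bytes, fetch.max.wait.ms) are the
Kafka 0.9 consumer configs; the group id and the concrete values are just
illustrative assumptions, and the KafkaConsumer usage is left as comments so
the sketch stands alone without the kafka-clients jar:

```java
import java.util.Properties;

public class ConsumerConfigSketch {

    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "console-test"); // hypothetical group id
        // A small fetch.min.bytes makes the broker respond as soon as any
        // data is available instead of waiting to accumulate a large batch,
        // which is what makes the console output look "batchy".
        props.put("fetch.min.bytes", "1");
        // Caps how long the broker holds a fetch request open while
        // fetch.min.bytes has not yet been satisfied.
        props.put("fetch.max.wait.ms", "100");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        // With kafka-clients on the classpath these properties would be
        // used roughly like this (sketch only, not compiled here):
        //   KafkaConsumer<String, String> consumer =
        //       new KafkaConsumer<>(consumerProps());
        //   consumer.subscribe(Collections.singletonList("apache_logs"));
        //   while (true) {
        //       for (ConsumerRecord<String, String> r : consumer.poll(100)) {
        //           System.out.println(r.value());
        //       }
        //   }
        System.out.println("fetch.min.bytes="
                + consumerProps().getProperty("fetch.min.bytes"));
    }
}
```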
