> > It's working as intended. You could set the property fetch.min.bytes to a
> > small value to get fewer messages in each batch. Setting it to zero will
> > probably mean you get one message with each batch; at least that was the case
> > when I tried, but I was producing and consuming at t
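The tuning described in the reply above can be sketched as a plain `java.util.Properties` config (the new consumer spells the property `fetch.min.bytes`; the bootstrap server and group id below are placeholder values, not from the thread):

```java
import java.util.Properties;

public class FetchMinBytesConfig {
    public static Properties smallBatchConfig() {
        Properties props = new Properties();
        // Placeholder connection settings; adjust for your cluster.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test-group");
        // The broker answers a fetch as soon as this many bytes are
        // available; 1 effectively disables batching-for-throughput,
        // so each poll returns few (often single) messages.
        props.put("fetch.min.bytes", "1");
        // Upper bound on how long the broker waits for fetch.min.bytes
        // to accumulate before answering anyway.
        props.put("fetch.max.wait.ms", "100");
        return props;
    }

    public static void main(String[] args) {
        Properties p = smallBatchConfig();
        System.out.println("fetch.min.bytes=" + p.getProperty("fetch.min.bytes"));
    }
}
```

The trade-off is throughput: a small `fetch.min.bytes` reduces per-poll latency and batch size at the cost of more fetch round-trips.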
Hi, I started with a clean install of the 0.9 Kafka broker and populated a test
topic with 1 million messages. I then used the console consumer to read
from the beginning offset. Using --new-consumer reads the messages, but it
stalls every x messages or so, and then continues again. It
is v
Actually, looking at the code, the consumer client code can also catch this
exception while iterating over messages. The fetcher thread inserts a special
message before dying, which triggers an exception when the client asks for the
next message
https://github.com/apache/kafka/blob/0.7.2/core/src/main/sca
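The mechanism described above can be sketched with a self-contained analogue (the class, sentinel, and exception names here are illustrative stand-ins, not Kafka's actual 0.7 internals): the fetcher thread enqueues a special sentinel before dying, and the consuming side turns that sentinel back into an exception on the next read.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PoisonPillDemo {
    // Illustrative stand-in for the "special message" the fetcher inserts.
    static final String ERROR_SENTINEL = "__FETCHER_DIED__";

    static class FetcherDiedException extends RuntimeException {
        FetcherDiedException(String msg) { super(msg); }
    }

    /** Consume until the sentinel appears, then rethrow it as an exception. */
    static List<String> consumeAll(BlockingQueue<String> chunks) throws InterruptedException {
        List<String> consumed = new ArrayList<>();
        while (true) {
            String next = chunks.take();
            if (ERROR_SENTINEL.equals(next)) {
                // This is the point where the client's "next message" call
                // surfaces the fetcher's death as an exception.
                throw new FetcherDiedException("fetcher thread exited");
            }
            consumed.add(next);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> chunks = new ArrayBlockingQueue<>(10);
        Thread fetcher = new Thread(() -> {
            try {
                chunks.put("msg-1");
                chunks.put("msg-2");
                chunks.put(ERROR_SENTINEL); // inserted just before the thread dies
            } catch (InterruptedException ignored) { }
        });
        fetcher.start();
        try {
            consumeAll(chunks);
        } catch (FetcherDiedException e) {
            System.out.println("caught: " + e.getMessage());
        }
        fetcher.join();
    }
}
```

The poison-pill pattern matters here because the fetcher runs on its own thread: without the sentinel, the consuming thread would block forever on an empty queue and never learn that the producer side died.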
> OffsetRequest?
> Could you give me an example for this API?
>
> -- Regards
> Sining Ma
>
> -----Original Message-----
> From: Suyog Rao
> To: users
> Sent: Fri, May 24, 2013 1:32 pm
> Subject: Re: OffsetOutOfRangeExc
Since you are using the SimpleConsumer, you will need to handle the
OffsetOutOfRangeException in your code. This happens when your consumer
queries for an offset which is no longer persisted in Kafka (the logs have been
deleted based on the retention policy). Ideally when this happens, the cons
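One common recovery, sketched here without the actual Kafka classes (the method name and range representation are illustrative): when the requested offset falls outside the broker's retained range, clamp it to the earliest or latest retained offset before retrying the fetch.

```java
public class OffsetReset {
    /**
     * Clamp a requested offset into the broker's retained range.
     * If the offset is older than the earliest retained log segment
     * (deleted by retention), restart from the earliest offset; if it
     * is beyond the log end, restart from the latest.
     */
    public static long resetOffset(long requested, long earliest, long latest) {
        if (requested < earliest) {
            return earliest; // data was deleted by retention
        }
        if (requested > latest) {
            return latest;   // asked past the log end
        }
        return requested;    // still valid, no reset needed
    }

    public static void main(String[] args) {
        // Retained range [5000, 9000]; offset 100 was deleted by retention.
        System.out.println(resetOffset(100, 5000, 9000));   // 5000
        System.out.println(resetOffset(12000, 5000, 9000)); // 9000
        System.out.println(resetOffset(7000, 5000, 9000));  // 7000
    }
}
```

With the real SimpleConsumer, the earliest and latest boundaries come from an offset request to the broker; whether to restart from earliest (reprocess what remains) or latest (skip to new data) depends on whether the application can tolerate gaps.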
Hi, in Kafka 0.7.2 we are getting QueueFullExceptions while using the
AsyncProducer with queue.size = 50K and 1 producer. I read that we can make
this internal queue blocking by setting queue.enqueue.timeout.ms = -1. Is that
possible in 0.7.2? On the broker side, the log.flush.interval is set to
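The producer configuration this question describes can be sketched with plain `java.util.Properties`; the property names are the ones quoted in the thread, and whether the blocking `-1` setting exists in 0.7.2 is exactly the open question, so treat this as a sketch, not confirmed 0.7.2 behavior.

```java
import java.util.Properties;

public class AsyncProducerConfig {
    public static Properties blockingQueueConfig() {
        Properties props = new Properties();
        props.put("producer.type", "async");
        // Size of the in-memory queue the async producer buffers into;
        // QueueFullException means sends arrive faster than it drains.
        props.put("queue.size", "50000");
        // -1 makes the caller block when the queue is full instead of
        // throwing QueueFullException (per the thread's reading of the docs).
        props.put("queue.enqueue.timeout.ms", "-1");
        return props;
    }

    public static void main(String[] args) {
        Properties p = blockingQueueConfig();
        System.out.println("queue.enqueue.timeout.ms="
                + p.getProperty("queue.enqueue.timeout.ms"));
    }
}
```

Blocking the caller trades producer latency for no message loss; the alternative is to enlarge `queue.size` or add producers so the queue drains faster.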
Is there a way to get the timestamp for a Kafka message entry on the server from
the consumer? What I would like to do is check if an offset is of a
particular age before pulling offset + bytes to my consumer. Otherwise I would
like the messages to continue to be queued in Kafka until th
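Kafka of this era stores no per-message timestamp the consumer can read; the closest tool is the offset API, which maps a time to log-segment boundary offsets at segment granularity. The age check asked about above can be sketched like this (the segment map is an illustrative stand-in for what the offset API returns, not Kafka's on-disk format):

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class OffsetAgeCheck {
    /**
     * Given segment start times mapped to each segment's first offset,
     * find the first offset whose segment began no earlier than the age
     * cutoff. Returns -1 if every retained segment is older than that.
     * Note the segment granularity: messages near the cutoff inside an
     * older segment are skipped, just as with time-based offset lookup.
     */
    public static long earliestOffsetNewerThan(
            NavigableMap<Long, Long> segmentStartMsToOffset,
            long nowMs, long maxAgeMs) {
        long cutoff = nowMs - maxAgeMs;
        // First segment whose start time is at or after the cutoff.
        Map.Entry<Long, Long> entry = segmentStartMsToOffset.ceilingEntry(cutoff);
        return entry == null ? -1 : entry.getValue();
    }

    public static void main(String[] args) {
        NavigableMap<Long, Long> segments = new TreeMap<>();
        segments.put(1_000L, 0L);     // segment starting at t=1000 -> offset 0
        segments.put(2_000L, 500L);
        segments.put(3_000L, 1_200L);

        // At t=3500, only data newer than 1 second qualifies.
        System.out.println(earliestOffsetNewerThan(segments, 3_500L, 1_000L)); // 1200
    }
}
```

If finer-than-segment precision is needed, the usual workaround is to embed a timestamp in the message payload and filter on the consumer side.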
Hello,
Wanted to check if there is any known limit on the number of topics in a Kafka
cluster? I want to design a system with, say, 5k topics and
multi-threaded consumers reading messages from these topics. Does anyone
have experience with such a large topic count? I see on Kafka's page a test
for th