Yeah, I can do that, but I’d prefer if the first broker didn’t drop out of the
ISR in the first place. Just trying to figure out why it did…
On Feb 21, 2014, at 11:30 PM, Jun Rao wrote:
> So, it sounds like you want the leader to be moved back to the failed
> broker that has caught up. For no
This is a good idea, too. I would modify it to include stream marking, then
you can have:
long end = consumer.lastOffset(tp);
consumer.setMark(end);
while (consumer.beforeMark()) {
    process(consumer.pollToMark());
}
or
long end = consumer.lastOffset(tp);
consumer.setMark(end);
for(Object msg
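To make the semantics of the proposed mark concrete, here is a toy, in-memory stand-in. The names (lastOffset, setMark, beforeMark, pollToMark) follow the sketch above; nothing here is real Kafka client code, it just pins down one plausible contract: messages arriving after the mark is set are never returned.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Toy model of the proposed mark-based consumer (not real Kafka API).
public class MarkedConsumer {
    private final Queue<String> messages = new ArrayDeque<>();
    private long position = 0;   // offset of the next message to return
    private long mark = -1;      // offset we promise not to read past

    public void append(String msg) { messages.add(msg); }

    // Offset one past the last currently-known message.
    public long lastOffset() { return position + messages.size(); }

    public void setMark(long offset) { this.mark = offset; }

    public boolean beforeMark() { return position < mark; }

    // Return everything buffered up to (but never past) the mark.
    public List<String> pollToMark() {
        List<String> batch = new ArrayList<>();
        while (position < mark && !messages.isEmpty()) {
            batch.add(messages.poll());
            position++;
        }
        return batch;
    }
}
```

With this contract, the while-loop above terminates once the pre-mark messages are drained, even if a producer keeps appending behind the mark.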
I think what Robert is saying is that we need to think through the offset
API to enable "batch processing" of topic data. Think of a process that
periodically kicks off to compute a data summary or do a data load or
something like that. I think what we need to support this is an API to
fetch the la
Jun,
I was originally thinking a non-blocking read from a distributed stream should
distinguish between "no local messages, but a fetch is occurring" versus "you
have drained the stream". The reason this may be valuable to me is so I can
write consumers that read all known traffic then termina
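The distinction being asked for could be expressed as a three-valued poll result. The sketch below is purely illustrative (the enum and class names are made up, not Kafka API): a non-blocking poll whose "empty" answer says whether a fetch is still in flight (more data may arrive) or the stream is fully drained (safe to terminate).

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy illustration of a drain-aware non-blocking poll (not real Kafka API).
public class DrainAwareStream {
    public enum PollStatus { MESSAGE, EMPTY_FETCH_IN_FLIGHT, DRAINED }

    public static final class PollResult {
        public final PollStatus status;
        public final String message; // null unless status == MESSAGE
        PollResult(PollStatus s, String m) { status = s; message = m; }
    }

    private final Queue<String> buffered = new ArrayDeque<>();
    private boolean fetchInFlight;

    public DrainAwareStream(boolean fetchInFlight) {
        this.fetchInFlight = fetchInFlight;
    }

    // A fetch response arrived with data.
    public void deliver(String msg) {
        buffered.add(msg);
        fetchInFlight = false;
    }

    // A fetch response arrived empty: nothing more is known upstream.
    public void fetchCompleted() { fetchInFlight = false; }

    // Never blocks, but the empty case is two distinct answers.
    public PollResult poll() {
        if (!buffered.isEmpty()) {
            return new PollResult(PollStatus.MESSAGE, buffered.poll());
        }
        if (fetchInFlight) {
            return new PollResult(PollStatus.EMPTY_FETCH_IN_FLIGHT, null);
        }
        return new PollResult(PollStatus.DRAINED, null);
    }
}
```

A batch consumer can then loop until DRAINED, backing off briefly on EMPTY_FETCH_IN_FLIGHT, and exit knowing it has read all known traffic.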
The consumer offset checker gives this error
arjunn@PRINHYLTPDL0061:~/Kafka/kafka_2.8.0-0.8.0$ bin/kafka-run-class.sh
kafka.tools.ConsumerOffsetChecker --group group1 --zkconnect
localhost:2181,localhost:2182,localhost:2183 --topic test
Group Topic Pid Offset
Hi, please find below the output of the list command and the console producer.
There are no errors in the state-change log or in the controller log.
One thing I found is that this happens only at startup: the first time,
when I start Kafka and try to insert messages, I see this error.
ar