If you are using ZookeeperConsumerConnector, you can delete your
consumer's state from ZooKeeper (/consumers/) and
then restart the consumer with the auto.offset.reset option set to
"largest".
Thanks,
Neha
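A minimal sketch of the relevant consumer configuration for this approach (the group id and ZooKeeper address shown are placeholders):

```properties
# Consumer group whose state under /consumers/ was deleted
group.id=my-consumer-group
zookeeper.connect=localhost:2181
# With no committed offset, start from the current end of the log;
# "smallest" would instead start from the beginning of the topic
auto.offset.reset=largest
```

Note that auto.offset.reset only takes effect when there is no valid committed offset for the group, which is why the ZooKeeper state has to be removed first.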
On Mon, Apr 15, 2013 at 3:33 PM, Alex Zuzin wrote:
> Hi all,
>
> an 0.8 n00b question: how
Hi Jamie,
Here are the steps and guidelines for contributing to Kafka -
http://kafka.apache.org/contributing.html
Please let us know how we can improve those.
Thanks,
Neha
On Mon, Apr 15, 2013 at 4:51 PM, Jamie Wang wrote:
> Does anyone know what is the process for accepting code contribution
Does anyone know what the process is for accepting code contributions such as
bug fixes, etc.? I understand that if there's a bug and we need to fix it, we
must publish the code that fixes the bug, which is totally fine. But I just
want to know the exact process and requirements, as our legal team is asking. Thanks
Hi all,
a 0.8 n00b question: how does one start consuming at the current end of the
stream?
In other words, how does a consumer skip past the entire existing topic upon
connection?
Thank you,
--
"If you can't conceal it well, expose it with all your might"
Alex Zuzin
Philip,
We would not use spooling to local disk on the producer to deal with
problems with the connection to the brokers, but rather to absorb temporary
spikes in traffic that would overwhelm the brokers. This is assuming that
1) those spikes are relatively short, but when they come they require m
Yes, it is.
Thanks,
Jun
On Mon, Apr 15, 2013 at 6:03 AM, Yonghui Zhao wrote:
> Hi,
>
> I want to confirm: is kafka.javaapi.producer.Producer thread safe?
> I.e., can I use one producer to send data from multiple threads at the
> same time?
>
Hi,
I want to confirm: is kafka.javaapi.producer.Producer thread safe?
I.e., can I use one producer to send data from multiple threads at the same time?
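A sketch of what this looks like with the 0.8 producer API: one Producer instance shared by several threads, each calling send() concurrently. The broker address, topic name, and thread count are placeholder values, and this assumes a broker is reachable:

```java
import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class SharedProducerSketch {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");

        // Single producer instance; send() may be called from multiple threads.
        final Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));

        Thread[] senders = new Thread[4];
        for (int i = 0; i < senders.length; i++) {
            final int id = i;
            senders[i] = new Thread(new Runnable() {
                public void run() {
                    producer.send(new KeyedMessage<String, String>(
                            "test-topic", "message from thread " + id));
                }
            });
            senders[i].start();
        }
        // Wait for all sender threads, then release the producer's resources.
        for (Thread t : senders) {
            t.join();
        }
        producer.close();
    }
}
```

Sharing one producer this way is generally preferable to one producer per thread, since each instance holds its own connections to the brokers.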