Thank you very much, Erik.
Yes, the SimpleConsumer would definitely achieve the goal. It was easier with
ImportZkOffsets, as it only takes a couple of lines of commands. If there is no
equivalent of ImportZkOffsets, I will go with the SimpleConsumer.
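(For reference, the couple of lines I mean are roughly the following; the tool
names and flags are from memory of the 0.8.x system tools, so they may differ
slightly in your version:)

    # Snapshot the group's current offsets out of ZooKeeper:
    bin/kafka-run-class.sh kafka.tools.ExportZkOffsets \
        --zkconnect localhost:2181 --group my-group --output-file /tmp/offsets.out

    # Push a saved snapshot back into ZooKeeper:
    bin/kafka-run-class.sh kafka.tools.ImportZkOffsets \
        --zkconnect localhost:2181 --input-file /tmp/offsets.out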
As for the data loss, it took place in the downstream processing.
It is possible to commit offsets to Kafka or ZooKeeper using the SimpleConsumer
API for any (group ID, topic, partition) tuple. There are some difficulties with
the SimpleConsumer, but you should be able to make the call within your app. See
the scaladoc here:
http://apache.mirrorcatalogs.com/ka
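A rough sketch of that call against the 0.8.x Java API is below. The exact shapes
of OffsetCommitRequest and OffsetAndMetadata changed between 0.8.1 and 0.8.2, so
treat the constructors here as assumptions to verify against the scaladoc for your
version; the versionId controls where the broker stores the offset (0 = ZooKeeper,
1 = Kafka on 0.8.2+), and the broker, group, topic and offset values are made up.

    import java.util.Collections;
    import java.util.Map;

    import kafka.common.OffsetAndMetadata;
    import kafka.common.TopicAndPartition;
    import kafka.javaapi.OffsetCommitRequest;
    import kafka.javaapi.OffsetCommitResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    public class CommitOffsetSketch {
        public static void main(String[] args) {
            // Hypothetical broker and client id -- substitute your own.
            SimpleConsumer consumer =
                new SimpleConsumer("broker1", 9092, 100000, 64 * 1024, "offset-reset-client");

            Map<TopicAndPartition, OffsetAndMetadata> offsets = Collections.singletonMap(
                new TopicAndPartition("my-topic", 0),
                new OffsetAndMetadata(12345L, "reset after data loss", System.currentTimeMillis()));

            // versionId 0: broker writes the offset to ZooKeeper; 1 (0.8.2+): to Kafka.
            OffsetCommitRequest request = new OffsetCommitRequest(
                "my-group", offsets, 0 /* correlationId */, "offset-reset-client", (short) 0);

            // Check the response for per-partition error codes before trusting the commit.
            OffsetCommitResponse response = consumer.commitOffsets(request);
            consumer.close();
        }
    }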
Hi,
We have a consumer that under certain circumstances may lose data. To guard
against such data loss, we have a tool that periodically pulls offsets from
ZooKeeper and stores them. Once data loss occurs, we use our historical offsets
to reset the consumer offsets in ZooKeeper.
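(Not our actual tool, but a minimal sketch of the reset step, assuming the standard
/consumers/<group>/offsets/<topic>/<partition> layout and hypothetical group, topic
and offset values:)

    import java.nio.charset.StandardCharsets;

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class ResetZkOffsetSketch {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
            // The high-level consumer keeps the offset as a plain decimal string here;
            // the parent znodes already exist for any group that has committed before.
            String path = "/consumers/my-group/offsets/my-topic/0";
            byte[] offset = "12345".getBytes(StandardCharsets.UTF_8);

            if (zk.exists(path, false) == null) {
                zk.create(path, offset, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            } else {
                zk.setData(path, offset, -1);  // -1 = ignore the znode version
            }
            zk.close();
        }
    }

The consumer has to be stopped while the offsets are rewritten, otherwise it will
commit over the restored values as soon as it checkpoints again.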
With offset.storage=zookeeper