It is possible to commit offsets to Kafka or ZooKeeper for any (group,
topic, partition) tuple using the SimpleConsumer API. The SimpleConsumer
has some rough edges, but you should be able to make the call from within
your app. See the Scala doc here:
http://apache.mirrorcatalogs.com/kafka/0.8.2-beta/scala-doc/index.html#kafka.javaapi.consumer.SimpleConsumer
and look for the commitOffsets function.
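
For example, a rough (untested) sketch in Scala against the 0.8.2 javaapi
might look like the one below. The broker address, group, topic, partition,
and offset are placeholder values, and the exact constructor and response
method names are from memory, so please double-check them against the
scaladoc. Note that when offsets are stored in Kafka, the commit has to
reach the broker acting as offset manager (coordinator) for the group,
which can be discovered with a ConsumerMetadataRequest:

    import kafka.common.{OffsetAndMetadata, TopicAndPartition}
    import kafka.javaapi.OffsetCommitRequest
    import kafka.javaapi.consumer.SimpleConsumer

    object CommitOffsetExample {
      def main(args: Array[String]): Unit = {
        // Placeholder values -- substitute your own group/topic/partition/offset.
        val groupId   = "my-group"
        val tp        = new TopicAndPartition("my-topic", 0)
        val newOffset = 12345L

        // Connect to the broker that is the offset manager for this group.
        val consumer = new SimpleConsumer("broker1.example.com", 9092,
                                          100000, 64 * 1024, "offset-reset-tool")
        try {
          val requestInfo = new java.util.HashMap[TopicAndPartition, OffsetAndMetadata]()
          requestInfo.put(tp,
            new OffsetAndMetadata(newOffset, "reset by tool", System.currentTimeMillis()))

          // versionId 1 and above commit to Kafka (offset.storage=kafka);
          // versionId 0 writes the offset to ZooKeeper instead.
          val request = new OffsetCommitRequest(groupId, requestInfo,
                                                0 /* correlationId */,
                                                "offset-reset-tool",
                                                1.toShort /* versionId */)

          val response = consumer.commitOffsets(request)
          if (response.hasError)
            println("Commit failed, per-partition error codes: " + response.errors)
        } finally {
          consumer.close()
        }
      }
    }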
 

I am curious: in what situations does the data loss occur?
-Erik  


On 9/9/15, 4:17 PM, "Ye Hong" <ye.h...@audiencescience.com> wrote:

>Hi,
>
>We have a consumer that under certain circumstances may lose data. To
>guard against such data loss, we have a tool that periodically pulls and
>stores offsets from zk. Once a data loss takes place, we use our
>historical offsets to reset the consumer offset on zk.
>With offset.storage=zookeeper, the tool simply calls
>kafka-run-class.sh kafka.tools.ExportZkOffsets/ImportZkOffsets. However,
>after moving to offset.storage=kafka, we can no longer call
>ExportZkOffsets/ImportZkOffsets.
>For offset export, I suppose we can call the REST API of Burrow to get
>the same results. However, I couldn't find an easy way to reset offsets
>that's comparable to ImportZkOffsets. Could someone shed some light on
>what we should do?
>
>Thanks!
