This problem is caused by the consumer offset lagging behind the log: the
messages at the stored offset were deleted by retention, and I then used
that stale offset to fetch messages that no longer exist.

Can this be resolved by setting `auto.offset.reset=largest`?
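For reference, a minimal sketch of the relevant 0.8 high-level consumer
configuration; the ZooKeeper address and group id below are placeholders:

```
# consumer.properties (sketch -- host and group values are examples)
zookeeper.connect=localhost:2181
group.id=test-group
# "largest" resets to the latest offset when the stored offset is out of
# range (e.g. the segments were deleted by retention); "smallest" resets
# to the earliest offset still retained instead.
auto.offset.reset=largest
```

Note that `largest` skips over whatever was not consumed before deletion,
while `smallest` re-reads everything still on disk.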

2015-12-24 16:32 GMT+08:00 Fredo Lee <buaatianwa...@gmail.com>:

> I ran into some problems while using kafka-0.8.2.0.
>
> I created a topic called `test` with 60 partitions, a replication-factor
> of 2, and log.retention.hours set to 24, then sent some messages to `test`.
> Some days later I created a consumer for this topic, but I got an `out of
> range` error (I store my offsets on Kafka).
>
> So my questions are:
>
> 1. Is the `__consumer_offsets` topic just a normal topic like the others?
> Is its log.retention.hours the same as for other topics?
> 2. If I set log.retention.hours to 24 and store some offsets in
> `__consumer_offsets-32/00000000000000.log`, which offset will I get
> after 24 hours?
>
> Because of log deletion, will I get offset 0?
> I suspect that the `out of range` exception I described above is caused
> by this.
>
> My English is poor; I hope people can understand.
>
>
> WanliTian
>
