I have also asked this question before, and others have too, so I'm including a nice, detailed response from Grant below. My only other wish is that it were possible to forcibly reset the offsets to zero when needed. Even though it is unlikely that we will ever exhaust the whole range of values, once offsets grow to 8, 9, 10 or more digits (when processing many millions of events per day) it is hard to read them at a glance without counting the digits :) I would prefer to be able to reset the offsets to 0 every month or so instead.
I can't be sure of how every client will handle it, it is probably not likely to happen, and there could potentially be unforeseen issues. That said, given that offsets are stored in a (signed) Long, I would suspect that it would roll over to negative values and increment from there. That means instead of 9,223,372,036,854,775,807 potential offset values, you actually have 18,446,744,073,709,551,614 potential values. To put that into perspective, if we assign 1 byte to each offset, that's just over 18 exabytes. You will likely run into many other issues before offset rollover, long before you are able to retain 18 exabytes in a single Kafka topic. (And if not, I would evaluate breaking the topic up into multiple smaller ones.)

Thanks,
Grant

> On Thu, Oct 1, 2015 at 4:22 AM, Chad Lung <chad.l...@gmail.com> wrote:
>
> I saw a previous question (http://search-hadoop.com/m/uyzND1lrGUW1PgKGG)
> on offset rollovers, but it doesn't look like it was ever answered.
>
> Does anyone know what happens when the offset max limit is reached?
> Overflow, or something else?
>
> Thanks,
>
> Chad

From: Joe San <codeintheo...@gmail.com>
To: users@kafka.apache.org
Sent: Friday, February 5, 2016 12:17 AM
Subject: Maximum Offset

What is the maximum offset? I guess it is tied to the data type, Long.MAX_VALUE? What happens after that? Is the commit log reset automatically after it hits the maximum value?
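Editor's note: Grant's reply is reasoning about two's-complement wraparound of a signed Java long, not documented Kafka broker behavior (which nobody in the thread has verified). A minimal, self-contained sketch of that arithmetic, under the assumption that an offset behaves like a plain Java long, is below:

    public class OffsetRolloverSketch {
        public static void main(String[] args) {
            // A Kafka offset is a signed 64-bit long; the largest value is Long.MAX_VALUE.
            long maxOffset = Long.MAX_VALUE;      // 9,223,372,036,854,775,807

            // If a broker or client simply incremented past the maximum, two's-complement
            // arithmetic would wrap to the most negative value and count up from there.
            long wrapped = maxOffset + 1;         // -9,223,372,036,854,775,808
            System.out.println("After overflow: " + wrapped);

            // Back-of-envelope check of Grant's estimate: even at only one byte per
            // record, reaching Long.MAX_VALUE offsets would mean roughly 8 EiB of data
            // in a single partition before the positive range is exhausted.
            double exbibytes = maxOffset / Math.pow(1024, 6);
            System.out.printf("Long.MAX_VALUE bytes ~= %.1f EiB%n", exbibytes);
        }
    }

This only illustrates the numbers being discussed; whether consumers, log indexes, and tooling would actually tolerate negative offsets is exactly the "unforeseen issues" caveat in Grant's reply.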