Hey guys, I am dealing with a similar problem and hoping a similar solution can help me out. Looking for some feedback on this problem and potential solution:
So I am reading messages from a topic, then doing some synchronous processing in the thread handling the consumer iterator, then issuing an asynchronous write to our persistent data store. (It's asynchronous for performance reasons; if we flushed it through on every write it would be very slow.) So the current message offset of the consumer thread won't necessarily correspond to what has been flushed to the persistent data store. I'm thinking I can keep track, per partition, of the earliest message offset whose flush is still outstanding, and then, in a callback on every flush, also commit the appropriate offset for each partition to ZooKeeper.

Thanks in advance,
--Ian

On Apr 23, 2014, at 12:01 PM, Seshadri, Balaji <balaji.sesha...@dish.com> wrote:

> I'm not seeing that API in the Java MessageAndMetadata; is it part of
> ConsumerIterator?
>
> -----Original Message-----
> From: Jun Rao [mailto:jun...@gmail.com]
> Sent: Wednesday, April 23, 2014 8:47 AM
> To: users@kafka.apache.org
> Subject: Re: commitOffsets by partition 0.8-beta
>
> The checkpointed offset should be the offset of the next message to be
> consumed, so you should save mAndM.nextOffset().
>
> Thanks,
>
> Jun
>
> On Tue, Apr 22, 2014 at 8:57 PM, Seshadri, Balaji <balaji.sesha...@dish.com> wrote:
>
>> Yes, I disabled it.
>>
>> My doubt is whether the path should hold the offset to be consumed next
>> or the last consumed offset.
>>
>> -----Original Message-----
>> From: Jun Rao [mailto:jun...@gmail.com]
>> Sent: Tuesday, April 22, 2014 9:52 PM
>> To: users@kafka.apache.org
>> Subject: Re: commitOffsets by partition 0.8-beta
>>
>> Do you have auto commit disabled?
>>
>> Thanks,
>>
>> Jun
>>
>> On Tue, Apr 22, 2014 at 7:10 PM, Seshadri, Balaji <balaji.sesha...@dish.com> wrote:
>>
>>> I'm updating the latest consumed offset in the ZooKeeper directory.
>>>
>>> Say, for example, my last consumed message has an offset of 5 and I
>>> update that in the path, but when I check the ZooKeeper path some time
>>> later it shows 6.
>>>
>>> Does any other process update it?
>>>
>>> ________________________________________
>>> From: Seshadri, Balaji
>>> Sent: Friday, April 18, 2014 11:50 AM
>>> To: 'users@kafka.apache.org'
>>> Subject: RE: commitOffsets by partition 0.8-beta
>>>
>>> Thanks, Jun.
>>>
>>> -----Original Message-----
>>> From: Jun Rao [mailto:jun...@gmail.com]
>>> Sent: Friday, April 18, 2014 11:37 AM
>>> To: users@kafka.apache.org
>>> Subject: Re: commitOffsets by partition 0.8-beta
>>>
>>> We don't have the ability to commit offsets at the partition level right
>>> now. This feature probably won't be available until we are done with the
>>> consumer rewrite, which is 3-4 months away.
>>>
>>> If you want to do something now and don't want to use SimpleConsumer,
>>> another hacky way is to turn off auto offset commit and have the app
>>> write the offset to the right path in ZK itself.
>>>
>>> Thanks,
>>>
>>> Jun
>>>
>>> On Fri, Apr 18, 2014 at 10:02 AM, Seshadri, Balaji <balaji.sesha...@dish.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> We have a use case at DISH where we need to stop the consumer when we
>>>> have issues writing through to the database or another back end.
>>>>
>>>> We update the offset manually for each consumed message. There are,
>>>> say, 4 threads consuming from the same connector, and when one thread
>>>> commits the offset there is a chance that data for all the other
>>>> threads also gets committed.
>>>>
>>>> We don't want to go to prod with this, as we are taking the first step
>>>> of replacing a traditional broker with Kafka for a business-critical
>>>> process. Is it OK if we add a commitOffset(topic, partition) method
>>>> that commits only the data consumed by that particular thread?
>>>>
>>>> At this point we don't want to change our framework to use
>>>> SimpleConsumer, as that is a lot of work for us.
>>>>
>>>> Please let us know the effect of committing the offset per partition
>>>> consumed by each thread. We have around 131 partitions per topic and
>>>> around 20 topics.
>>>>
>>>> Thanks,
>>>>
>>>> Balaji
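
For reference, here is a minimal sketch of the hacky approach Jun describes above: turn off auto commit (auto.commit.enable=false in the consumer config) and write the offset yourself into the ZooKeeper path the 0.8 high-level consumer reads on startup, /consumers/<group>/offsets/<topic>/<partition>. Per Jun's note, the value stored should be the offset of the next message to consume, i.e. last consumed offset + 1 (the nextOffset() accessor may not be exposed in the Java MessageAndMetadata, as Balaji points out, so it is computed here). The ZkOffsetCommitter class and its inline string serializer are only illustration, not anything shipped with Kafka:

import java.nio.charset.StandardCharsets;

import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.exception.ZkMarshallingError;
import org.I0Itec.zkclient.serialize.ZkSerializer;

/**
 * Commits an offset for one (topic, partition) by writing it to the path the
 * 0.8 high-level consumer reads: /consumers/<group>/offsets/<topic>/<partition>.
 * Assumes auto.commit.enable=false on the consumer.
 */
public class ZkOffsetCommitter {

    // Plain UTF-8 string serializer so the stored value is a readable number.
    private static final ZkSerializer STRING_SERIALIZER = new ZkSerializer() {
        public byte[] serialize(Object data) throws ZkMarshallingError {
            return data.toString().getBytes(StandardCharsets.UTF_8);
        }
        public Object deserialize(byte[] bytes) throws ZkMarshallingError {
            return bytes == null ? null : new String(bytes, StandardCharsets.UTF_8);
        }
    };

    private final ZkClient zkClient;
    private final String group;

    public ZkOffsetCommitter(String zkConnect, String group) {
        this.zkClient = new ZkClient(zkConnect, 30000, 30000, STRING_SERIALIZER);
        this.group = group;
    }

    /** Store the offset of the NEXT message to consume, i.e. lastConsumedOffset + 1. */
    public void commit(String topic, int partition, long nextOffset) {
        String path = "/consumers/" + group + "/offsets/" + topic + "/" + partition;
        if (!zkClient.exists(path)) {
            zkClient.createPersistent(path, true); // create parent nodes as needed
        }
        zkClient.writeData(path, Long.toString(nextOffset));
    }
}

Writing a plain numeric string matters here: the consumer parses the stored value back as a long, whereas ZkClient's default serializer would Java-serialize the string instead.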
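And a sketch of the bookkeeping Ian describes at the top of the thread: per partition, track which offsets have been handed to the asynchronous writer but not yet flushed, and from the flush callback commit only up to the earliest still-outstanding offset. The class and method names are made up for illustration; the value returned by markFlushed can be passed to a commit call such as the ZkOffsetCommitter sketch above.

import java.util.HashMap;
import java.util.Map;
import java.util.TreeSet;

/**
 * Tracks async writes per partition so that only offsets known to be flushed
 * to the persistent store are ever committed.
 */
public class PartitionFlushTracker {

    // offsets handed to the async writer but not yet confirmed flushed, per partition
    private final Map<Integer, TreeSet<Long>> inFlight = new HashMap<>();
    // offset of the next message we would consume, per partition (last seen + 1)
    private final Map<Integer, Long> nextToConsume = new HashMap<>();

    /** Consumer thread: record the message before issuing the async write. */
    public synchronized void markDispatched(int partition, long offset) {
        inFlight.computeIfAbsent(partition, p -> new TreeSet<>()).add(offset);
        nextToConsume.put(partition, offset + 1);
    }

    /**
     * Flush callback: the write for this offset is now durable. Returns the
     * offset that is safe to commit for the partition (i.e. the next offset
     * to consume on restart), or -1 if nothing can be committed yet.
     */
    public synchronized long markFlushed(int partition, long offset) {
        TreeSet<Long> outstanding = inFlight.get(partition);
        if (outstanding == null || !outstanding.remove(offset)) {
            return -1;
        }
        if (outstanding.isEmpty()) {
            // every dispatched message is flushed; commit up to the next offset to consume
            return nextToConsume.getOrDefault(partition, -1L);
        }
        // otherwise commit up to (but not including) the earliest still-unflushed offset
        return outstanding.first();
    }
}

Because the committable position never moves past the earliest unflushed offset, out-of-order flush acknowledgements are safe; a crash simply replays a few already-flushed messages rather than skipping unflushed ones.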