Robert - "exactly once" will be really hard unless you can commit the offset with the aggregate in an atomic way downstream. That would defend you against redeliveries (ignore updates from lower offset) while allowing you to be more failure tolerant on the consuming side.
On Sat, Feb 15, 2014 at 12:46 PM, Robert Withers <robert.w.with...@gmail.com> wrote:

> We have this version in prod. It has been fine as we commitOffsets after
> every message. After 3 months of this, we rolled out Hadoop, which
> requires aggregation. We started committing every 2 minutes and we saw
> that the fetcher would get tangled and stop fetching, and the consumer
> would get stuck blocking on the iterator. We changed back to committing
> after every message and have had no issues.
>
> NB: could we use the high-level consumer and consume blocks of msgs at a
> time? Then commitOffsets on the block? This would help the exactly-once
> semantics of an aggregating consumer.
>
> Thank you,
> Robert
>
>
> On Feb 15, 2014, at 12:55 PM, Clark Breyman <cl...@breyman.com> wrote:
> >
> > Thanks Bae. I'll report back with our experiences.
> >
> >
> >> On Sat, Feb 15, 2014 at 10:48 AM, Bae, Jae Hyeon <metac...@gmail.com> wrote:
> >>
> >> Netflix is using kafka 0.7 and 0.8 with zk 3.4.5, very stable.
> >>
> >>> On Saturday, February 15, 2014, Todd Palino <tpal...@linkedin.com> wrote:
> >>>
> >>> We're not at the moment, but I'd definitely be interested in hearing
> >>> your results if you do. We're going to be experimenting with the
> >>> latest version soon to evaluate it.
> >>>
> >>> -Todd
> >>>
> >>> On 2/14/14 4:32 PM, "Clark Breyman" <cl...@breyman.com> wrote:
> >>>
> >>>> Is anyone running 0.8 (or pre-0.8.1) with the latest Zookeeper? Any
> >>>> known compatibility issues? I didn't see any in JIRA but thought I'd
> >>>> give a shout.
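P.S. On the block-commit question above, a rough sketch of how that could look with the 0.8 high-level consumer, disabling auto-commit and calling commitOffsets() once per block. The topic, group, block size, and timeout values are placeholders; consumer.timeout.ms makes the otherwise-blocking iterator throw ConsumerTimeoutException so a partial block can be flushed when the stream goes idle. This doesn't speak to the stalled-fetcher behaviour seen with 2-minute commits, just to the block-commit pattern itself.

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.ConsumerTimeoutException;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class BlockCommitConsumer {

    public static void run() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");   // placeholder
        props.put("group.id", "aggregating-consumer");      // placeholder
        props.put("auto.commit.enable", "false");           // commit per block instead
        props.put("consumer.timeout.ms", "10000");          // lets the iterator time out so partial blocks flush

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        Map<String, Integer> topicCount = new HashMap<String, Integer>();
        topicCount.put("my-topic", 1);                       // placeholder topic, one stream
        KafkaStream<byte[], byte[]> stream =
                connector.createMessageStreams(topicCount).get("my-topic").get(0);
        ConsumerIterator<byte[], byte[]> it = stream.iterator();

        final int blockSize = 1000;                          // placeholder block size
        int inBlock = 0;
        while (true) {
            try {
                MessageAndMetadata<byte[], byte[]> msg = it.next();  // blocks up to consumer.timeout.ms
                aggregate(msg.message());                            // your aggregation step
                inBlock++;
                if (inBlock >= blockSize) {
                    connector.commitOffsets();                       // one offset commit per block
                    inBlock = 0;
                }
            } catch (ConsumerTimeoutException e) {
                if (inBlock > 0) {
                    connector.commitOffsets();                       // stream went idle: flush the partial block
                    inBlock = 0;
                }
            }
        }
    }

    private static void aggregate(byte[] payload) {
        // fold the message into the in-flight aggregate here
    }
}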