I was thinking more about this. Successfully writing a block of msgs to HDFS is the atomic commit, downstream. However, it is not a 2- or 3-phase transaction with rollback. The issue is the difference in scope between a downstream aggregate commit and an exactly-once upstream offset commit. I started on an aggregating consumer, but since the iterator is blocking, it is a bit wonky. Either a non-blocking next() on the iterator (throw an exception if no msg is available) or a nextMsgs() that returns a block of msgs would help a lot. Doing so within a 2- or 3-phase transaction would be a bonus. Roughly, I am picturing something like the sketch below.
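Here is the shape I have in mind, assuming the 0.8 high-level consumer: set consumer.timeout.ms so the iterator throws ConsumerTimeoutException instead of blocking forever, fill a batch until it is full or the timeout fires, then commitOffsets() only after the block is durable. BATCH_SIZE, the topic name, and writeBlockToHdfs() are placeholders I made up for the sketch, not anything we actually run:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.ConsumerTimeoutException;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class BatchingConsumer {

    private static final int BATCH_SIZE = 1000;     // placeholder
    private static final String TOPIC = "events";   // placeholder

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "hdfs-aggregator");
        props.put("auto.commit.enable", "false");    // we commit after each block
        props.put("consumer.timeout.ms", "500");     // iterator throws instead of blocking

        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        Map<String, Integer> topicCount = new HashMap<String, Integer>();
        topicCount.put(TOPIC, 1);
        KafkaStream<byte[], byte[]> stream =
            connector.createMessageStreams(topicCount).get(TOPIC).get(0);
        ConsumerIterator<byte[], byte[]> it = stream.iterator();

        List<byte[]> batch = new ArrayList<byte[]>(BATCH_SIZE);
        while (true) {
            try {
                // fill the batch; hasNext() throws ConsumerTimeoutException if idle
                while (batch.size() < BATCH_SIZE && it.hasNext()) {
                    MessageAndMetadata<byte[], byte[]> mm = it.next();
                    batch.add(mm.message());
                }
            } catch (ConsumerTimeoutException e) {
                // no msg within consumer.timeout.ms; flush whatever we have
            }
            if (!batch.isEmpty()) {
                writeBlockToHdfs(batch);      // placeholder for the downstream block write
                connector.commitOffsets();    // only after the block is durable
                batch.clear();
            }
        }
    }

    private static void writeBlockToHdfs(List<byte[]> block) {
        // placeholder for the HDFS write that makes the block durable
    }
}

Note this still only gets at-least-once: a crash between the HDFS write and commitOffsets() redelivers the block, which is exactly where your point about an atomic downstream commit comes in.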
On Feb 15, 2014, at 2:34 PM, Clark Breyman <cl...@breyman.com> wrote:

> Robert - "exactly once" will be really hard unless you can commit the
> offset with the aggregate in an atomic way downstream. That would defend
> you against redeliveries (ignore updates from lower offset) while allowing
> you to be more failure tolerant on the consuming side.
>
> On Sat, Feb 15, 2014 at 12:46 PM, Robert Withers <robert.w.with...@gmail.com> wrote:
>
>> We have this version in prod. It has been fine as we commitOffsets after
>> every message. After 3 months of this, we rolled out Hadoop, which
>> requires aggregation. We started committing every 2 minutes and we saw that
>> the fetcher would get tangled and stop fetching and the consumer would get
>> stuck blocking on the iterator. We changed back to after every msg and no
>> issues.
>>
>> NB: could we use the high level consumer and consume blocks of msgs at a
>> time? Then commitOffsets on the block? This would help the exactly once
>> of an aggregating consumer.
>>
>> Thank you,
>> Robert
>>
>>> On Feb 15, 2014, at 12:55 PM, Clark Breyman <cl...@breyman.com> wrote:
>>>
>>> Thanks Bae. I'll report back with our experiences.
>>>
>>>> On Sat, Feb 15, 2014 at 10:48 AM, Bae, Jae Hyeon <metac...@gmail.com> wrote:
>>>>
>>>> Netflix is using kafka 0.7 and 0.8 with zk 3.4.5, very stable.
>>>>
>>>>> On Saturday, February 15, 2014, Todd Palino <tpal...@linkedin.com> wrote:
>>>>>
>>>>> We're not at the moment, but I'd definitely be interested in hearing your
>>>>> results if you do. We're going to be experimenting with the latest version
>>>>> soon to evaluate it.
>>>>>
>>>>> -Todd
>>>>>
>>>>> On 2/14/14 4:32 PM, "Clark Breyman" <cl...@breyman.com> wrote:
>>>>>
>>>>>> Is anyone running 0.8 (or pre-0.8.1) with the latest Zookeeper? Any known
>>>>>> compatibility issues? I didn't see any in JIRA but thought I'd give a
>>>>>> shout.
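PS - Clark, a sketch of what I take you to mean by committing the offset with the aggregate atomically and ignoring redeliveries at lower offsets. AggregateStore and both of its methods are made-up placeholders, not any real API:

public class IdempotentApplier {

    interface AggregateStore {
        long lastCommittedOffset(int partition);                               // placeholder
        void commitAtomically(int partition, long offset, byte[] aggregate);   // placeholder
    }

    private final AggregateStore store;

    public IdempotentApplier(AggregateStore store) {
        this.store = store;
    }

    public void apply(int partition, long offset, byte[] aggregate) {
        // A redelivery after a consumer restart shows up as an offset we have
        // already folded into the aggregate; drop it instead of double counting.
        if (offset <= store.lastCommittedOffset(partition)) {
            return;
        }
        // One downstream transaction covers both the aggregate and the offset,
        // so a crash can never leave them out of sync.
        store.commitAtomically(partition, offset, aggregate);
    }
}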