Yes, I was clearly confused :-)

On Fri, Apr 24, 2015 at 9:37 AM, Sean Lydon <lydon.s...@gmail.com> wrote:
> Thanks for the responses. Ewen is correct that I am referring to the
> *new* consumer (org.apache.kafka.clients.consumer.KafkaConsumer).
>
> I am extending the consumer to allow my applications more control over
> committed offsets. I really want to get away from ZooKeeper (so using
> the Kafka offset storage), and rebalancing is something I haven't really
> needed to tackle in an automated/seamless way. Either way, I'll hold
> off going further down this road until there is more interest.
>
> @Gwen
> I set up a single consumer without partition.assignment.strategy or
> rebalance.callback.class. I was unable to subscribe to just a topic
> ("Unknown api code 11" on broker), but I could subscribe to a
> TopicPartition. This makes sense, as I would need to handle rebalancing
> outside the consumer. Things functioned as expected (well, I have an
> additional minor fix to the code from KAFKA-2121), and the only
> exceptions on the broker were due to closing consumers (which I have
> become accustomed to). My tests are specific to my extended version of
> the consumer, but they basically do a little writing and reading with
> different serde classes and application-controlled commits (similar to
> onSuccess and onFailure after each record, but with tolerance for
> out-of-order acknowledgements).
>
> If you are interested, here is the patch of the hack against trunk.
>
> On Thu, Apr 23, 2015 at 10:27 PM, Ewen Cheslack-Postava
> <e...@confluent.io> wrote:
> > @Neha I think you're mixing up the 0.8.1/0.8.2 update and the
> > 0.8.2/0.8.3 update that's being discussed here?
> >
> > I think the original question was about using the *new* consumer
> > ("clients consumer") with 0.8.2. Gwen's right, it will use features
> > not even implemented in the broker in trunk yet, let alone in 0.8.2.
> >
> > I don't think the "enable.commit.downgrade" type option, or supporting
> > the old protocol with the new consumer at all, makes much sense. You'd
> > end up with some weird hybrid of simple and high-level consumers -- you
> > could use offset storage, but you'd have to manage rebalancing yourself
> > since none of the coordinator support would be there.
> >
> > On Thu, Apr 23, 2015 at 9:22 PM, Neha Narkhede <n...@confluent.io> wrote:
> >
> >> My understanding is that ideally the 0.8.3 consumer should work with a
> >> 0.8.2 broker if the offset commit config was set to "zookeeper".
> >>
> >> The only thing that might not work is offset commit to Kafka, which
> >> makes sense since the 0.8.2 broker does not support Kafka-based offset
> >> management.
> >>
> >> If we broke all kinds of offset commits, then it seems like a
> >> regression, no?
> >>
> >> On Thu, Apr 23, 2015 at 7:26 PM, Gwen Shapira <gshap...@cloudera.com>
> >> wrote:
> >>
> >> > I didn't think the 0.8.3 consumer would ever be able to talk to a
> >> > 0.8.2 broker... there are some essential pieces that are missing in
> >> > 0.8.2 (Coordinator, Heartbeat, etc). Maybe I'm missing something.
> >> > It would be nice if this works :)
> >> >
> >> > Mind sharing what / how you tested? Were there no errors in the
> >> > broker logs after your fix?
> >> >
> >> > On Thu, Apr 23, 2015 at 5:37 PM, Sean Lydon <lydon.s...@gmail.com> wrote:
> >> > > Currently the clients consumer (trunk) sends offset commit
> >> > > requests of version 2. The 0.8.2 brokers fail to handle this
> >> > > particular request with a:
> >> > >
> >> > > java.lang.AssertionError: assertion failed: Version 2 is invalid for
> >> > > OffsetCommitRequest. Valid versions are 0 or 1.
> >> > >
> >> > > I was able to make this work via a forceful downgrade of this
> >> > > particular request, but I would like some feedback on whether an
> >> > > "enable.commit.downgrade" configuration would be a tolerable
> >> > > method to allow 0.8.3 consumers to interact with 0.8.2 brokers.
> >> > > I'm also interested in whether this is even a goal worth pursuing.
> >> > >
> >> > > Thanks,
> >> > > Sean
> >> >
> >>
> >>
> >> --
> >> Thanks,
> >> Neha
> >
> >
> > --
> > Thanks,
> > Ewen

--
Thanks,
Neha
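
For concreteness, here is a minimal sketch of the pattern Sean describes
above: pinning the new consumer to a single TopicPartition (no
partition.assignment.strategy or rebalance callback, so the group
coordinator never gets involved) and committing offsets under application
control after each record. This is illustrative Java against the released
new-consumer API, where per-partition subscription became assign(); the
broker address, group, and topic names are made up, and this is not Sean's
extended consumer or his patch.

    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class ManualPartitionConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // made-up address
            props.put("group.id", "example-group");           // made-up group
            props.put("enable.auto.commit", "false");         // app controls commits
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            TopicPartition tp = new TopicPartition("example-topic", 0);

            // Pin to one partition instead of subscribing to the topic, so
            // coordinator-driven rebalancing and heartbeats never come into play.
            consumer.assign(Collections.singletonList(tp));

            try {
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    for (ConsumerRecord<String, String> record : records) {
                        handle(record); // application-level "onSuccess" point
                        // Commit after each record; the committed offset is the
                        // position of the *next* record to consume.
                        consumer.commitSync(Collections.singletonMap(
                                tp, new OffsetAndMetadata(record.offset() + 1)));
                    }
                }
            } finally {
                consumer.close();
            }
        }

        private static void handle(ConsumerRecord<String, String> record) {
            System.out.printf("offset %d: %s%n", record.offset(), record.value());
        }
    }

Because commitSync names explicit offsets, the out-of-order-acknowledgement
tolerance Sean mentions can be built on top of this by tracking completed
records and committing only the lowest contiguous processed offset.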
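
On the "enable.commit.downgrade" proposal itself: the idea amounts to
selecting the OffsetCommitRequest wire version from a config flag, since
0.8.2 brokers assert that the version must be 0 or 1 and reject the
version-2 request the trunk consumer sends. A toy sketch of just that
decision follows; the class, constant, and method names are invented for
illustration, and the real consumer's request layer looks nothing like this.

    /**
     * Toy illustration of the proposed "enable.commit.downgrade" flag:
     * choose which OffsetCommitRequest wire version the consumer sends.
     * All names here are hypothetical.
     */
    public class CommitDowngradeSketch {
        // Highest version a 0.8.2 broker accepts (valid versions are 0 or 1).
        static final short LATEST_082_VERSION = 1;
        // Version sent by the trunk (0.8.3-era) consumer; a 0.8.2 broker
        // fails it with "Version 2 is invalid for OffsetCommitRequest".
        static final short TRUNK_VERSION = 2;

        static short offsetCommitVersion(boolean enableCommitDowngrade) {
            return enableCommitDowngrade ? LATEST_082_VERSION : TRUNK_VERSION;
        }

        public static void main(String[] args) {
            System.out.println(offsetCommitVersion(true));  // 1 -> accepted by 0.8.2
            System.out.println(offsetCommitVersion(false)); // 2 -> rejected by 0.8.2
        }
    }

As Ewen notes above, even with the downgraded commit the consumer would
still lack coordinator-driven rebalancing against a 0.8.2 broker, so the
flag would only make sense alongside application-managed assignment.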