Jay, thanks for the response.
Regarding the new consumer API for 0.9, I've been reading through the code
for it and thinking about how it fits into the existing Spark integration.
So far I've seen some interesting challenges, and if you (or anyone else on
the dev list) have time to provide some help, I'd appreciate it.

The 0.9 release still has the old consumer, as Jay mentioned, but this
specific release is a little unusual in that it also provides a completely
new consumer client.
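
For anyone who hasn't looked at it yet, the new 0.9 consumer is used
roughly like this (a minimal sketch; the broker address, group id, and
topic name below are placeholders, not anything from this thread):

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NewConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "spark-kafka-test");        // placeholder group id
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        // Unlike the old high-level consumer, the new client talks to the
        // brokers directly; it needs no ZooKeeper connection.
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("test-topic")); // placeholder topic
        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                        record.partition(), record.offset(), record.value());
                }
            }
        } finally {
            consumer.close();
        }
    }
}
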
> Based on what I understand, users of Kafka need to upgrade their brokers
> to Kafka 0.9.x first, before they upgrade their clients to Kafka 0.9.x.

Thanks Jay. Yeah, if we were able to use the old consumer API from 0.9
clients to work with 0.8 brokers, that would have been super helpful here.
I am just trying to avoid a scenario where Spark cares about new features
from every new major release of Kafka (which is a good thing) but ends up
having to maintain a separate integration for every broker version it
supports.

Dropping the Kafka list since this is about a slightly different topic.
Every time we have exposed the API of a 3rd-party application as a public
Spark API, it has caused problems down the road. This goes for Hadoop,
Tachyon, Kafka, and Guava. Most of these are used for input/output.
The good thing is that...
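
To make the concern concrete, the usual mitigation is to keep the
3rd-party type off the public surface and confine it to a single adapter.
A minimal sketch of that pattern (all names here are hypothetical, not
actual Spark API):

import org.apache.kafka.clients.consumer.ConsumerRecord;

// Hypothetical Spark-owned record type: callers compile against this class,
// never against Kafka's ConsumerRecord, so a Kafka client upgrade only
// touches the one adapter method at the bottom.
public final class SparkKafkaRecord<K, V> {
    private final String topic;
    private final int partition;
    private final long offset;
    private final K key;
    private final V value;

    public SparkKafkaRecord(String topic, int partition, long offset,
                            K key, V value) {
        this.topic = topic;
        this.partition = partition;
        this.offset = offset;
        this.key = key;
        this.value = value;
    }

    public String topic() { return topic; }
    public int partition() { return partition; }
    public long offset() { return offset; }
    public K key() { return key; }
    public V value() { return value; }

    // The only place the foreign type appears in the codebase.
    static <K, V> SparkKafkaRecord<K, V> fromKafka(ConsumerRecord<K, V> r) {
        return new SparkKafkaRecord<>(r.topic(), r.partition(), r.offset(),
            r.key(), r.value());
    }
}

The trade-off, of course, is an extra copy per record and another layer to
maintain, which is exactly the kind of cost being weighed in this thread.
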
Hi Kafka devs,
I come to you with a dilemma and a request.
Based on what I understand, users of Kafka need to upgrade their brokers to
Kafka 0.9.x first, before they upgrade their clients to Kafka 0.9.x.
However, that presents a problem for other projects that integrate with
Kafka (Spark, Flume, S...