Thanks Neelesh.
I also have a plan to migrate the offset management from ZooKeeper to Kafka's
topic-based offset storage (introduced in Kafka 0.8.2). That will make the
consumer much faster. If you plan to contribute your work to this consumer,
that would be great as well.
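To illustrate the idea behind that migration: today each consumed offset is written to a ZooKeeper znode, whereas Kafka 0.8.2's offset-commit API lets consumers commit offsets to a Kafka-managed internal topic instead, avoiding per-commit ZooKeeper round trips. The sketch below models the two strategies behind a common interface with in-memory stand-ins; all class and method names here are hypothetical, not the project's actual code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical abstraction over where consumed offsets are persisted.
interface OffsetStore {
    void commit(String topic, int partition, long offset);
    long fetch(String topic, int partition); // -1 if never committed
}

// Stand-in for ZooKeeper-based storage: one znode write per commit.
class ZkOffsetStore implements OffsetStore {
    private final Map<String, Long> znodes = new HashMap<>();
    public void commit(String topic, int partition, long offset) {
        znodes.put("/consumers/group/offsets/" + topic + "/" + partition, offset);
    }
    public long fetch(String topic, int partition) {
        return znodes.getOrDefault(
            "/consumers/group/offsets/" + topic + "/" + partition, -1L);
    }
}

// Stand-in for Kafka 0.8.2 topic-based storage: commits go to a
// broker-managed offsets topic keyed by (topic, partition).
class KafkaTopicOffsetStore implements OffsetStore {
    private final Map<String, Long> offsetsTopic = new HashMap<>();
    public void commit(String topic, int partition, long offset) {
        offsetsTopic.put(topic + "-" + partition, offset);
    }
    public long fetch(String topic, int partition) {
        return offsetsTopic.getOrDefault(topic + "-" + partition, -1L);
    }
}

public class OffsetStoreSketch {
    public static void main(String[] args) {
        OffsetStore store = new KafkaTopicOffsetStore(); // or new ZkOffsetStore()
        store.commit("my-topic", 0, 42L);
        System.out.println(store.fetch("my-topic", 0)); // prints 42
    }
}
```

Because the consumer only depends on the interface, swapping the ZooKeeper-backed store for the topic-backed one needs no change to the fetch/commit call sites.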
Dibyendu
On Wed, Apr 1, 2015 at 11:07 PM, Neelesh wrote:
Hi Dibyendu,
Thanks for your work on this project. Spark 1.3 now has direct Kafka
streams, but they still do not provide enough control over partitions and
topics. For example, the streams are fairly statically configured:
RDD.getPartitions() is computed only once, thus making it difficult to use
Hi,
Just to let you know, I have made some enhancements to the Low Level Reliable
Receiver-based Kafka Consumer (
http://spark-packages.org/package/dibbhatt/kafka-spark-consumer).
The earlier version used as many Receiver tasks as there are partitions in your
Kafka topic. Now you can configure the desired
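The enhancement described above, spreading a topic's partitions across a configurable number of receivers instead of launching one receiver task per partition, can be sketched as a simple round-robin assignment. The class and method names below are hypothetical, not the package's actual code.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: map numPartitions topic partitions onto a
// configurable number of receivers, round-robin.
public class ReceiverAssignment {
    /** Returns, for each receiver, the list of partition ids it should fetch. */
    static List<List<Integer>> assign(int numPartitions, int numReceivers) {
        List<List<Integer>> byReceiver = new ArrayList<>();
        for (int r = 0; r < numReceivers; r++) {
            byReceiver.add(new ArrayList<>());
        }
        for (int p = 0; p < numPartitions; p++) {
            byReceiver.get(p % numReceivers).add(p); // round-robin
        }
        return byReceiver;
    }

    public static void main(String[] args) {
        // 8 partitions handled by 3 receivers instead of 8 receiver tasks
        System.out.println(assign(8, 3)); // prints [[0, 3, 6], [1, 4, 7], [2, 5]]
    }
}
```

Each receiver then runs one fetch loop over its assigned partitions, so cluster resource usage no longer scales with the partition count.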