You should be able to recompile the streaming-kafka project against 1.2;
let me know if you run into any issues.
From a usability standpoint, the only relevant thing I can think of that
was added after 1.2 was being able to get the partitionId off of the task
context... you can just use mapPartitionsWithIndex instead.
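E.g., something like this (a quick, untested sketch):

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("partition-id-demo"))

    // mapPartitionsWithIndex hands each partition its index, which is the
    // same value TaskContext.partitionId exposes in later releases.
    val withPartitionIds = sc.parallelize(1 to 100, 4)
      .mapPartitionsWithIndex { (partitionId, records) =>
        records.map(record => (partitionId, record))
      }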
Hi,
You can try this Kafka consumer for Spark, which is also part of Spark
Packages: https://github.com/dibbhatt/kafka-spark-consumer
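Wiring it up looks something like this (an untested sketch from memory of
the README; the ReceiverLauncher API and the property names may have
changed, so please check the repo for the current usage):

    import java.util.Properties
    import org.apache.spark.SparkConf
    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import consumer.kafka.ReceiverLauncher

    val ssc = new StreamingContext(
      new SparkConf().setAppName("kafka-spark-consumer-demo"), Seconds(10))

    // ZooKeeper and topic settings are passed through a Properties object;
    // the exact keys below are illustrative, see the README for real ones.
    val props = new Properties()
    props.put("zookeeper.hosts", "localhost")
    props.put("zookeeper.port", "2181")
    props.put("kafka.topic", "mytopic")
    props.put("kafka.consumer.id", "my-consumer-id")

    val numberOfReceivers = 3
    val unifiedStream =
      ReceiverLauncher.launch(ssc, props, numberOfReceivers, StorageLevel.MEMORY_ONLY)
    unifiedStream.foreachRDD(rdd => println("Records in batch: " + rdd.count()))

    ssc.start()
    ssc.awaitTermination()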
Regards,
Dibyendu
On Thu, Aug 6, 2015 at 6:48 AM, Sourabh Chandak
wrote:
Thanks Tathagata. I tried that, but BlockGenerator internally uses
SystemClock, which is also private.
We are using DSE, so we are stuck with Spark 1.2 and hence can't use the
receiver-less version. Is it possible to use the same code as a separate
API with 1.2?
Thanks,
Sourabh
On Wed, Aug 5, 2015 at 6:13 PM, Tathagata Das wrote:
You could very easily strip out the BlockGenerator code from the Spark
source code and use it directly, in the same way the Reliable Kafka Receiver
uses it. BTW, you should know that we will be deprecating the receiver-based
approach in favor of the Direct Kafka approach. That is quite flexible and
can give exactly-once semantics.
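Concretely, once you have copied BlockGenerator (plus its listener trait
and SystemClock) into your own package, using it would look roughly like
this. This is sketched against the 1.2 code, so adapt it to whatever you
actually copy:

    import scala.collection.mutable.ArrayBuffer
    import org.apache.spark.SparkConf
    import org.apache.spark.storage.StreamBlockId
    // BlockGenerator and BlockGeneratorListener copied verbatim out of
    // org.apache.spark.streaming.receiver into your own package:
    import mypackage.{BlockGenerator, BlockGeneratorListener}

    val listener = new BlockGeneratorListener {
      def onAddData(data: Any, metadata: Any): Unit = {}     // e.g. track offsets
      def onGenerateBlock(blockId: StreamBlockId): Unit = {} // snapshot offsets
      def onPushBlock(blockId: StreamBlockId, buffer: ArrayBuffer[_]): Unit = {
        // Hand the buffered records off to storage, e.g. store(buffer)
        // inside a custom receiver.
      }
      def onError(message: String, throwable: Throwable): Unit = {} // log it
    }

    // 1.2-era constructor: (listener, receiverId, SparkConf)
    val generator = new BlockGenerator(listener, 0, new SparkConf())
    generator.start()
    // For each record pulled from Kafka:
    //   generator.addData(record)
    // On shutdown:
    //   generator.stop()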
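And for reference, once you can move off 1.2, the direct approach
(Spark 1.3+) looks like this (the brokers and topic are placeholders):

    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    val ssc = new StreamingContext(
      new SparkConf().setAppName("direct-kafka-demo"), Seconds(10))

    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092,broker2:9092")
    val topics = Set("mytopic")

    // No receivers and no WAL: each batch reads exact offset ranges from
    // Kafka, which is what makes the exactly-once semantics possible.
    val stream = KafkaUtils.createDirectStream[
      String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics)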