Spark 2.0 has experimental support for Kafka 0.10, and you have to explicitly
declare it in your build, e.g. spark-streaming-kafka-0-10.
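For reference, a minimal sketch of what that build declaration could look like in sbt. The version numbers (Spark 2.0.0, Scala 2.11 via `%%`) are assumptions; pick the ones matching your cluster:

```scala
// build.sbt -- minimal sketch; versions here are assumptions, not prescriptions
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-streaming" % "2.0.0" % "provided",
  // The Kafka 0.10 integration is a separate artifact and must be added explicitly:
  "org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.0.0"
)
```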
On 13 Oct 2016 16:10, "Ben Davison" wrote:
> I *think* Spark 2.0.0 has a Kafka 0.8 consumer, which would still use the
> old Zookeeper method.
>
> Then use the new
Hello,
I am running Kafka 0.9.0.1. I have 1 broker with around 5 topics,
each with 1 partition, and I manage Kafka via the KafkaManager web UI tool.
Periodically something happens with Kafka and it stops seeing topics,
even though the process keeps running (without errors) and the topics are
still available on disk. In order
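When topics stop being visible, one quick check is to query them directly with the stock 0.9 tooling, bypassing KafkaManager. A hedged sketch; the ZooKeeper address, install path, and topic name are assumptions to adjust for your setup:

```shell
# List topics as registered in ZooKeeper (address is an assumption)
bin/kafka-topics.sh --list --zookeeper localhost:2181

# Describe one topic to check its leader and ISR state; "my-topic" is a placeholder
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-topic
```

If the topics show up here but not in KafkaManager, the problem is likely in the UI layer rather than the broker itself.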