Ben, Vasilij, thanks for your answers.
I forgot to mention I use spark-streaming-kafka-0-10.
On 10/13/2016 03:17 PM, Vasilij Syc wrote:
Spark 2.0 has experimental support for Kafka 0.10, and you have to explicitly
define this in your build, e.g. spark-streaming-kafka-0-10.
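For concreteness, the dependency Vasilij mentions could be declared like this in sbt. The artifact name is from the thread; the version (2.0.0) is an assumption matching a Spark 2.0 build and should be adjusted to your Spark version:

```scala
// build.sbt -- sketch only; version 2.0.0 is an assumption,
// pick the one matching your Spark deployment.
libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.0.0"
```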
On 13 Oct 2016 16:10, "Ben Davison" wrote:
I *think* Spark 2.0.0 has a Kafka 0.8 consumer, which would still use the
old Zookeeper method.
To use the new consumer offsets, the consumer needs to be at least Kafka 0.9
compatible.
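As a rough illustration of the distinction (not from the thread; broker address, group id, and deserializers below are placeholders): the new consumer API introduced in Kafka 0.9 is configured with `bootstrap.servers` rather than the old high-level consumer's `zookeeper.connect`, and it is this consumer that commits offsets to the __consumer_offsets topic. A minimal sketch of such a configuration:

```java
import java.util.Properties;

public class Main {
    // Sketch of a new-consumer (Kafka 0.9+) configuration. Using
    // bootstrap.servers (not zookeeper.connect) targets the new consumer,
    // whose committed offsets live in the __consumer_offsets topic.
    // All values are placeholders, not taken from the thread.
    static Properties newConsumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "example-group");           // placeholder group
        // With auto-commit off, offsets are only written when the
        // application commits explicitly (commitSync/commitAsync).
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(newConsumerProps().getProperty("enable.auto.commit")); // prints "false"
    }
}
```

This only builds the configuration; actually consuming would additionally require the kafka-clients library on the classpath.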
On Thu, Oct 13, 2016 at 1:55 PM, Samy Dindane wrote:
Hi,
I use Kafka 0.10 with ZK 3.4.6 and my consumers' offsets aren't stored in the
__consumer_offsets topic but in ZK instead.
That happens whether I let the consumer commit automatically, or commit
manually with enable.auto.commit set to false.
Same behavior with `offsets.storage=kafka`, which