Hi All,

I am using the new Kafka 0.10 Spark Streaming API (spark-streaming-kafka-0-10)
to consume messages from Kafka (brokers running Kafka 0.10.0.1).

I am seeing the following warning continuously, but only when a producer is
writing messages to Kafka in parallel (I have also increased
spark.streaming.kafka.consumer.poll.ms to 1024 ms):

16/09/19 16:44:53 WARN TaskSetManager: Lost task 97.0 in stage 32.0 (TID
4942, host-3): java.lang.AssertionError: assertion failed: Failed to get
records for spark-executor-example topic2 8 1052989 after polling for 1024

At the same time, I see this in the Spark UI for the corresponding job:
topic: topic2    partition: 8    offsets: 1051731 to 1066124
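
For reference, the poll timeout was raised through the SparkConf before the
streaming context was created, roughly like this (only the relevant line is
shown; the conf variable name is just for illustration):

sparkConf.set("spark.streaming.kafka.consumer.poll.ms", "1024")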

Code:

val stream = KafkaUtils.createDirectStream[String, String](
  ssc, PreferConsistent, Subscribe[String, String](topics, kafkaParams.asScala))

stream.foreachRDD { rdd => rdd.filter(_ => false).collect }
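
In case it is relevant, kafkaParams above is a plain java.util.Map (hence the
.asScala); its contents are roughly equivalent to the following (broker host
names are placeholders):

import org.apache.kafka.common.serialization.StringDeserializer

Map[String, Object](
  "bootstrap.servers" -> "broker1:9092,broker2:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "example",
  "auto.offset.reset" -> "latest",
  "enable.auto.commit" -> (false: java.lang.Boolean)
)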


Has anyone encountered this with the new API? Is this the expected
behaviour or am I missing something here?

-- 
Regards
Nitin Goyal
