Sorry for interrupting; I have a quick question about the retry mechanism for 
failed tasks. I'd like to know whether there is a way to specify the interval 
between task retry attempts. I have set spark.task.maxFailures to a relatively 
large number, but because of unstable network conditions, and because failed 
tasks are always retried very quickly (at the millisecond level, as far as I 
can observe), my Spark Streaming job still fails quite frequently once the 
maximum retry count is exhausted. The job receives docs from Kafka, does a bit 
of transformation, and finally sends the updated docs into an Elasticsearch 
cluster.
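
For reference, here is roughly how the relevant part of the job is set up (a 
minimal sketch; the app name, batch interval, and failure count below are 
illustrative, not my exact values):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // Raise the per-task failure tolerance (the default is 4).
    // What I cannot find is a corresponding setting for the delay
    // between the retry attempts themselves.
    val conf = new SparkConf()
      .setAppName("kafka-to-es")            // illustrative name
      .set("spark.task.maxFailures", "20")  // "relatively large" value
    val ssc = new StreamingContext(conf, Seconds(5))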

Thanks,
Harry
