Hi,

A question about Spark Streaming's handling of a failed micro-batch.

After a certain number of task failures, there are no more retries and the
entire batch fails.
What seems to happen next is that the failed batch is skipped and the next
micro-batch begins, which means not all of the data gets processed.
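
For reference, the retry limit in question seems to be the standard
spark.task.maxFailures setting (4 by default). A minimal sketch of the
setup, with placeholder app name and batch interval:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // spark.task.maxFailures controls how often a single task is retried;
    // once one task has failed that many times, the job for the whole
    // micro-batch fails.
    val conf = new SparkConf()
      .setAppName("streaming-failure-question")   // placeholder name
      .set("spark.task.maxFailures", "4")         // the default
    val ssc = new StreamingContext(conf, Seconds(10)) // placeholder interval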

Is there a way to configure a Spark Streaming application not to continue
to the next batch, but instead to stop the streaming context when a
micro-batch fails (after all task retries have been exhausted)?
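
To make the question concrete, here is a sketch of the behaviour I'm after
(the socket source, port and interval are placeholders). My understanding
is that a failed batch job is reported back to the context, so
awaitTermination() rethrows the error on the driver, where the context can
then be stopped:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object StopOnBatchFailure {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("StopOnBatchFailure")
        val ssc = new StreamingContext(conf, Seconds(10))

        val lines = ssc.socketTextStream("localhost", 9999)
        lines.foreachRDD { rdd =>
          rdd.foreach { line =>
            // Simulated per-record work: any exception thrown here, once
            // spark.task.maxFailures retries are exhausted, fails the batch.
            if (line.contains("poison")) sys.error(s"cannot process: $line")
          }
        }

        ssc.start()
        try {
          // A failed batch job should surface here as an exception rather
          // than awaitTermination() blocking forever.
          ssc.awaitTermination()
        } catch {
          case e: Exception =>
            // Stop the streaming context (and the SparkContext) instead of
            // letting the scheduler move on to the next micro-batch.
            ssc.stop(stopSparkContext = true, stopGracefully = false)
            throw e
        }
      }
    }

The part I'm unsure about is the race: by the time the driver catches the
error and calls stop(), the scheduler may already have started the next
batch, so this doesn't strictly guarantee that no further data is consumed.
Hence the question about a proper way to configure this.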


