If the task fails (i.e., you throw the exception rather than swallowing it), it will be retried.
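
A minimal sketch of that pattern, assuming a PostgreSQL-style JDBC sink and a DStream of (id, payload) string pairs; jdbcUrl, the events table, and the upsert SQL are hypothetical placeholders, not anything from your code:

import java.sql.DriverManager

stream.foreachRDD { rdd =>
  rdd.foreachPartition { records =>
    // One connection per partition, opened on the executor.
    val conn = DriverManager.getConnection(jdbcUrl)
    try {
      // Idempotent upsert: safe to replay if the task is retried.
      val stmt = conn.prepareStatement(
        "INSERT INTO events (id, payload) VALUES (?, ?) " +
        "ON CONFLICT (id) DO UPDATE SET payload = EXCLUDED.payload")
      records.foreach { case (id, payload) =>
        stmt.setString(1, id)
        stmt.setString(2, payload)
        stmt.executeUpdate()
      }
    } finally {
      // Close the connection, but do NOT catch write errors here:
      // letting the exception escape fails the task, and Spark
      // re-runs the whole partition.
      conn.close()
    }
  }
}

Spark retries a failed task up to spark.task.maxFailures times (default 4). Because a retry replays every record in the partition, the write should be idempotent (as with the upsert above) or deduplicated downstream; otherwise retries give you duplicates rather than data loss.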

On Thu, Sep 17, 2015 at 4:56 PM, david w <dfw...@gmail.com> wrote:

> I am using Spark Streaming to receive data from Kafka, and then write the
> result RDD to an external database inside foreachPartition(). Everything
> works fine; my question is how we can avoid data loss if there is a
> database connection failure, or another exception happens while writing
> data to external storage. Is there any way to notify Spark Streaming to
> replay that RDD, or some ack mechanism?
>
>
>
