Spark Streaming takes care of restarting receivers if they fail.
Regarding the fault-tolerance properties and deployment options, we have
made some improvements in the upcoming Spark 1.2. Here is a staged
version of the Spark Streaming programming guide with the up-to-date
explanation of streaming fault-tolerance semantics:

http://people.apache.org/~tdas/spark-1.2-temp/
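As a rough sketch of the deployment pattern the guide describes: the framework
restarts failed receivers on its own, but driver recovery requires enabling
checkpointing and constructing the context through
StreamingContext.getOrCreate. The checkpoint path and app name below are
placeholders; the DStream logic is elided.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Hypothetical checkpoint location; use a fault-tolerant store
// (e.g. HDFS or S3) in production, not local disk.
val checkpointDir = "hdfs:///spark/streaming/checkpoints"

def createContext(): StreamingContext = {
  val conf = new SparkConf().setAppName("MyStreamingApp")
  val ssc = new StreamingContext(conf, Seconds(10))
  // Enables driver recovery; receiver restart is handled by Spark itself.
  ssc.checkpoint(checkpointDir)
  // ... define input DStreams and transformations here ...
  ssc
}

// On (re)start, recover from an existing checkpoint if present,
// otherwise build a fresh context with createContext.
val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
ssc.start()
ssc.awaitTermination()
```

With this pattern, a supervising process (e.g. spark-submit with
--supervise on a standalone cluster, or Marathon/YARN restart policies) can
relaunch the driver, which then resumes from the checkpoint rather than
starting from scratch.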

On Thu, Dec 11, 2014 at 4:03 PM, twizansk <twiza...@gmail.com> wrote:
> Hi,
>
> I'm looking for resources and examples for the deployment of spark streaming
> in production.  Specifically, I would like to know how high availability and
> fault tolerance of receivers is typically achieved.
>
> The workers are managed by the spark framework and are therefore fault
> tolerant out of the box but it seems like the receiver deployment and
> management is up to me.  Is that correct?
>
> Thanks
>
>
>
> --
> View this message in context: 
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-in-Production-tp20644.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
