Hi 

I have a couple of questions.

1.- When Spark shuts down or fails, the docs state: "In case of a failure or
intentional shutdown, you can recover the previous progress and state of a
previous query, and continue where it left off."
-To achieve this, do I just need to set the checkpoint directory as an
"option" on my query?
-After recovery, will the "batchId" numbering continue from the same value
it had before the Spark shutdown?
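To make the question concrete, this is roughly what I have in mind (a sketch
only; the format and paths are illustrative, and it assumes an existing
streaming DataFrame `df`):

```scala
// Sketch: is setting "checkpointLocation" like this all that's needed
// for recovery? Paths and the output format here are just examples.
val query = df.writeStream
  .format("parquet")
  .option("checkpointLocation", "/tmp/checkpoints/my-query")  // illustrative path
  .option("path", "/tmp/output")                              // illustrative path
  .start()
```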

2.- Is the "addBatch" method of a Sink executed in parallel? If not, can a
Sink be implemented so that it executes in parallel?
E.g., if batchId-1 arrives and, while it is being processed (addBatch),
batchId-2 arrives, will the two batches be processed in parallel?
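For reference, this is the kind of Sink I mean (a bare-bones sketch against
the Spark 2.x internal Sink API; `MySink` is just a made-up name, and the
body is a placeholder):

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.execution.streaming.Sink

// Illustrative custom sink: my question is whether Spark can invoke
// addBatch for batchId-2 while addBatch for batchId-1 is still running,
// or whether the micro-batches are always handed to the sink serially.
class MySink extends Sink {
  override def addBatch(batchId: Long, data: DataFrame): Unit = {
    // process the micro-batch here (placeholder)
  }
}
```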

Thanks




