Hi,

You're right: killing the Spark Streaming job is the way to go. If a batch
completed successfully, Spark Streaming will recover from the controlled
failure and start where it left off. I don't think there's any other way to
do it.
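
For reference, here is a minimal sketch of the checkpoint-recovery pattern
this relies on (the checkpoint path, app name, and computation are
illustrative, not taken from your job):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object CheckpointedApp {
      // Hypothetical checkpoint directory; use your job's actual path.
      val checkpointDir = "hdfs:///tmp/streaming-checkpoints"

      def createContext(): StreamingContext = {
        val conf = new SparkConf().setAppName("CheckpointedApp")
        val ssc = new StreamingContext(conf, Seconds(1800)) // 30-minute batches
        ssc.checkpoint(checkpointDir)
        // ... define sources and transformations here ...
        ssc
      }

      def main(args: Array[String]): Unit = {
        // On restart, getOrCreate rebuilds the context (including saved
        // offsets) from the checkpoint directory if one exists; otherwise
        // it calls createContext to build a fresh one.
        val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
        ssc.start()
        ssc.awaitTermination()
      }
    }

As long as the job is started through getOrCreate like this, a yarn
application -kill followed by a resubmit picks up from the checkpointed
offsets.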

Regards,
Jacek Laskowski
----
https://about.me/JacekLaskowski
Spark Structured Streaming https://bit.ly/spark-structured-streaming
Mastering Apache Spark 2 https://bit.ly/mastering-apache-spark
Follow me at https://twitter.com/jaceklaskowski

On Wed, Nov 15, 2017 at 5:18 PM, KhajaAsmath Mohammed <
mdkhajaasm...@gmail.com> wrote:

> Hi,
>
> I am new to Spark Streaming. I have developed a Spark Streaming job that
> runs every 30 minutes with a checkpointing directory.
>
> I have to implement a minor change. Shall I kill the Spark Streaming job
> once the batch is completed, using the yarn application -kill command, and
> then update the jar file?
>
> The question I have is: if I follow the above approach, will Spark
> Streaming pick up data from the offset saved in the checkpoint after a
> restart?
>
> Are there any better approaches? Thanks in advance for your suggestions.
>
> Thanks,
> Asmath
>
