Re: Graceful shutdown for Spark Streaming

2015-08-10 Thread Tathagata Das
Note that this is true only from Spark 1.4, where the shutdown hooks were added.
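
For readers following along, here is a minimal sketch of what those Spark 1.4 shutdown hooks enable, assuming the spark.streaming.stopGracefullyOnShutdown configuration key; the app name, master, batch interval, and socket source below are illustrative placeholders, not from the thread:

    // Minimal sketch: ask the Spark 1.4+ shutdown hook to stop the
    // StreamingContext gracefully when the driver JVM receives a signal.
    // Master, source host/port, and batch interval are illustrative.
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object GracefulShutdownDemo {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("graceful-shutdown-demo")
          .setMaster("local[2]")
          // Drain in-flight batches instead of stopping immediately
          // (this setting defaults to false).
          .set("spark.streaming.stopGracefullyOnShutdown", "true")

        val ssc = new StreamingContext(conf, Seconds(10))
        ssc.socketTextStream("localhost", 9999).count().print()

        ssc.start()
        // Block here; the shutdown hook stops the context on JVM exit.
        ssc.awaitTermination()
      }
    }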

Re: Graceful shutdown for Spark Streaming

2015-08-10 Thread Michal Čizmazia
From the logs, it seems that Spark Streaming does handle *kill -SIGINT* with a graceful shutdown. Could you please confirm? Thanks!

Re: Graceful shutdown for Spark Streaming

2015-07-30 Thread anshu shukla
Yes, I was doing the same. If you mean that this is the correct way to do it, then I will verify it once more in my case.

Re: Graceful shutdown for Spark Streaming

2015-07-30 Thread Tathagata Das
How is sleep not working? Are you doing

    streamingContext.start()
    Thread.sleep(xxx)
    streamingContext.stop()
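
Filled out as a small runnable fragment, assuming a StreamingContext ssc like the one set up in the sketch near the top of this thread; the 60-second run time is an arbitrary example:

    // Sketch of the start / sleep / stop pattern discussed here,
    // finishing with a graceful stop. The sleep duration is arbitrary.
    ssc.start()
    Thread.sleep(60 * 1000L)  // let the streaming job run for ~60 seconds
    ssc.stop(stopSparkContext = true, stopGracefully = true)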

Re: Graceful shutdown for Spark Streaming

2015-07-29 Thread anshu shukla
If we want to stop the application after a fixed time period, how will that work? (How do I give the duration in the logic? In my case, sleep(t.s.) is not working.) So I used to kill the CoarseGrained executor process on each slave with a script. Please suggest something.
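
One way to express "stop after a fixed time period" without a bare Thread.sleep is StreamingContext.awaitTerminationOrTimeout. A sketch, again assuming an ssc like the one above; the 10-minute figure is only an example:

    // Run for at most 10 minutes, then stop gracefully. The timeout is
    // illustrative; awaitTerminationOrTimeout returns true if the
    // context was already stopped before the timeout expired.
    ssc.start()
    val stoppedEarly = ssc.awaitTerminationOrTimeout(10 * 60 * 1000L)
    if (!stoppedEarly) {
      ssc.stop(stopSparkContext = true, stopGracefully = true)
    }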

Re: Graceful shutdown for Spark Streaming

2015-07-29 Thread Tathagata Das
StreamingContext.stop(stopGracefully = true) stops the streaming context gracefully. Then you can safely terminate the Spark cluster. They are two different steps and need to be done separately, ensuring that the driver process has completely terminated before the Spark cluster itself is terminated.
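
A sketch of those two separate steps as they might look in practice; the standalone-cluster stop script named in the comment is an assumption about the deployment, not something stated in the thread:

    // Step 1: in the driver, stop streaming gracefully (drains queued
    // batches), then tear down the SparkContext as well.
    ssc.stop(stopSparkContext = true, stopGracefully = true)
    // Step 2: only after the driver process has fully exited, shut the
    // cluster down out-of-band, e.g. with sbin/stop-all.sh on a
    // standalone cluster or by releasing the YARN/Mesos resources.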