Hi

How can I ensure, in Spark Streaming 1.3 with Kafka, that when the
application is killed, the last running batch is fully processed and the
offsets are written to the checkpoint directory?

On Fri, Aug 7, 2015 at 8:56 AM, Shushant Arora <shushantaror...@gmail.com>
wrote:

> Hi
>
> I am using Spark Streaming 1.3 with a custom checkpoint to save Kafka
> offsets.
>
> 1. Is doing
>
> Runtime.getRuntime().addShutdownHook(new Thread() {
>   @Override
>   public void run() {
>     jssc.stop(true, true);  // stop gracefully, also stopping the SparkContext
>     System.out.println("Inside Add Shutdown Hook");
>   }
> });
>
> a safe way to handle the stop?
>
> 2. Do I also need to handle saving the checkpoint in the shutdown hook, or
> will the driver handle it automatically, since it gracefully stops the
> stream and waits for the foreachRDD function on the stream to complete?
> directKafkaStream.foreachRDD(new Function<JavaRDD<byte[][]>, Void>() {
>   @Override
>   public Void call(JavaRDD<byte[][]> rdd) { /* process, then save offsets */ return null; }
> });
>
> Thanks
>
>
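For reference, the ordering concern here can be shown without Spark at all. The sketch below is a minimal, Spark-free analogue of the pattern in question: a worker thread stands in for the streaming job, committing an offset only after each "batch" finishes, and a shutdown hook (like one calling `jssc.stop(true, true)`) signals a graceful stop and waits for the in-flight batch before the JVM exits. The class name, batch size, and offset counter are illustrative assumptions, not part of any Spark API.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

public class GracefulShutdownSketch {
    static final AtomicBoolean stopped = new AtomicBoolean(false);
    static final AtomicLong committedOffset = new AtomicLong(0);

    public static void main(String[] args) throws InterruptedException {
        // Worker thread stands in for the streaming job: each loop
        // iteration is one "batch" of 10 records, and the offset is
        // committed only after the batch's work is fully done.
        Thread worker = new Thread(() -> {
            long offset = 0;
            while (!stopped.get()) {
                offset += 10;                 // process one batch
                committedOffset.set(offset);  // commit AFTER processing
            }
        });
        worker.start();

        // The hook mirrors jssc.stop(true, true): signal a graceful stop,
        // then wait for the in-flight batch to finish before the JVM exits,
        // so the committed offset never points into a half-processed batch.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            stopped.set(true);
            try {
                worker.join();
            } catch (InterruptedException ignored) {
            }
            System.out.println("last committed offset: " + committedOffset.get());
        }));

        Thread.sleep(100);   // let a few batches run
        System.exit(0);      // triggers the hook; state stays consistent
    }
}
```

Because the hook joins the worker before printing, the reported offset is always a whole-batch boundary (a multiple of 10 in this toy), which is exactly the guarantee the original question is after for the real checkpoint directory.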
