Unfortunately, you will have to write that code yourself.
TD
On Tue, Oct 20, 2015 at 11:28 PM, varun sharma wrote:
Hi TD,
Is there any way in Spark to fail/retry a batch in case of any exceptions,
or do I have to write code to explicitly keep retrying?
Also, if some batch fails, I want to block further batches from being
processed, as that would create inconsistency in updating the ZooKeeper
offsets and maybe kill the
That is actually a bug in the UI that was fixed in 1.5.1. The batch
actually completes with an exception; the UI just does not update correctly.
On Tue, Oct 20, 2015 at 8:38 AM, varun sharma wrote:
> Also, as you can see from the timestamps in the attached image, batches coming
> after the Cassandra server c
Hi TD,
Yes, saveToCassandra throws an exception. How do I fail that task
explicitly if I catch the exception?
Right now that batch doesn't fail and remains in a hung state. Is there any
way to fail that batch so that it can be tried again?
Thanks
Varun
On Tue, Oct 20, 2015 at 2:50 AM, Tathagata Das wrote:
If Cassandra is down, does saveToCassandra throw an exception? If it does,
you can catch that exception and write your own logic to retry and/or skip
the update. Once the foreachRDD function completes, the batch will be
internally marked as completed.
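A minimal sketch of that catch-and-retry logic, assuming Scala. The helper below is illustrative (its name, the attempt count, and the delay are not part of any Spark or connector API); the actual write call would be the connector's saveToCassandra, shown only in the usage comment:

```scala
import scala.util.control.NonFatal

// Hypothetical retry wrapper: runs `saveBatch` up to `maxAttempts` times,
// sleeping `delayMs` between failed attempts. Returns true on success,
// false if every attempt threw. Because it swallows the final exception,
// the foreachRDD function completes and the batch is marked done, so the
// caller must inspect the result and decide whether to stop the job.
def writeWithRetry(maxAttempts: Int, delayMs: Long)(saveBatch: () => Unit): Boolean = {
  var attempt = 0
  var succeeded = false
  while (!succeeded && attempt < maxAttempts) {
    attempt += 1
    try {
      saveBatch()
      succeeded = true
    } catch {
      case NonFatal(_) if attempt < maxAttempts => Thread.sleep(delayMs)
      case NonFatal(_)                          => // out of attempts; report failure
    }
  }
  succeeded
}

// In the streaming job it would wrap the connector call inside foreachRDD,
// e.g. (keyspace/table names illustrative):
//
// stream.foreachRDD { rdd =>
//   val ok = writeWithRetry(maxAttempts = 3, delayMs = 5000) { () =>
//     rdd.saveToCassandra("ks", "events")
//   }
//   // Stop the context if the write never succeeded, so later batches are
//   // not processed against inconsistent ZooKeeper offsets.
//   if (!ok) ssc.stop(stopSparkContext = true, stopGracefully = false)
// }
```

Failing the batch outright (rethrowing after the retries) would also work, but as discussed above the batch is then marked as completing with an exception, so blocking subsequent batches still has to be handled explicitly.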
TD
On Mon, Oct 19, 2015 at 5:48 AM, varun sharma