Hi all,
We have a web application that connects to a Spark cluster to trigger
calculations there. It also caches a large amount of data in the Spark
executors' cache.
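For context, a rough sketch of the kind of caching involved; the app name,
master URL, and input path below are made up for illustration:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    object CacheWarmup {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("web-app-backend")         // hypothetical app name
          .setMaster("spark://master-host:7077") // hypothetical master URL
        val sc = new SparkContext(conf)

        // Load the big dataset once and keep it in executor memory so that
        // later requests from the web application hit the cached blocks.
        val data = sc.textFile("hdfs:///data/big-input") // hypothetical path
          .persist(StorageLevel.MEMORY_ONLY)

        // Force materialization so the cache is populated up front.
        data.count()
      }
    }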
To meet high-availability requirements we need to run two instances of our
web application on different hosts. Doing this straightfo

Hi Andrew,
The behavior I see now is that, under the hood, it tries to reconnect
endlessly. While this lasts, the thread that tries to fire a new job is
blocked in JobWaiter.awaitResult() and is never released.
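As a stopgap we are considering running the blocking action on a separate
thread with a bounded wait, so at least the submitting thread is freed; a
rough sketch (the action and timeout are illustrative, and the background
thread may of course still stay blocked):

    import java.util.concurrent.TimeoutException
    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._
    import org.apache.spark.SparkContext

    object SafeSubmit {
      // Run the Spark action off the caller thread and give up after a
      // bound instead of blocking forever in JobWaiter.awaitResult().
      // Only the caller is freed; the job thread itself may stay stuck.
      def countWithTimeout(sc: SparkContext): Option[Long] = {
        val job = Future { sc.parallelize(1 to 100).count() } // placeholder action
        try {
          Some(Await.result(job, 60.seconds))
        } catch {
          case _: TimeoutException => None // treat as "master unreachable"
        }
      }
    }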
The full stacktrace for spark-1.0.2 is:
"jmsContainer-7" prio=10 tid=0x7f18f
Hi all,
I am running a standalone Spark cluster with a single master. No HA or
failover is configured explicitly (no ZooKeeper, etc.).
What is the designed default behavior for submitting new jobs when the
single master goes down or becomes unreachable?
I couldn't find it documented anywhere.
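(I am aware of the ZooKeeper-based recovery mode described in the standalone
docs, where an application lists all masters in its URL and fails over
between them; a minimal sketch with made-up host names follows. My question,
though, is about the default, non-HA setup.)

    import org.apache.spark.{SparkConf, SparkContext}

    object FailoverClient {
      def main(args: Array[String]): Unit = {
        // With ZooKeeper recovery enabled on the masters, the client may
        // name all of them; it registers with whichever is currently the
        // leader. Host names here are hypothetical.
        val conf = new SparkConf()
          .setAppName("failover-test")
          .setMaster("spark://master1:7077,master2:7077")
        val sc = new SparkContext(conf)
        println(sc.parallelize(1 to 10).sum())
      }
    }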
Thanks