Hi everyone,

I've been experimenting with Spark and am somewhat of a newbie. I was
wondering whether there is any way to use a custom cluster manager
implementation with Spark. As I understand it, the built-in modes
currently supported are standalone, Mesos, and YARN. My requirement is a
simple clustering solution with high availability of the master. I don't
want to run a separate ZooKeeper cluster, since that would complicate my
deployment; instead, I would like to use something like Hazelcast, which
has a peer-to-peer cluster coordination implementation.

I found that there is already a JIRA [1] requesting a custom persistence
engine, I guess for storing state information. So what I would want to do
is use Hazelcast for leader election, to promote an existing standby node
to master, and to look up the state information from distributed memory.
I'd appreciate any help on how to achieve this. If it is useful for a
wider audience, I would hopefully like to contribute this back to the
project.
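
To make it a bit more concrete, here is a rough sketch of what I have in
mind. The trait below is only my guess at the shape of the master's
persistence hook (names like PersistenceEngineLike are illustrative, not
the actual Spark API), and the Hazelcast calls assume the 3.x API where
getLock is still available:

```scala
import com.hazelcast.core.HazelcastInstance
import scala.collection.JavaConverters._
import scala.reflect.ClassTag

// Illustrative stand-in for the master's persistence hook; the real
// Spark interface may differ in names and signatures.
trait PersistenceEngineLike {
  def persist(name: String, obj: AnyRef): Unit
  def unpersist(name: String): Unit
  def read[T: ClassTag](prefix: String): Seq[T]
}

// Keep the master's recovery state (apps, drivers, workers) in a
// Hazelcast distributed map instead of ZooKeeper. Stored objects would
// need to be serializable by Hazelcast.
class HazelcastPersistenceEngine(hz: HazelcastInstance) extends PersistenceEngineLike {
  private val state = hz.getMap[String, AnyRef]("spark-master-state")

  override def persist(name: String, obj: AnyRef): Unit = state.put(name, obj)

  override def unpersist(name: String): Unit = state.remove(name)

  override def read[T: ClassTag](prefix: String): Seq[T] =
    state.entrySet().asScala.toSeq
      .filter(_.getKey.startsWith(prefix))
      .map(_.getValue.asInstanceOf[T])
}

// Leader election via a Hazelcast distributed lock (Hazelcast 3.x style):
// whichever master instance acquires the lock becomes the active master,
// the others remain in standby until the holder goes away.
class HazelcastLeaderElection(hz: HazelcastInstance, onElected: () => Unit) {
  private val lock = hz.getLock("spark-master-leader")

  def start(): Unit = new Thread(new Runnable {
    override def run(): Unit = {
      lock.lock() // blocks until this node wins the election
      onElected() // promote this node to active master
    }
  }).start()
}
```

The idea is that the standby masters just join the same Hazelcast cluster,
so no extra coordination service is needed beyond the Spark master nodes
themselves.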

[1] https://issues.apache.org/jira/browse/SPARK-1180

Cheers,
Anjana.
