Has anybody run into this high-availability problem with ZooKeeper?
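One thing worth checking is whether the master is actually persisting recovery state into ZooKeeper. A diagnostic sketch using the stock `zkCli.sh` client that ships with ZooKeeper, pointed at the directory set via spark.deploy.zookeeper.dir in the config below (the znode names shown are what Spark's standalone recovery typically creates, so treat them as assumptions and just `ls` what is actually there):

```shell
# Connect to one member of the ZooKeeper ensemble from the config below
bin/zkCli.sh -server host1:2181

# Inside the zkCli shell, list what Spark wrote under its recovery dir.
# With ZooKeeper recovery enabled there should be children holding the
# leader-election data and the persisted worker/app state.
ls /spark
ls /spark/master_status
```

If /spark is empty or missing, the masters never picked up the SPARK_DAEMON_JAVA_OPTS, so there is nothing for host2 to recover on failover. Note also that only *running* applications, drivers, and workers are persisted for recovery; the completed-applications list is in-memory UI state of the old master, so it is expected that it does not appear on host2.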

2014-09-12 10:34 GMT+08:00 jason chen <pydisc...@gmail.com>:

> Hi guys,
>
> I configured Spark with the configuration in spark-env.sh:
> export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER
> -Dspark.deploy.zookeeper.url=host1:2181,host2:2181,host3:2181
> -Dspark.deploy.zookeeper.dir=/spark"
>
> And I started spark-shell on one master host1(active):
> MASTER=spark://host1:7077,host2:7077 bin/spark-shell
>
> I ran stop-master.sh on host1 and then opened the host2 web UI. The worker
> successfully re-registered with the new master host2, but the running
> application, and even the completed applications, show nothing. Did I miss
> anything when configuring Spark HA?
>
> Thanks !
