You can run multiple Spark clusters against one ZK cluster.   Just use this
config to set independent ZK roots for each cluster:

     spark.deploy.zookeeper.dir
     The directory in ZooKeeper to store recovery state (default: /spark).
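For example (a sketch only — the ZooKeeper hosts and the per-cluster directory names below are made up for illustration), each cluster's masters would point at the same ZK ensemble but at a different recovery root via SPARK_DAEMON_JAVA_OPTS in conf/spark-env.sh:

     # conf/spark-env.sh on the masters of cluster A
     SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
       -Dspark.deploy.zookeeper.url=zk1:2181,zk2:2181,zk3:2181 \
       -Dspark.deploy.zookeeper.dir=/spark-clusterA"

     # conf/spark-env.sh on the masters of cluster B
     SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
       -Dspark.deploy.zookeeper.url=zk1:2181,zk2:2181,zk3:2181 \
       -Dspark.deploy.zookeeper.dir=/spark-clusterB"

Each set of masters then keeps its recovery state under its own znode path, so the two clusters' leader election and worker/app state never collide.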

-Jeff


From: Sean Owen <so...@cloudera.com>
To: Akhil Das <ak...@sigmoidanalytics.com>
Cc: Michal Klos <michal.klo...@gmail.com>, User <user@spark.apache.org>
Date: Wed, 22 Apr 2015 11:05:46 +0100
Subject: Re: Multiple HA spark clusters managed by 1 ZK cluster?
Not that I've tried it, but why couldn't you use one ZK cluster? I
don't see a reason.

On Wed, Apr 22, 2015 at 7:40 AM, Akhil Das <ak...@sigmoidanalytics.com>
wrote:
> It isn't mentioned anywhere in the doc, but you will probably need a
> separate ZK for each of your HA clusters.
>
> Thanks
> Best Regards
