Hi Abel and Krishna,
You shouldn't have to do any manual rsync'ing. If you're using HDP, then
you can just change the configs through Ambari. As for passing the assembly
jar to all executor nodes, the Spark on YARN code automatically uploads the
jar to the distributed cache (HDFS), and all executors fetch it from there.
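For example, a typical submission from the gateway node might look like this (the class name, jar paths, and HDFS locations below are placeholders, not anything from this thread):

```shell
# Submit to YARN in cluster mode; Spark uploads the assembly jar to HDFS
# (the YARN distributed cache) so every executor node can fetch it
# automatically -- no manual copying needed.
./bin/spark-submit \
  --master yarn-cluster \
  --class com.example.MyApp \
  --num-executors 4 \
  /path/to/my-app.jar

# Optionally, pre-upload the assembly once so it isn't re-uploaded on
# every submission. Spark 1.0.x reads the SPARK_JAR environment variable
# for this (later 1.x releases also accept a spark.yarn.jar property):
#   hdfs dfs -put lib/spark-assembly-*.jar /user/spark/share/lib/
#   export SPARK_JAR=hdfs:///user/spark/share/lib/spark-assembly-1.0.1-hadoop2.4.0.jar
```

With the assembly pinned to an HDFS path like this, each job only localizes the jar from HDFS to the executors' node-local cache rather than shipping it from the client every time.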
Abel,
I rsync the spark-1.0.1 directory to all the nodes. Then, whenever the
configuration changes, I rsync only the conf directory.
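For reference, that workflow can be scripted roughly like this (the host names and install path are illustrative assumptions, not from this thread):

```shell
#!/bin/sh
# Push the Spark build to every worker node, then keep conf in sync.
# Adjust SPARK_HOME and the host list for your cluster.
SPARK_HOME=/opt/spark-1.0.1
HOSTS="node1 node2 node3"

# Full sync of the Spark directory (run once per new build):
for host in $HOSTS; do
  rsync -az --delete "$SPARK_HOME/" "$host:$SPARK_HOME/"
done

# After a configuration change, only conf/ needs to be re-synced:
for host in $HOSTS; do
  rsync -az "$SPARK_HOME/conf/" "$host:$SPARK_HOME/conf/"
done
```

The `--delete` flag on the full sync keeps stale jars from lingering on the workers; it is deliberately left off the conf-only sync so node-specific files are preserved.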
Cheers
On Wed, Jul 9, 2014 at 2:06 PM, Abel Coronado Iruegas <
acoronadoirue...@gmail.com> wrote:
Hi everybody
We have a Hortonworks cluster with many nodes, and we want to test a
deployment of Spark. What's the recommended path to follow?
I mean, we can compile the sources on the Name Node, but I don't really
understand how to pass the executable jar and configuration to the rest of
the nodes.
Thanks