All the basic parameters apply to both client and cluster mode. The only
difference is that in cluster mode the driver runs inside the
cluster, and there are some *additional* parameters to configure
that. The other params are common. Isn't that clear from the docs?
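By way of illustration (a sketch, not from the original thread; placeholders like <master-host> and your-app.jar are mine), the cluster-mode-only knobs are mostly about the driver process, since in client mode the driver just runs inside the spark-submit JVM:

```shell
# Client mode: driver runs locally in the spark-submit JVM,
# so only executor resources need configuring.
./bin/spark-submit \
  --master spark://<master-host>:7077 \
  --deploy-mode client \
  --executor-memory 4G \
  your-app.jar

# Cluster mode: the driver is launched inside the cluster, so the
# *additional* parameters size (and optionally supervise) the driver.
./bin/spark-submit \
  --master spark://<master-host>:7077 \
  --deploy-mode cluster \
  --driver-memory 2G \
  --driver-cores 1 \
  --supervise \
  your-app.jar
```

(--driver-cores and --supervise apply to standalone cluster mode; other managers have their own equivalents.)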
Thanks a lot! But in client mode, can we assign the number of workers/nodes
as a flag parameter to the spark-submit command?
And by default, how will it distribute the load across the nodes?
# Run on a Spark Standalone cluster in client deploy mode
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://<master-host>:7077 \
  --executor-memory 20G \
  --total-executor-cores 100 \
  /path/to/examples.jar \
  1000
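On the question of capping how much of the cluster one application takes (a hedged sketch; which flag applies depends on the cluster manager, and the hostnames/jar names here are placeholders): you cannot pin an app to particular worker nodes via spark-submit, but you can cap its total resources:

```shell
# Standalone / Mesos: cap the total cores the app may use cluster-wide
./bin/spark-submit \
  --master spark://<master-host>:7077 \
  --total-executor-cores 8 \
  --executor-memory 2G \
  your-app.jar

# YARN: request an explicit number of executors instead
./bin/spark-submit \
  --master yarn-client \
  --num-executors 4 \
  --executor-cores 2 \
  --executor-memory 2G \
  your-app.jar
```

As for the default distribution: on standalone, if spark.cores.max is unset an application grabs all available cores, and the master spreads executors across nodes (spark.deploy.spreadOut defaults to true).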
Depends on which cluster manager you are using. It's all pretty well
documented in the online documentation:
http://spark.apache.org/docs/latest/submitting-applications.html
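For reference, it is the --master URL that selects the cluster manager (forms as of the Spark 1.x docs; hostnames and app.jar are placeholders):

```shell
./bin/spark-submit --master local[4]          app.jar  # local, 4 threads
./bin/spark-submit --master spark://host:7077 app.jar  # Spark standalone
./bin/spark-submit --master mesos://host:5050 app.jar  # Apache Mesos
./bin/spark-submit --master yarn-client       app.jar  # YARN, client mode
```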
On Fri, Jun 19, 2015 at 2:29 PM, anshu shukla wrote:
Hey,
*[For Client Mode]*
1- Is there any way to assign the number of workers from the cluster that
should be used for a particular application?
2- If not, then how does the Spark scheduler decide the scheduling of
different applications inside one full logic?
say my logic has {inputStream >>
wordsplitter-