Cool, thanks for your feedback.
On Thursday, April 3, 2014 7:20 AM, Tom Graves wrote:
Generally the yarn cluster handles propagating and setting HADOOP_CONF_DIR for
any containers it launches, so it should really just be needed on your client node
submitting the applications.
I haven't specifically tried doing what you said, but like you say, Spark
doesn't really expose that configuration.
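Concretely, that means something like the following on the submitting machine
(the path shown is a common default, not necessarily yours):

    export HADOOP_CONF_DIR=/etc/hadoop/conf   # illustrative path
    # ...then launch your Spark application from this shell as usual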
Right thanks, that worked.
My goal is to programmatically submit things to the yarn cluster. The
underlying framework we have is a set of property files that specify different
machines for dev, qe, and prod. While it's definitely possible to deploy a
different etc/hadoop directory on the client for each environment, it would be
cleaner to drive this from our property files.
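One hedged way to wire that up: read the Hadoop conf dir out of the
per-environment property file and launch spark-submit as a child process with
HADOOP_CONF_DIR set. Everything here is a sketch; the hadoop.conf.dir key, the
file names, and the spark-submit flags are assumptions, not something the
thread prescribes.

    import java.io.FileInputStream
    import java.util.Properties

    // Sketch: pick the Hadoop conf dir from a per-environment property
    // file (dev/qe/prod) and hand it to spark-submit via the environment.
    object EnvAwareSubmit {
      def main(args: Array[String]): Unit = {
        val Array(propsFile, appJar) = args  // e.g. "dev.properties" and your app jar (hypothetical)

        val props = new Properties()
        val in = new FileInputStream(propsFile)
        try props.load(in) finally in.close()

        // "hadoop.conf.dir" is a made-up key naming this environment's conf directory
        val confDir = props.getProperty("hadoop.conf.dir")

        // Only the child process needs HADOOP_CONF_DIR; the cluster
        // propagates it to the containers it launches.
        val pb = new ProcessBuilder("spark-submit", "--master", "yarn-client", appJar)
        pb.environment().put("HADOOP_CONF_DIR", confDir)
        pb.inheritIO()  // stream the child's stdout/stderr through
        sys.exit(pb.start().waitFor())
      }
    }

The point of the child-process route is that each submission gets its own
HADOOP_CONF_DIR without touching the parent JVM's environment, so dev, qe, and
prod submissions can coexist on one client.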
You should just be making sure your HADOOP_CONF_DIR env variable is correct and
not setting yarn.resourcemanager.address in SparkConf. For Yarn/Hadoop you
need to point Spark at the configuration files for your cluster. Generally that
setting goes into yarn-site.xml. If just setting the env variable doesn't work,
let me know.
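For reference, that yarn-site.xml entry looks roughly like this (the host is a
placeholder; 8032 is the usual ResourceManager port):

    <property>
      <name>yarn.resourcemanager.address</name>
      <value>rm.example.com:8032</value>
    </property>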