Hi Andrew,
I'm actually using spark-submit, and I tried using
spark.executor.extraJavaOpts to configure the Tachyon client to connect to
the Tachyon HA master, but the configuration settings were not picked up.
On the other hand, when I set the same Tachyon configuration parameters
through SPARK_JAVA_OPTS, they were picked up and the connection worked.
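For reference, this is roughly the spark-submit invocation where the
settings were not picked up (the ZooKeeper hosts are placeholders and the
class/jar names are just examples):

spark-submit \
  --conf "spark.executor.extraJavaOptions=-Dtachyon.zookeeper.address=zookeeperHost1:2181,zookeeperHost2:2181" \
  --conf "spark.driver.extraJavaOptions=-Dtachyon.zookeeper.address=zookeeperHost1:2181,zookeeperHost2:2181" \
  --class com.example.MyApp myapp.jar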
Well, even before spark-submit the standard way of setting spark
configurations is to create a new SparkConf, set the values in the conf,
and pass this to the SparkContext in your application. It's true that this
involves "hard-coding" these configurations in your application, but these
configurations…
thanks for the detailed answer andrew. that's helpful.
i think the main thing that's bugging me is that there is no simple way for
an admin to always set something on the executors for a production
environment (an akka timeout comes to mind). yes, i could use
spark-defaults for that, although that…
for a jvm application it's not very appealing to me to use spark-submit.
my application uses hadoop, so i should use "hadoop jar", and my
application uses spark, so it should use "spark-submit". if i add a piece
of code that uses some other system there will be yet another suggested way
to launch it.
Hi Koert and Lukasz,
The recommended way of not hard-coding configurations in your application
is through conf/spark-defaults.conf as documented here:
http://spark.apache.org/docs/latest/configuration.html#dynamically-loading-spark-properties.
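As a minimal sketch (the values here are only illustrative), a
conf/spark-defaults.conf on the machine you submit from might look like:

spark.master        spark://master:7077
spark.akka.timeout  100

spark-submit reads this file at launch time, so the application itself
doesn't need to hard-code those settings.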
However, this is only applicable to spark-submit, so…
still struggling with SPARK_JAVA_OPTS being deprecated. i am using spark
standalone.
for example, i have an akka timeout setting that i would like to be
applied to every piece of the spark framework (the spark master, spark
workers, spark executor sub-processes, spark-shell, etc.). i used to do
this with SPARK_JAVA_OPTS.
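the closest i can come up with now is to split it across two places,
roughly like this (assuming SPARK_DAEMON_JAVA_OPTS still covers the
standalone master and workers, and spark-defaults.conf covers the driver
and executors):

# conf/spark-env.sh - java opts for the standalone master and worker daemons
SPARK_DAEMON_JAVA_OPTS="-Dspark.akka.timeout=100"

# conf/spark-defaults.conf - read by spark-submit for the driver and executors
spark.akka.timeout  100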
Hi,
I tried to use SPARK_JAVA_OPTS in spark-env.sh as well as the conf/java-opts
file to set additional java system properties. In this case I could connect
to tachyon without any problem.
However, when I tried setting the executor and driver extraJavaOptions in
spark-defaults.conf, it didn't work.
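Concretely, this is roughly what worked (the ZooKeeper hosts are
placeholders):

# conf/spark-env.sh
SPARK_JAVA_OPTS="-Dtachyon.zookeeper.address=zookeeperHost1:2181,zookeeperHost2:2181"

and this is roughly what was not picked up:

# conf/spark-defaults.conf
spark.executor.extraJavaOptions  -Dtachyon.zookeeper.address=zookeeperHost1:2181,zookeeperHost2:2181
spark.driver.extraJavaOptions    -Dtachyon.zookeeper.address=zookeeperHost1:2181,zookeeperHost2:2181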
I suspect…
Hi,
I'm facing a similar problem.
According to http://tachyon-project.org/Running-Spark-on-Tachyon.html,
in order to allow the tachyon client to connect to the tachyon master in HA
mode you need to pass two system properties:
-Dtachyon.zookeeper.address=zookeeperHost1:2181,zookeeperHost2:2181
-Dtachyon.use…
Just wondering - how are you launching your application? If you want
to set values like this, the right way is to add them to the SparkConf
when you create a SparkContext.
val conf = new SparkConf()
  .set("spark.akka.frameSize", "1")
  .setAppName(...)
  .setMaster(...)
val sc = new SparkContext(conf)
hey patrick,
i have a SparkConf i can add them to. i was looking for a way to do this
where they are not hardwired within scala, which is what SPARK_JAVA_OPTS
used to do.
i guess if i just set -Dspark.akka.frameSize=1 on my java app launch
then it will get picked up by the SparkConf too, right?
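i.e. something like this, if i understand correctly that a plain new
SparkConf() picks up any spark.* java system properties by default (class
and jar names below are just examples):

java -Dspark.akka.frameSize=1 -cp myapp.jar com.example.MyApp

// inside the app, no explicit set(...) needed for that property:
val conf = new SparkConf().setAppName("myapp")
val sc = new SparkContext(conf)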