unsubscribing myself from the list.
Sophia
Email: sln-1...@163.com
(Signature customized with NetEase Mail Master)
With yarn-client mode, I submit a job from the client to YARN. The
spark-env.sh file:
export HADOOP_HOME=/usr/lib/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
SPARK_EXECUTOR_INSTANCES=4
SPARK_EXECUTOR_CORES=1
SPARK_EXECUTOR_MEMORY=1G
SPARK_DRIVER_MEMORY=2G
SPARK_YARN_APP_NAME="Spar
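For reference, on Spark 0.9.x a yarn-client session is typically started by exporting SPARK_JAR and setting MASTER=yarn-client; the sketch below makes that explicit (the jar path and config directory are assumptions, not taken from the original post):

```shell
# Sketch for Spark 0.9.x yarn-client mode; paths are assumptions.
# The driver runs on the client machine, while the executor settings from
# spark-env.sh above (instances/cores/memory) are requested from YARN.
export HADOOP_CONF_DIR=/usr/lib/hadoop/etc/hadoop
SPARK_JAR=assembly/target/scala-2.10/spark-assembly_2.10-0.9.1-hadoop2.2.0.jar \
MASTER=yarn-client \
./bin/spark-shell
```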
When I run Spark on CDH5 (Cloudera) with the service spark-master start
command, it turns out that the Spark master is dead and a pid file exists.
What can I do to solve the problem?
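When the init script reports "dead and pid file exists", the usual cause is a stale pid file left over from a crashed master. A minimal sketch of the cleanup logic, assuming a CDH-style pid path (the exact path varies by install):

```shell
#!/bin/sh
# remove_stale_pid FILE: delete FILE if the pid it records is no longer
# running. A stale pid file blocks the init script from starting a new master.
remove_stale_pid() {
    pid_file=$1
    if [ -f "$pid_file" ] && ! kill -0 "$(cat "$pid_file")" 2>/dev/null; then
        rm -f "$pid_file"
    fi
}

# Hypothetical usage; /var/run/spark/... is an assumed CDH location:
# remove_stale_pid /var/run/spark/spark-master.pid
# service spark-master start
```

Also check the master log (typically under /var/log/spark on CDH installs) for the real crash reason, since the pid file is only a symptom.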
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/spark-is-dead-and-pid-file-exists-tp68
Thank you
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/yarn-client-mode-question-tp6213p6224.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
But I don't understand this point: is it necessary to deploy a Spark slave
node on the YARN nodes?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/yarn-client-mode-question-tp6213p6216.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
In yarn-client mode, will Spark be deployed on the YARN nodes? If it is
deployed only on the client, can Spark still submit the job to YARN?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/yarn-client-mode-question-tp6213.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
How did you deal with this problem in the end? I have also run into it.
Best regards,
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/SparkContext-startup-time-out-tp1753p5739.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
How did you deal with this problem? I have run into it these days. God bless
me.
Best regards,
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/SparkContext-startup-time-out-tp1753p5738.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
I have tried to look at the logs, but the log4j.properties does not take effect. What should I do?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/log4j-question-tp412p5471.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
My configuration is as follows; the slave nodes have been configured, but I
do not know what has happened to Shark. Can you help me, sir?
shark-env.sh
export SPARK_USER_HOME=/root
export SPARK_MEM=2g
export SCALA_HOME="/root/scala-2.11.0-RC4"
export SHARK_MASTER_MEM=1g
export HIVE_CONF_DIR="/usr/
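One likely culprit in the settings above: Spark and Shark 0.9.x are built against Scala 2.10, so a SCALA_HOME pointing at 2.11.0-RC4 will not work. A sketch with placeholder paths (the 2.10.x install location and the Hive conf directory are assumptions):

```shell
# shark-env.sh sketch -- Shark 0.9.x requires Scala 2.10.x, not 2.11.0-RC4.
export SPARK_USER_HOME=/root
export SPARK_MEM=2g
export SCALA_HOME="/root/scala-2.10.3"      # assumed 2.10.x install path
export SHARK_MASTER_MEM=1g
export HIVE_CONF_DIR="/usr/lib/hive/conf"   # placeholder path
```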
Hi
Why do I always encounter remoting errors:
akka.remote.RemoteTransportException and
java.util.concurrent.TimeoutException?
Best Regards,
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/build-shark-hadoop-CDH5-on-hadoop2-0-0-CDH4-tp5574p5629.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
When I run the Shark command line, it ends up like this, and I never see the
"shark>" prompt. What can I do? The log:
-
Starting the Shark Command Line Client
14/05/12 16:32:49 WARN conf.Configuration: mapred.max.split.size is
deprecated. Instead, use mapreduc
I have built Shark with sbt, but this sbt exception turns up:
[error] sbt.ResolveException: unresolved dependency:
org.apache.hadoop#hadoop-client;2.0.0: not found
What can I do to build it correctly?
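For what it's worth, a bare hadoop-client 2.0.0 was never published to Maven Central; CDH builds carry version strings like 2.0.0-cdh4.2.1 and live in Cloudera's repository. A build-file sketch under that assumption (the repository URL and version string should be checked against your CDH release):

```scala
// sbt build sketch: resolve hadoop-client from Cloudera's repo with the
// full CDH version string instead of the bare "2.0.0".
resolvers += "Cloudera Repository" at
  "https://repository.cloudera.com/artifactory/cloudera-repos/"

libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.0.0-cdh4.2.1"
```

If you build Shark through its bundled sbt script, the same idea applies: pass the full CDH version through the build's Hadoop-version setting rather than plain 2.0.0.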
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/build-shark-ha
Hi,everyone,
[root@CHBM220 spark-0.9.1]# SPARK_JAR=.assembly/target/scala-2.10/spark-assembly_2.10-0.9.1-hadoop2.2.0.jar \
    ./bin/spark-class org.apache.spark.deploy.yarn.Client \
    --jar examples/target/scala-2.10/spark-examples_2.10-assembly-0.9.1.jar \
    --class org.apache.spark.examples.SparkPi \
    --args yarn-standalone
I have tried to look at the logs, but the log4j.properties does not take
effect. What should I do to see the running logs?
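When log4j.properties is being ignored, it usually is not on the classpath; Spark reads it from the conf/ directory. A minimal sketch of such a file (generic log4j 1.2 settings, not the exact file Spark 0.9.1 ships):

```properties
# conf/log4j.properties -- minimal console logging sketch
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
```

For jobs running on YARN (Hadoop 2.x), the executor output ends up in the container logs, which can be fetched with yarn logs -applicationId <appId> once the application finishes.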
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/log4j-question-tp412p5472.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
I have modified it in spark-env.sh, but it turns out that it does not work. So
confused.
Best Regards
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/If-it-due-to-my-file-has-been-breakdown-tp5438p5442.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Hi all,
[root@sophia spark-0.9.1]# SPARK_JAR=.assembly/target/scala-2.10/spark-assembly_2.10-0.9.1-hadoop2.2.0.jar \
    ./bin/spark-class org.apache.spark.deploy.yarn.Client \
    --jar examples/target/scala-2.10/spark-examples_2.10-assembly-0.9.1.jar \
    --class org.apache.spark.examples.SparkPi \
    --args yarn
Hi all,
#./sbt/sbt assembly
Launching sbt from sbt/sbt-launch-0.12.4.jar
Invalid or corrupt jarfile sbt/sbt-launch-0.12.4.jar
Why can't I run sbt?
Best regards,
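"Invalid or corrupt jarfile" for sbt-launch almost always means the launcher jar was only partially downloaded. A re-fetch-and-verify sketch (the Typesafe URL is an assumption for sbt 0.12.4; adjust to wherever your tree fetches it from):

```shell
# Re-download the sbt launcher, then confirm it is a readable zip/jar
# before invoking sbt/sbt again.
curl -fL -o sbt/sbt-launch-0.12.4.jar \
  "https://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt-launch/0.12.4/sbt-launch.jar"

# A valid jar is a zip archive; unzip -t fails fast on a truncated file.
unzip -t sbt/sbt-launch-0.12.4.jar >/dev/null && echo "launcher looks OK"
```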
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/How-can-I-run-sbt-tp5429.html
Sent from the
Hi all,
I have made HADOOP_CONF_DIR (or YARN_CONF_DIR) point to the directory which
contains the (client-side) configuration files for the Hadoop cluster.
The command I run to launch the YARN Client is:
#
SPARK_JAR=./~/spark-0.9.1/assembly/target/scala-2.10/spark-assembly_2.10-0.9
Hey you guys,
What is the difference between Spark on YARN mode and standalone mode in
terms of resource scheduling?
Wish you happy everyday.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/different-in-spark-on-yarn-mode-and-standalone-mode-tp5300.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
It may be caused by this. I use the CDH4 version, and I will try to configure
HADOOP_HOME. Thank you.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/the-spark-configuage-tp5098p5299.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Hi,
When I configure Spark and run the shell instruction ./bin/spark-shell, it
told me:
WARN NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
When it connects to the ResourceManager, it stops. What should I do?
Awaiting your reply
Hi, I am Sophia.
I followed a blog from the Internet to configure and test Spark on YARN,
against Hadoop 2.0.0-CDH4. The Spark version is 0.9.1, and the Scala version
is 2.11.0-RC4.
cd spark-0.9.1
SPARK_HADOOP_VERSION=2.0.0-cdh4.2.1 SPARK_YARN=true sbt/sbt assembly
This cannot work: Invalid