Re: About a problem running a spark job in a cdh-5.7.0 vmware image.

2016-06-06 Thread Alonso Isidoro Roman
Hi, just to update the thread, I have just submitted a simple word-count job to YARN using this command: [cloudera@quickstart simple-word-count]$ spark-submit --class com.example.Hello --master yarn --deploy-mode cluster --driver-memory 1024Mb --executor-memory 1G --executor-cores 1 target/scala-
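For reference, a minimal sketch of the cluster-mode submission described above. The jar filename and Scala version suffix are placeholders (the original path is truncated in the archive), and the memory sizes use the `1g`/`1024m` style that the Spark documentation shows:

```shell
# Sketch of a yarn cluster-mode submission (jar path is a placeholder;
# the original message is truncated before the full path).
spark-submit \
  --class com.example.Hello \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 1g \
  --executor-memory 1g \
  --executor-cores 1 \
  target/scala-2.10/simple-word-count_2.10-1.0.jar
```

In cluster mode the driver runs inside a YARN container, so its stdout/stderr end up in the YARN application logs rather than in the submitting shell.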

Re: About a problem running a spark job in a cdh-5.7.0 vmware image.

2016-06-06 Thread Mich Talebzadeh
Have you tried master local? That should work. This works as a test: ${SPARK_HOME}/bin/spark-submit \ --driver-memory 2G \ --num-executors 1 \ --executor-memory 2G \ --master local[2] \ --executor-cores 2 \
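The flattened command above can be sketched as a complete invocation; the class name and jar path are placeholders (the original message is cut off before them):

```shell
# Local-mode test run: driver and executors all inside one local JVM,
# using 2 worker threads. Class and jar names are placeholders.
${SPARK_HOME}/bin/spark-submit \
  --driver-memory 2G \
  --executor-memory 2G \
  --master "local[2]" \
  --class com.example.Hello \
  target/scala-2.10/simple-word-count_2.10-1.0.jar
```

Note that `--num-executors` and `--executor-cores` from the quoted command have no effect in local mode; parallelism comes from the thread count in `local[2]`.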

Re: About a problem running a spark job in a cdh-5.7.0 vmware image.

2016-06-06 Thread Alonso Isidoro Roman
Hi guys, I finally understand that I cannot use sbt-pack to run the spark-streaming job programmatically as Unix commands; I have to use YARN or Mesos in order to run the jobs. I have some doubts: if I run the spark streaming jobs in yarn client mode, I am receiving this exception: [cloudera@qu
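For context, a sketch of what a yarn client-mode submission looks like (the streaming class and jar names here are hypothetical, since the original message truncates before the exception text):

```shell
# yarn client mode: the driver runs in this shell, so driver-side
# exceptions print directly to the submitting console.
# Class name and jar path are placeholders.
spark-submit \
  --class com.example.StreamingJob \
  --master yarn \
  --deploy-mode client \
  --executor-memory 1g \
  --executor-cores 1 \
  target/scala-2.10/streaming-job.jar
```

Because the driver stays on the submitting machine in client mode, this mode is often easier to debug than cluster mode, where the same stack trace would only appear in the YARN container logs.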

Re: About a problem running a spark job in a cdh-5.7.0 vmware image.

2016-06-04 Thread Mich Talebzadeh
Hi, Spark works in local, standalone and yarn-client mode. Start with master = local; that is the simplest model. You do NOT need to start $SPARK_HOME/sbin/start-master.sh and $SPARK_HOME/sbin/start-slaves.sh. Also you do not need to specify all that in spark-submit. In the Scala code you can do val
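The truncated `val` above presumably refers to building a SparkConf in code. A minimal sketch of the local-master setup being suggested (the app name is a placeholder):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Simplest model: set the master in code and run everything in-process.
// No standalone master or worker daemons need to be started.
val conf = new SparkConf()
  .setAppName("simple-word-count")   // placeholder app name
  .setMaster("local[2]")             // two local worker threads
val sc = new SparkContext(conf)
```

With the master set this way, the job can be launched with a bare `spark-submit` (or even from an IDE) without any `--master` flag.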

Re: About a problem running a spark job in a cdh-5.7.0 vmware image.

2016-06-04 Thread Alonso Isidoro Roman
onfiguration at runtime." David Newberger. From: Alonso Isidoro Roman [mailto:alons...@gmail.com] Sent: Friday, June 3, 2016 10:37 AM To: David Newberger Cc: user@spark.apache.org Subject: Re: About a problem running

RE: About a problem running a spark job in a cdh-5.7.0 vmware image.

2016-06-03 Thread David Newberger
a problem running a spark job in a cdh-5.7.0 vmware image. Thank you David, so, I would have to change the way that I am creating the SparkConf object, isn't it? I can see in this link<http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_running_spark_on_yarn.html#concept_ysw_

Re: About a problem running a spark job in a cdh-5.7.0 vmware image.

2016-06-03 Thread Alonso Isidoro Roman
Thank you David, so, I would have to change the way that I am creating the SparkConf object, isn't it? I can see in this link that the way to run a spark job using YARN is using this kin
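One common way to make that change to the SparkConf, assuming the problem is a hardcoded master in code: leave the master out entirely so that spark-submit's `--master yarn` (or the CDH VM's default) controls where the job runs. A sketch, with a placeholder app name:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Do not call setMaster here; let spark-submit (--master yarn) supply it.
// This keeps the same jar runnable in local, standalone, or YARN mode.
val conf = new SparkConf().setAppName("simple-word-count") // placeholder name
val sc = new SparkContext(conf)
```

A master hardcoded with `setMaster(...)` in code takes precedence over the command-line flag, which is a frequent source of "why is my job not on YARN" confusion.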

RE: About a problem running a spark job in a cdh-5.7.0 vmware image.

2016-06-03 Thread David Newberger
Alonso, The CDH VM uses YARN and the default deploy mode is client. I’ve been able to use the CDH VM for many learning scenarios. http://www.cloudera.com/documentation/enterprise/latest.html http://www.cloudera.com/documentation/enterprise/latest/topics/spark.html David Newberger From: Alonso