Hi Jared,

You can launch a Spark application on YARN even with just a single
node, provided that the node has enough resources to run the job.

It might also be worth noting that when YARN calculates the memory
allocation for the driver and the executors, an additional memory
overhead is added to each container, and the result is then rounded up
to the nearest GB (YARN's minimum allocation, IIRC). So the 4G
driver memory + 4 x 2G executor memory do not translate to a total
allocation of 12G; it will be more than that, and the node needs more
than 12G of memory for the job to run in YARN. If it doesn't have
enough, you should see something like "No resources available in
cluster.." in the application master logs in YARN.
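
As a rough back-of-the-envelope check (just a sketch, assuming Spark's
default overhead of max(384MB, 10% of the requested memory) and YARN's
default yarn.scheduler.minimum-allocation-mb of 1024; your configs may
differ):

// Sketch of YARN's per-container sizing under the default
// assumptions above.
static int containerSizeMb(int requestedMb, int minAllocMb) {
    // Overhead: at least 384 MB, or 10% of the request.
    int overheadMb = Math.max(384, (int) (requestedMb * 0.10));
    int totalMb = requestedMb + overheadMb;
    // Round up to the nearest multiple of the minimum allocation.
    return ((totalMb + minAllocMb - 1) / minAllocMb) * minAllocMb;
}

// containerSizeMb(4096, 1024) -> 5120 MB for the 4g driver
// containerSizeMb(2048, 1024) -> 3072 MB for each 2g executor
// Total: 5120 + 4 * 3072 = 17408 MB, i.e. ~17G rather than 12G.

So on a single ~12G node you would want to scale the request down,
e.g. something like --driver-memory 1g --executor-memory 1g
--num-executors 2, until all the containers fit.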

HTH,
Deng

On Tue, Jul 5, 2016 at 4:31 PM, Yu Wei <yu20...@hotmail.com> wrote:

> Hi guys,
>
> I set up a pseudo-distributed Hadoop/YARN cluster on my laptop.
>
> I wrote a simple Spark Streaming program as below to receive messages
> with MQTTUtils:
>
> SparkConf conf = new SparkConf().setAppName("Monitor&Control");
> JavaStreamingContext jssc = new JavaStreamingContext(conf,
> Durations.seconds(1));
> JavaReceiverInputDStream<String> inputDS = MQTTUtils.createStream(jssc,
> brokerUrl, topic);
>
> inputDS.print();
> jssc.start();
> jssc.awaitTermination();
>
> If I submit the app with "--master local[4]", it works well:
>
> spark-submit --master local[4] --driver-memory 4g --executor-memory 2g
> --num-executors 4 target/CollAna-1.0-SNAPSHOT.jar
>
> If I submit it with "--master yarn", there is no output from
> "inputDS.print()":
>
> spark-submit --master yarn --deploy-mode cluster --driver-memory 4g
> --executor-memory 2g --num-executors 4 target/CollAna-1.0-SNAPSHOT.jar
>
> Is it possible to launch a Spark application on YARN with only a
> single node?
>
>
> Thanks for your advice.
>
>
> Jared
>