Have you followed this? http://spark.apache.org/docs/latest/spark-standalone.html

It sounds more like no workers are registered with your master, so no executors can be launched for your app and no resources are available.
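If it helps, a minimal sanity check for standalone mode, assuming default ports and that SPARK_HOME is set on the cluster machines (the master host below is a placeholder):

# On the master node: start the master; it logs the spark://host:7077
# URL that workers and drivers should use.
$SPARK_HOME/sbin/start-master.sh

# On each worker node: register the worker against that master URL.
$SPARK_HOME/sbin/start-slave.sh spark://<master-host>:7077

# Then check the master web UI (port 8080 by default): every worker
# should be listed as ALIVE with free cores and memory.

If no workers show up there, the "Initial job has not accepted any resources" warning below is exactly what you'd see.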

On 04.09.16 at 05:34, kant kodali wrote:
I don't think my driver program, which runs on my local machine, can connect to the worker/executor machines, because the Spark UI lists private IPs for the workers. I can connect to the master from the driver because of the setting export SPARK_PUBLIC_DNS="52.44.36.224". I'm really not sure how to fix this or what I am missing.

Any help would be great.
Thanks!
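One possible direction, sketched below as a per-worker spark-env.sh. This assumes each worker instance has its own public DNS name and that the AWS security group opens the worker and executor ports to the driver machine; the EC2 metadata lookup is illustrative, hard-coding each worker's public address works just as well.

# On EACH worker machine (not only the master), advertise that
# machine's own public address, so the UI and the driver see a
# hostname they can actually reach.
export SPARK_PUBLIC_DNS="$(curl -s http://169.254.169.254/latest/meta-data/public-hostname)"

# Pin the worker port so it can be opened in the security group
# (the specific port number here is just an example).
export SPARK_WORKER_PORT=9000

The driver also has to be reachable from the cluster, since executors connect back to it, so spark.driver.host (and an open driver port) may need the same treatment on the local machine's side.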



On Sat, Sep 3, 2016 5:39 PM, kant kodali <kanth...@gmail.com> wrote:

    Hi Guys,

    I am running my driver program on my local machine and my Spark
    cluster is on AWS. The big question is: what are the right
    settings to get around this public vs. private IP issue on AWS?
    My spark-env.sh currently has the following lines:

    export SPARK_PUBLIC_DNS="52.44.36.224"
    export SPARK_WORKER_CORES=12
    export SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=4"
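
    As a side note, an app can also cap what it requests explicitly
    through spark.cores.max instead of relying on
    spark.deploy.defaultCores. A minimal spark-submit sketch (the jar
    and class names here are placeholders):

    # Request at most 4 cores cluster-wide for this app; if even that
    # cannot be satisfied, the scheduler keeps logging the "has not
    # accepted any resources" warning shown below.
    spark-submit \
      --master spark://52.44.36.224:7077 \
      --conf spark.cores.max=4 \
      --class com.example.Consumer \
      my-app.jar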

    I am seeing the lines below when I run my driver program on my
    local machine. I'm not sure what is going on.



    16/09/03 17:32:15 INFO DAGScheduler: Submitting 50 missing tasks
    from ShuffleMapStage 0 (MapPartitionsRDD[1] at start at
    Consumer.java:41)
    16/09/03 17:32:15 INFO TaskSchedulerImpl: Adding task set 0.0 with
    50 tasks
    16/09/03 17:32:30 WARN TaskSchedulerImpl: Initial job has not
    accepted any resources; check your cluster UI to ensure that
    workers are registered and have sufficient resources
    16/09/03 17:32:45 WARN TaskSchedulerImpl: Initial job has not
    accepted any resources; check your cluster UI to ensure that
    workers are registered and have sufficient resources
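
    For reference, the standalone master web UI also serves a JSON
    summary (at /json on the UI port, in the versions I've used),
    which makes it easy to check from the driver machine whether any
    workers are registered and how many cores are free:

    # 8080 is the default master web UI port.
    curl -s http://52.44.36.224:8080/json
    # In the output, check the "workers" array and compare the
    # "cores" and "coresused" fields.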

