It looks like there is an issue with your cluster setup. Can you paste your
conf/spark-env.sh and conf/slaves files here?
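
For reference, a minimal single-machine standalone setup usually looks
something like the following (the worker sizing values are illustrative
assumptions, adjust them to your machine):

conf/spark-env.sh:

    export SPARK_MASTER_IP=localhost
    # assumed sizing for a laptop, not required settings:
    export SPARK_WORKER_CORES=2
    export SPARK_WORKER_MEMORY=1g

conf/slaves:

    localhost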

Your job runs fine because you set the master inside the job to local[*],
which runs it in local mode (not in standalone cluster mode).
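
If you want the --master flag you pass to spark-submit to be the one that
takes effect, one option (just a sketch, assuming you always launch through
spark-submit) is to leave the master out of the code entirely:

    final SparkConf conf = new SparkConf()
            .setAppName("simple-count")
            .set("spark.eventLog.enabled", "true");
    // No setMaster() call here: spark-submit --master spark://localhost:7077
    // supplies spark.master, so the same jar can run locally or on the cluster.
    final JavaSparkContext sc = new JavaSparkContext(conf);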



Thanks
Best Regards

On Mon, May 4, 2015 at 7:26 PM, James Carman <ja...@carmanconsulting.com>
wrote:

> I have the following simple example program:
>
> import java.util.Arrays;
>
> import org.apache.spark.SparkConf;
> import org.apache.spark.api.java.JavaRDD;
> import org.apache.spark.api.java.JavaSparkContext;
>
> public class SimpleCount {
>
>     public static void main(String[] args) {
>         // fall back to local[*] when no spark.master property is set
>         final String master = System.getProperty("spark.master", "local[*]");
>         System.out.printf("Running job against spark master %s ...%n", master);
>
>         final SparkConf conf = new SparkConf()
>                 .setAppName("simple-count")
>                 .setMaster(master)
>                 .set("spark.eventLog.enabled", "true");
>         final JavaSparkContext sc = new JavaSparkContext(conf);
>
>         JavaRDD<Integer> rdd =
>                 sc.parallelize(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10));
>
>         long n = rdd.count();
>
>         System.out.printf("I counted %d integers.%n", n);
>
>         // stop the context so the driver disconnects cleanly
>         sc.stop();
>     }
> }
>
> I start a local master:
>
> export SPARK_MASTER_IP=localhost
>
> sbin/start-master.sh
>
>
> Then, I start a local worker:
>
>
> bin/spark-class org.apache.spark.deploy.worker.Worker -h localhost
> spark://localhost:7077
>
>
>
> When I run the example application:
>
>
> bin/spark-submit --class com.cengage.analytics.SimpleCount  --master
> spark://localhost:7077
> ~/IdeaProjects/spark-analytics/target/spark-analytics-1.0-SNAPSHOT.jar
>
>
> It finishes just fine (and even counts the right number :).  However, I
> get the following log statements in the master's log file:
>
>
> 15/05/04 09:54:14 INFO Master: Registering app simple-count
>
> 15/05/04 09:54:14 INFO Master: Registered app simple-count with ID
> app-20150504095414-0009
>
> 15/05/04 09:54:14 INFO Master: Launching executor
> app-20150504095414-0009/0 on worker worker-20150504095401-localhost-55806
>
> 15/05/04 09:54:17 INFO Master: akka.tcp://sparkDriver@jamess-mbp:55939
> got disassociated, removing it.
>
> 15/05/04 09:54:17 INFO Master: Removing app app-20150504095414-0009
>
> 15/05/04 09:54:17 WARN ReliableDeliverySupervisor: Association with remote
> system [akka.tcp://sparkDriver@jamess-mbp:55939] has failed, address is
> now gated for [5000] ms. Reason is: [Disassociated].
>
> 15/05/04 09:54:17 INFO LocalActorRef: Message
> [akka.remote.transport.ActorTransportAdapter$DisassociateUnderlying] from
> Actor[akka://sparkMaster/deadLetters] to
> Actor[akka://sparkMaster/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FsparkMaster%40127.0.0.1%3A55948-17#800019242]
> was not delivered. [18] dead letters encountered. This logging can be
> turned off or adjusted with configuration settings 'akka.log-dead-letters'
> and 'akka.log-dead-letters-during-shutdown'.
>
> 15/05/04 09:54:17 INFO SecurityManager: Changing view acls to: jcarman
>
> 15/05/04 09:54:17 INFO SecurityManager: Changing modify acls to: jcarman
>
> 15/05/04 09:54:17 INFO SecurityManager: SecurityManager: authentication
> disabled; ui acls disabled; users with view permissions: Set(jcarman);
> users with modify permissions: Set(jcarman)
>
> 15/05/04 09:54:17 INFO Master: akka.tcp://sparkDriver@jamess-mbp:55939
> got disassociated, removing it.
>
> 15/05/04 09:54:17 WARN EndpointWriter: AssociationError
> [akka.tcp://sparkMaster@localhost:7077] ->
> [akka.tcp://sparkWorker@localhost:51252]: Error [Invalid address:
> akka.tcp://sparkWorker@localhost:51252] [
>
> akka.remote.InvalidAssociation: Invalid address:
> akka.tcp://sparkWorker@localhost:51252
>
> Caused by: akka.remote.transport.Transport$InvalidAssociationException:
> Connection refused: localhost/127.0.0.1:51252
>
> ]
>
> 15/05/04 09:54:17 WARN Remoting: Tried to associate with unreachable
> remote address [akka.tcp://sparkWorker@localhost:51252]. Address is now
> gated for 5000 ms, all messages to this address will be delivered to dead
> letters. Reason: Connection refused: localhost/127.0.0.1:51252
>
> 15/05/04 09:54:17 INFO Master: akka.tcp://sparkWorker@localhost:51252 got
> disassociated, removing it.
>
> 15/05/04 09:54:17 WARN EndpointWriter: AssociationError
> [akka.tcp://sparkMaster@localhost:7077] ->
> [akka.tcp://sparkWorker@jamess-mbp:50071]: Error [Invalid address:
> akka.tcp://sparkWorker@jamess-mbp:50071] [
>
> akka.remote.InvalidAssociation: Invalid address:
> akka.tcp://sparkWorker@jamess-mbp:50071
>
> Caused by: akka.remote.transport.Transport$InvalidAssociationException:
> Connection refused: jamess-mbp/192.168.1.45:50071
>
> ]
>
> 15/05/04 09:54:17 WARN Remoting: Tried to associate with unreachable
> remote address [akka.tcp://sparkWorker@jamess-mbp:50071]. Address is now
> gated for 5000 ms, all messages to this address will be delivered to dead
> letters. Reason: Connection refused: jamess-mbp/192.168.1.45:50071
>
> 15/05/04 09:54:17 INFO Master: akka.tcp://sparkWorker@jamess-mbp:50071
> got disassociated, removing it.
>
> 15/05/04 09:54:17 INFO RemoteActorRefProvider$RemoteDeadLetterActorRef:
> Message [org.apache.spark.deploy.DeployMessages$ApplicationFinished] from
> Actor[akka://sparkMaster/user/Master#-1247271270] to
> Actor[akka://sparkMaster/deadLetters] was not delivered. [19] dead letters
> encountered. This logging can be turned off or adjusted with configuration
> settings 'akka.log-dead-letters' and
> 'akka.log-dead-letters-during-shutdown'.
>
> 15/05/04 09:54:17 INFO RemoteActorRefProvider$RemoteDeadLetterActorRef:
> Message [org.apache.spark.deploy.DeployMessages$ApplicationFinished] from
> Actor[akka://sparkMaster/user/Master#-1247271270] to
> Actor[akka://sparkMaster/deadLetters] was not delivered. [20] dead letters
> encountered. This logging can be turned off or adjusted with configuration
> settings 'akka.log-dead-letters' and
> 'akka.log-dead-letters-during-shutdown'.
>
> 15/05/04 09:54:17 WARN Master: Got status update for unknown executor
> app-20150504095414-0009/0
>
>
>
> And, in the worker's console window I see:
>
> 15/05/04 09:54:14 INFO Worker: Asked to launch executor
> app-20150504095414-0009/0 for simple-count
>
> Spark assembly has been built with Hive, including Datanucleus jars on
> classpath
>
> 15/05/04 09:54:14 INFO ExecutorRunner: Launch command: "java" "-cp"
> "::/Users/jcarman/Downloads/spark-1.2.2-bin-hadoop2.4/conf:/Users/jcarman/Downloads/spark-1.2.2-bin-hadoop2.4/lib/spark-assembly-1.2.2-hadoop2.4.0.jar:/Users/jcarman/Downloads/spark-1.2.2-bin-hadoop2.4/lib/datanucleus-api-jdo-3.2.6.jar:/Users/jcarman/Downloads/spark-1.2.2-bin-hadoop2.4/lib/datanucleus-core-3.2.10.jar:/Users/jcarman/Downloads/spark-1.2.2-bin-hadoop2.4/lib/datanucleus-rdbms-3.2.9.jar"
> "-Dspark.driver.port=55939" "-Xms512M" "-Xmx512M"
> "org.apache.spark.executor.CoarseGrainedExecutorBackend"
> "akka.tcp://sparkDriver@jamess-mbp:55939/user/CoarseGrainedScheduler" "0"
> "localhost" "8" "app-20150504095414-0009" "akka.tcp://sparkWorker@localhost
> :55806/user/Worker"
>
> 15/05/04 09:54:17 INFO Worker: Asked to kill executor
> app-20150504095414-0009/0
>
> 15/05/04 09:54:17 INFO ExecutorRunner: Runner thread for executor
> app-20150504095414-0009/0 interrupted
>
> 15/05/04 09:54:17 INFO ExecutorRunner: Killing process!
>
> 15/05/04 09:54:17 INFO Worker: Executor app-20150504095414-0009/0 finished
> with state KILLED exitStatus 1
>
> 15/05/04 09:54:17 INFO Worker: Cleaning up local directories for
> application app-20150504095414-0009
>
> 15/05/04 09:54:17 WARN ReliableDeliverySupervisor: Association with remote
> system [akka.tcp://sparkExecutor@localhost:55971] has failed, address is
> now gated for [5000] ms. Reason is: [Disassociated].
>
> 15/05/04 09:54:17 INFO LocalActorRef: Message
> [akka.remote.transport.ActorTransportAdapter$DisassociateUnderlying] from
> Actor[akka://sparkWorker/deadLetters] to
> Actor[akka://sparkWorker/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FsparkWorker%40127.0.0.1%3A55973-2#371831611]
> was not delivered. [1] dead letters encountered. This logging can be turned
> off or adjusted with configuration settings 'akka.log-dead-letters' and
> 'akka.log-dead-letters-during-shutdown'.
>
>
> For such a simple example, should I be getting warnings like this?  Am I
> setting up my local cluster incorrectly?
>
