Hello,

Running the Spark examples fails on one machine, but the exact same
commands succeed in a virtual machine with the same Spark and Java
versions installed.

Has anyone faced the same problem? Any tips on a solution?

Thanks in advance.

*Spark version*: spark-1.3.1-bin-hadoop2.4
*Java version*:
java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)

When I execute the SparkPi example using this command:

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master local lib/spark-examples*.jar 4

the job fails with the following log:

15/06/08 19:03:37 INFO spark.SparkContext: Running Spark version 1.3.1
15/06/08 19:03:37 WARN util.Utils: Your hostname, edadashov-wsl resolves to
a loopback address: 127.0.1.1; using 10.0.53.59 instead (on interface em1)
15/06/08 19:03:37 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind
to another address
15/06/08 19:03:37 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
15/06/08 19:03:37 INFO spark.SecurityManager: Changing view acls to:
edadashov
15/06/08 19:03:37 INFO spark.SecurityManager: Changing modify acls to:
edadashov
15/06/08 19:03:37 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(edadashov); users with modify permissions: Set(edadashov)
15/06/08 19:03:37 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/06/08 19:03:37 INFO Remoting: Starting remoting
15/06/08 19:03:38 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkdri...@edadashov-wsl.internal.salesforce.com:46368]
15/06/08 19:03:38 INFO util.Utils: Successfully started service
'sparkDriver' on port 46368.
15/06/08 19:03:38 INFO spark.SparkEnv: Registering MapOutputTracker
15/06/08 19:03:38 INFO spark.SparkEnv: Registering BlockManagerMaster
15/06/08 19:03:38 INFO storage.DiskBlockManager: Created local directory at
/tmp/spark-0d14f274-6724-4ead-89f2-8ff1975d6e72/blockmgr-2b418147-083f-4bc9-9375-25066f3a2495
15/06/08 19:03:38 INFO storage.MemoryStore: MemoryStore started with
capacity 265.1 MB
15/06/08 19:03:38 INFO spark.HttpFileServer: HTTP File server directory is
/tmp/spark-e1576f6e-9aa7-4102-92e8-d227e9a00ff6/httpd-1f2ca961-d98e-4f19-9591-874dd74c833f
15/06/08 19:03:38 INFO spark.HttpServer: Starting HTTP Server
15/06/08 19:03:38 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/06/08 19:03:38 INFO server.AbstractConnector: Started
SocketConnector@0.0.0.0:47487
15/06/08 19:03:38 INFO util.Utils: Successfully started service 'HTTP file
server' on port 47487.
15/06/08 19:03:38 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/06/08 19:03:38 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/06/08 19:03:38 INFO server.AbstractConnector: Started
SelectChannelConnector@0.0.0.0:4040
15/06/08 19:03:38 INFO util.Utils: Successfully started service 'SparkUI'
on port 4040.
15/06/08 19:03:38 INFO ui.SparkUI: Started SparkUI at
http://edadashov-wsl.internal.salesforce.com:4040
15/06/08 19:03:38 INFO spark.SparkContext: Added JAR
file:/home/edadashov/tools/spark-1.3.1-bin-cdh4/lib/spark-examples-1.3.1-hadoop2.0.0-mr1-cdh4.2.0.jar
at
http://10.0.53.59:47487/jars/spark-examples-1.3.1-hadoop2.0.0-mr1-cdh4.2.0.jar
with timestamp 1433815418402
15/06/08 19:03:38 INFO executor.Executor: Starting executor ID <driver> on
host localhost
15/06/08 19:03:38 INFO util.AkkaUtils: Connecting to HeartbeatReceiver:
akka.tcp://
sparkdri...@edadashov-wsl.internal.salesforce.com:46368/user/HeartbeatReceiver
15/06/08 19:03:38 INFO netty.NettyBlockTransferService: Server created on
41404
15/06/08 19:03:38 INFO storage.BlockManagerMaster: Trying to register
BlockManager
15/06/08 19:03:38 INFO storage.BlockManagerMasterActor: Registering block
manager localhost:41404 with 265.1 MB RAM, BlockManagerId(<driver>,
localhost, 41404)
15/06/08 19:03:38 INFO storage.BlockManagerMaster: Registered BlockManager
15/06/08 19:03:38 INFO spark.SparkContext: Starting job: reduce at
SparkPi.scala:35
15/06/08 19:03:38 INFO scheduler.DAGScheduler: Got job 0 (reduce at
SparkPi.scala:35) with 4 output partitions (allowLocal=false)
15/06/08 19:03:38 INFO scheduler.DAGScheduler: Final stage: Stage 0(reduce
at SparkPi.scala:35)
15/06/08 19:03:38 INFO scheduler.DAGScheduler: Parents of final stage:
List()
15/06/08 19:03:38 INFO scheduler.DAGScheduler: Missing parents: List()
15/06/08 19:03:38 INFO scheduler.DAGScheduler: Submitting Stage 0
(MapPartitionsRDD[1] at map at SparkPi.scala:31), which has no missing
parents
15/06/08 19:03:38 INFO scheduler.TaskSchedulerImpl: Cancelling stage 0
15/06/08 19:03:38 INFO scheduler.DAGScheduler: Stage 0 (reduce at
SparkPi.scala:35) failed in Unknown s
15/06/08 19:03:38 INFO scheduler.DAGScheduler: Job 0 failed: reduce at
SparkPi.scala:35, took 0.063253 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.lang.reflect.InvocationTargetException
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:422)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:68)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:79)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:29)
org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
org.apache.spark.SparkContext.broadcast(SparkContext.scala:1051)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:839)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
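The trace points at org.apache.spark.io.CompressionCodec$.createCodec, which by default constructs the Snappy codec. Just a guess on my part, but to test whether Snappy's native-library loading is what differs between the two machines, the codec can be switched to LZF at submit time:

# Diagnostic only: forces the LZF codec instead of the default Snappy codec
./bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master local \
    --conf spark.io.compression.codec=lzf \
    lib/spark-examples*.jar 4

If that runs, the problem is likely in snappy-java rather than in Spark itself.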



The Java example also fails:

bin/spark-submit --master local --class org.apache.spark.examples.JavaWordCount lib/spark-examples*.jar README.md

Error log:

15/06/08 19:12:33 INFO spark.SparkContext: Running Spark version 1.3.1
15/06/08 19:12:33 WARN util.Utils: Your hostname, edadashov-wsl resolves to
a loopback address: 127.0.1.1; using 10.0.53.59 instead (on interface em1)
15/06/08 19:12:33 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind
to another address
15/06/08 19:12:33 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
15/06/08 19:12:33 INFO spark.SecurityManager: Changing view acls to:
edadashov
15/06/08 19:12:33 INFO spark.SecurityManager: Changing modify acls to:
edadashov
15/06/08 19:12:33 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(edadashov); users with modify permissions: Set(edadashov)
15/06/08 19:12:33 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/06/08 19:12:33 INFO Remoting: Starting remoting
15/06/08 19:12:34 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkdri...@edadashov-wsl.internal.salesforce.com:48020]
15/06/08 19:12:34 INFO util.Utils: Successfully started service
'sparkDriver' on port 48020.
15/06/08 19:12:34 INFO spark.SparkEnv: Registering MapOutputTracker
15/06/08 19:12:34 INFO spark.SparkEnv: Registering BlockManagerMaster
15/06/08 19:12:34 INFO storage.DiskBlockManager: Created local directory at
/tmp/spark-e37490b5-ddbe-477d-9852-af3fee5d5ca1/blockmgr-98a91fd2-b90c-45db-96c8-87131f1771f5
15/06/08 19:12:34 INFO storage.MemoryStore: MemoryStore started with
capacity 265.1 MB
15/06/08 19:12:34 INFO spark.HttpFileServer: HTTP File server directory is
/tmp/spark-769c573a-c5c9-4f0c-a226-1fc507a552b5/httpd-38f42d18-1c25-47ed-a100-9123939e496a
15/06/08 19:12:34 INFO spark.HttpServer: Starting HTTP Server
15/06/08 19:12:34 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/06/08 19:12:34 INFO server.AbstractConnector: Started
SocketConnector@0.0.0.0:42055
15/06/08 19:12:34 INFO util.Utils: Successfully started service 'HTTP file
server' on port 42055.
15/06/08 19:12:34 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/06/08 19:12:34 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/06/08 19:12:34 INFO server.AbstractConnector: Started
SelectChannelConnector@0.0.0.0:4040
15/06/08 19:12:34 INFO util.Utils: Successfully started service 'SparkUI'
on port 4040.
15/06/08 19:12:34 INFO ui.SparkUI: Started SparkUI at
http://edadashov-wsl.internal.salesforce.com:4040
15/06/08 19:12:34 INFO spark.SparkContext: Added JAR
file:/home/edadashov/tools/spark-1.3.1-bin-hadoop2.4/lib/spark-examples-1.3.1-hadoop2.4.0.jar
at http://10.0.53.59:42055/jars/spark-examples-1.3.1-hadoop2.4.0.jar with
timestamp 1433815954318
15/06/08 19:12:34 INFO executor.Executor: Starting executor ID <driver> on
host localhost
15/06/08 19:12:34 INFO util.AkkaUtils: Connecting to HeartbeatReceiver:
akka.tcp://
sparkdri...@edadashov-wsl.internal.salesforce.com:48020/user/HeartbeatReceiver
15/06/08 19:12:34 INFO netty.NettyBlockTransferService: Server created on
35685
15/06/08 19:12:34 INFO storage.BlockManagerMaster: Trying to register
BlockManager
15/06/08 19:12:34 INFO storage.BlockManagerMasterActor: Registering block
manager localhost:35685 with 265.1 MB RAM, BlockManagerId(<driver>,
localhost, 35685)
15/06/08 19:12:34 INFO storage.BlockManagerMaster: Registered BlockManager
Exception in thread "main" java.lang.IllegalArgumentException
at org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:152)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:68)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:79)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:29)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1051)
at org.apache.spark.SparkContext.hadoopFile(SparkContext.scala:761)
at org.apache.spark.SparkContext.textFile(SparkContext.scala:589)
at org.apache.spark.api.java.JavaSparkContext.textFile(JavaSparkContext.scala:191)
at org.apache.spark.examples.JavaWordCount.main(JavaWordCount.java:45)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
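Since this second trace fails directly in SnappyCompressionCodec.<init>, my suspicion (only an assumption, not verified) is that snappy-java cannot extract its native library into the temp directory on this machine, e.g. because /tmp is mounted noexec. If so, pointing snappy-java at a different directory via its system property might work around it:

# /home/edadashov/tmp is only an example path; any writable,
# exec-allowed directory should do
bin/spark-submit --master local \
    --class org.apache.spark.examples.JavaWordCount \
    --driver-java-options "-Dorg.xerial.snappy.tempdir=/home/edadashov/tmp" \
    lib/spark-examples*.jar README.md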
