spark job is not running on yarn-cluster mode

2016-05-17 Thread spark.raj
Hi friends,
I am running a Spark streaming job in yarn-cluster mode but it is failing. It 
works fine in yarn-client mode, and the spark-examples also run fine in 
yarn-cluster mode. Below is the log file for the Spark streaming job in 
yarn-cluster mode. Can anyone help me with this?

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/tmp/hadoop-hadoop/nm-local-dir/usercache/hadoop/filecache/15/spark-assembly-1.5.2-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/05/17 16:17:47 INFO yarn.ApplicationMaster: Registered signal handlers for 
[TERM, HUP, INT]
16/05/17 16:17:48 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
16/05/17 16:17:48 INFO yarn.ApplicationMaster: ApplicationAttemptId: 
appattempt_1463479181441_0003_02
16/05/17 16:17:49 INFO spark.SecurityManager: Changing view acls to: hadoop
16/05/17 16:17:49 INFO spark.SecurityManager: Changing modify acls to: hadoop
16/05/17 16:17:49 INFO spark.SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users with view permissions: Set(hadoop); users 
with modify permissions: Set(hadoop)
16/05/17 16:17:49 INFO yarn.ApplicationMaster: Starting the user application in 
a separate Thread
16/05/17 16:17:49 INFO yarn.ApplicationMaster: Waiting for spark context 
initialization
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: found keyword== 
userTwitterToken=9ACWejzaHVyxpPDYCHnDsO98U 
01safwuyLO8B8S94v5i0p90SzxEPZqUUmCaDkYOj1FKN1dXKZC 
702828259411521536-PNoSkM8xNIvuEVvoQ9Pj8fj7D8CkYp1 
OntoQStrmwrztnzi1MSlM56sKc23bqUCC2WblbDPiiP8P
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = 
9ACWejzaHVyxpPDYCHnDsO98U 01safwuyLO8B8S94v5i0p90SzxEPZqUUmCaDkYOj1FKN1dXKZC 
702828259411521536-PNoSkM8xNIvuEVvoQ9Pj8fj7D8CkYp1 
OntoQStrmwrztnzi1MSlM56sKc23bqUCC2WblbDPiiP8P
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = 1
16/05/17 16:17:49 INFO yarn.ApplicationMaster: Waiting for spark context 
initialization ... 
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = 2
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = Tue 
May 17 00:00:00 IST 2016
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = Tue 
May 17 00:00:00 IST 2016
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = 
nokia,samsung,iphone,blackberry
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = All
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = mo
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = en
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = 
retweet
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: Twitter 
Token...[Ljava.lang.String;@3ee5e48d
16/05/17 16:17:49 INFO spark.SparkContext: Running Spark version 1.5.2
16/05/17 16:17:49 WARN spark.SparkConf: 
SPARK_JAVA_OPTS was detected (set to '-Dspark.driver.port=53411').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with conf/spark-defaults.conf to set defaults for an 
application
 - ./spark-submit with --driver-java-options to set -X options for a driver
 - spark.executor.extraJavaOptions to set -X options for executors
 - SPARK_DAEMON_JAVA_OPTS to set java options for standalone daemons (master or 
worker)

16/05/17 16:17:49 WARN spark.SparkConf: Setting 
'spark.executor.extraJavaOptions' to '-Dspark.driver.port=53411' as a 
work-around.
16/05/17 16:17:49 WARN spark.SparkConf: Setting 'spark.driver.extraJavaOptions' 
to '-Dspark.driver.port=53411' as a work-around.
16/05/17 16:17:49 INFO spark.SecurityManager: Changing view acls to: hadoop
16/05/17 16:17:49 INFO spark.SecurityManager: Changing modify acls to: hadoop
16/05/17 16:17:49 INFO spark.SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users with view permissions: Set(hadoop); users 
with modify permissions: Set(hadoop)
16/05/17 16:17:49 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/05/17 16:17:49 INFO Remoting: Starting remoting
16/05/17 16:17:50 INFO Remoting: Remoting started; listening on addresses 
:[akka.tcp://sparkDriver@172.16.28.195:53411]
16/05/17 16:17:50 INFO util.Utils: Successfully started service 'sparkDriver' 
on port 53411.
16/05/17 16:17:50 INFO spark.SparkEnv: Registering MapOutputTracker
16/05/17 16:17:50 INFO spark.SparkEnv: Registering BlockManagerMaster
16/05/17 16:17:50 INFO storage.DiskBlockManager: Created local directory at 
/tmp/hadoop-hadoop/nm-local-dir/usercache/hadoop/appcache/

yarn-cluster mode error

2016-05-17 Thread spark.raj
Hi,
I am getting the error below while running an application in yarn-cluster mode.
ERROR yarn.ApplicationMaster: RECEIVED SIGNAL 15: SIGTERM

Can anyone suggest why I am getting this error message?

Thanks
Raj
 

Sent from Yahoo Mail. Get the app

Re: Why does spark 1.6.0 can't use jar files stored on HDFS

2016-05-17 Thread spark.raj
Hi Serega,
Create a jar that includes all the dependencies and submit it as below from a 
shell script:

# /usr/local/spark/bin/spark-submit : path to spark-submit
# --class                           : fully qualified name of your main class
# last argument                     : path to your jar file
/usr/local/spark/bin/spark-submit \
  --class classname \
  --master yarn \
  --deploy-mode cluster \
  /home/hadoop/SparkSampleProgram.jar

Thanks
Raj
 


On Tuesday, May 17, 2016 6:03 PM, Serega Sheypak wrote:
 

Hi, I'm trying to:
1. upload my app jar files to HDFS
2. run spark-submit with:
   2.1. --master yarn --deploy-mode cluster
   or
   2.2. --master yarn --deploy-mode client
specifying --jars hdfs:///my/home/commons.jar,hdfs:///my/home/super.jar

When the Spark job is submitted, the SparkSubmit client outputs:
Warning: Skip remote jar hdfs:///user/baba/lib/akka-slf4j_2.11-2.3.11.jar ...

and then the Spark application main class fails with a ClassNotFoundException. 
Is there any workaround?
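One workaround sketch (the main class name and the local /tmp paths below are placeholders, not details from this thread): fetch the jars out of HDFS first and pass local paths in --jars, letting spark-submit distribute them itself:

```shell
# Hypothetical workaround sketch: pull the jars to the local filesystem,
# then pass local paths instead of hdfs:// URIs in --jars.
hdfs dfs -get /my/home/commons.jar /tmp/commons.jar
hdfs dfs -get /my/home/super.jar /tmp/super.jar

/usr/local/spark/bin/spark-submit \
  --class com.example.Main \
  --master yarn \
  --deploy-mode cluster \
  --jars /tmp/commons.jar,/tmp/super.jar \
  /path/to/your-app.jar
```

Bundling everything into one fat jar, as suggested above, avoids --jars entirely.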

  

Spark Streaming Application run on yarn-cluster mode

2016-05-19 Thread spark.raj
Hi Friends,
Will a Spark streaming job run in yarn-cluster mode?

Thanks
Raj


run multiple spark jobs yarn-client mode

2016-05-25 Thread spark.raj
Hi,
I am running Spark streaming jobs in yarn-client mode. If I run multiple jobs, 
some of them fail with the error message below. Is there any configuration 
missing?
ERROR apache.spark.util.Utils - Uncaught exception in thread main
java.lang.NullPointerException
    at 
org.apache.spark.network.netty.NettyBlockTransferService.close(NettyBlockTransferService.scala:152)
    at org.apache.spark.storage.BlockManager.stop(BlockManager.scala:1228)
    at org.apache.spark.SparkEnv.stop(SparkEnv.scala:100)
    at 
org.apache.spark.SparkContext$$anonfun$stop$12.apply$mcV$sp(SparkContext.scala:1749)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1185)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:1748)
    at org.apache.spark.SparkContext.&lt;init&gt;(SparkContext.scala:593)
    at 
org.apache.spark.streaming.StreamingContext$.createNewSparkContext(StreamingContext.scala:878)
    at 
org.apache.spark.streaming.StreamingContext.&lt;init&gt;(StreamingContext.scala:81)
    at 
org.apache.spark.streaming.api.java.JavaStreamingContext.&lt;init&gt;(JavaStreamingContext.scala:134)
    at 
com.infinite.spark.SparkTweetStreamingHDFSLoad.init(SparkTweetStreamingHDFSLoad.java:212)
    at 
com.infinite.spark.SparkTweetStreamingHDFSLoad.main(SparkTweetStreamingHDFSLoad.java:162)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
INFO  org.apache.spark.SparkContext - Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: Yarn application 
has already ended! It might have been killed or unable to launch application 
master.
    at 
org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:123)
    at 
org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
    at 
org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
    at org.apache.spark.SparkContext.&lt;init&gt;(SparkContext.scala:523)
    at 
org.apache.spark.streaming.StreamingContext$.createNewSparkContext(StreamingContext.scala:878)
    at 
org.apache.spark.streaming.StreamingContext.&lt;init&gt;(StreamingContext.scala:81)
    at 
org.apache.spark.streaming.api.java.JavaStreamingContext.&lt;init&gt;(JavaStreamingContext.scala:134)
    at 
com.infinite.spark.SparkTweetStreamingHDFSLoad.init(SparkTweetStreamingHDFSLoad.java:212)
    at 
com.infinite.spark.SparkTweetStreamingHDFSLoad.main(SparkTweetStreamingHDFSLoad.java:162)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
INFO  apache.spark.storage.DiskBlockManager - Shutdown hook called
INFO  apache.spark.util.ShutdownHookManager - Shutdown hook called
INFO  apache.spark.util.ShutdownHookManager - Deleting directory 
/tmp/spark-945fa8f4-477c-4a65-a572-b247e9249061/userFiles-857fece4-83c4-441a-8d3e-2a6ae8e3193a
INFO  apache.spark.util.ShutdownHookManager - Deleting directory 
/tmp/spark-945fa8f4-477c-4a65-a572-b247e9249061
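For context: the NullPointerException in NettyBlockTransferService.close above is secondary; it is thrown while stopping a SparkContext that never finished starting. The underlying failure is the SparkException further down ("Yarn application has already ended"), which, when several jobs run at once, often means YARN had no free resources to launch another application master. A sketch of submitting with a smaller, explicit footprint (the class name and jar path are taken from this thread; the memory and executor values are illustrative assumptions, not tuned settings):

```shell
# Illustrative only: cap each job's footprint so several yarn-client jobs
# can obtain containers concurrently. Values below are assumptions.
/usr/local/spark/bin/spark-submit \
  --master yarn \
  --deploy-mode client \
  --driver-memory 1g \
  --executor-memory 1g \
  --num-executors 2 \
  --class com.infinite.spark.SparkTweetStreamingHDFSLoad \
  /home/hadoop/SparkSampleProgram.jar
```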
 


Re: run multiple spark jobs yarn-client mode

2016-05-25 Thread spark.raj
Hi Friends,
In the YARN log files of the nodemanager I can see the error below. Can anyone 
tell me why I am getting this error?

ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: RECEIVED SIGNAL 15: 
SIGTERM
Thanks
Rajesh
 


On Wednesday, May 25, 2016 1:08 PM, Mich Talebzadeh wrote:

Yes, check the YARN log files for both the resourcemanager and the nodemanager. 
Also ensure that you have set up the work directories consistently, especially 
yarn.nodemanager.local-dirs.
HTH
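A sketch of where those logs live (assuming default log directories under $HADOOP_HOME/logs and that log aggregation is enabled for the `yarn logs` command; the application ID is taken from the ApplicationMaster log earlier in this digest):

```shell
# Aggregated application logs (containers from every node):
yarn logs -applicationId application_1463479181441_0003

# Daemon logs on the resourcemanager / nodemanager hosts:
tail -n 100 $HADOOP_HOME/logs/yarn-*-resourcemanager-*.log
tail -n 100 $HADOOP_HOME/logs/yarn-*-nodemanager-*.log
```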
Dr Mich Talebzadeh
LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
http://talebzadehmich.wordpress.com
On 25 May 2016 at 08:29, Jeff Zhang  wrote:

Could you check the yarn app logs ?


Re: run multiple spark jobs yarn-client mode

2016-05-25 Thread spark.raj
Thank you for your help, Mich.

Thanks
Rajesh
 


On Wednesday, May 25, 2016 3:14 PM, Mich Talebzadeh wrote:

You may have some memory issues (OOM etc.) that terminated the process.