On using yarn-cluster mode, it works fine.

On Mon, Jun 29, 2015 at 12:07 PM, ram kumar <ramkumarro...@gmail.com> wrote:

> I have this set in spark-env.sh:
> SPARK_CLASSPATH=$CLASSPATH:/usr/hdp/2.2.0.0-2041/hadoop-mapreduce/*
>
> I think I am facing the same issue:
> https://issues.apache.org/jira/browse/SPARK-6203
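>
> If so, I guess the non-deprecated equivalent of that spark-env.sh line
> would be something like this in spark-defaults.conf (just a sketch, using
> the same path as above):
>
> spark.driver.extraClassPath    /usr/hdp/2.2.0.0-2041/hadoop-mapreduce/*
> spark.executor.extraClassPath  /usr/hdp/2.2.0.0-2041/hadoop-mapreduce/*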
>
>
>
> On Mon, Jun 29, 2015 at 11:38 AM, ram kumar <ramkumarro...@gmail.com>
> wrote:
>
>> I am using Spark 1.2.0.2.2.0.0-82 (git revision de12451) built for Hadoop
>> 2.6.0.2.2.0.0-2041
>>
>> 1) SPARK_CLASSPATH not set
>> 2) spark.executor.extraClassPath not set
>>
>> Should I upgrade to version 1.3 and check?
>>
>> On Sat, Jun 27, 2015 at 1:07 PM, Tathagata Das <t...@databricks.com>
>> wrote:
>>
>>> Do you have SPARK_CLASSPATH set in both cases, before and after the
>>> checkpoint? If yes, then you should not be using SPARK_CLASSPATH; it has
>>> been deprecated since Spark 1.0 because of its ambiguity.
>>> Also, where do you have spark.executor.extraClassPath set? I don't see
>>> it in the spark-submit command.
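>>>
>>> For example, something along these lines (the jar path below is just a
>>> placeholder for whatever you currently have in SPARK_CLASSPATH):
>>>
>>> spark-submit --master yarn-client \
>>>     --driver-class-path "/path/to/extra/jars/*" \
>>>     --conf spark.executor.extraClassPath="/path/to/extra/jars/*" \
>>>     --class com.spark.Pick SNAPSHOT.jar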
>>>
>>> On Fri, Jun 26, 2015 at 6:05 AM, ram kumar <ramkumarro...@gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> ---------------------------------------------
>>>>
>>>> JavaStreamingContextFactory factory = new JavaStreamingContextFactory()
>>>> {
>>>>     public JavaStreamingContext create() {
>>>>         // build a fresh context only when no checkpoint exists yet
>>>>         return createContext(checkPointDir, outputDirectory);
>>>>     }
>>>> };
>>>> JavaStreamingContext ssc =
>>>>         JavaStreamingContext.getOrCreate(checkPointDir, factory);
>>>>
>>>> ----------------------------------------------------
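>>>>
>>>> where createContext creates the context and sets the checkpoint
>>>> directory, roughly like this (the app name is a placeholder and the
>>>> DStream logic is omitted):
>>>>
>>>> private static JavaStreamingContext createContext(String checkPointDir,
>>>>         String outputDirectory) {
>>>>     SparkConf conf = new SparkConf().setAppName("TotalPicsWithScore");
>>>>     JavaStreamingContext ssc =
>>>>             new JavaStreamingContext(conf, new Duration(10000));
>>>>     // the checkpoint directory must be set inside the factory so
>>>>     // getOrCreate can recover the context from it on restart
>>>>     ssc.checkpoint(checkPointDir);
>>>>     // ... build the DStream graph and write results to outputDirectory ...
>>>>     return ssc;
>>>> }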
>>>>
>>>> *The first time I run this, it works fine.*
>>>>
>>>> *But the second time, it shows the following error.*
>>>> *When I deleted the checkpoint path, it worked again.*
>>>>
>>>> ---------------------------------------------------
>>>> [user@h7 ~]$ spark-submit --jars /home/user/examples-spark-jar.jar
>>>> --conf spark.driver.allowMultipleContexts=true --class com.spark.Pick
>>>> --master yarn-client --num-executors 10 --executor-cores 1 SNAPSHOT.jar
>>>> Spark assembly has been built with Hive, including Datanucleus jars on
>>>> classpath
>>>> 2015-06-26 12:43:42,981 WARN  [main] util.NativeCodeLoader
>>>> (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library
>>>> for your platform... using builtin-java classes where applicable
>>>> 2015-06-26 12:43:44,246 WARN  [main] shortcircuit.DomainSocketFactory
>>>> (DomainSocketFactory.java:<init>(116)) - The short-circuit local reads
>>>> feature cannot be used because libhadoop cannot be loaded.
>>>>
>>>> This is deprecated in Spark 1.0+.
>>>>
>>>> Please instead use:
>>>>  - ./spark-submit with --driver-class-path to augment the driver
>>>> classpath
>>>>  - spark.executor.extraClassPath to augment the executor classpath
>>>>
>>>> Exception in thread "main" org.apache.spark.SparkException: Found both
>>>> spark.executor.extraClassPath and SPARK_CLASSPATH. Use only the former.
>>>>     at
>>>> org.apache.spark.SparkConf$$anonfun$validateSettings$6$$anonfun$apply$7.apply(SparkConf.scala:334)
>>>>     at
>>>> org.apache.spark.SparkConf$$anonfun$validateSettings$6$$anonfun$apply$7.apply(SparkConf.scala:332)
>>>>     at scala.collection.immutable.List.foreach(List.scala:318)
>>>>     at
>>>> org.apache.spark.SparkConf$$anonfun$validateSettings$6.apply(SparkConf.scala:332)
>>>>     at
>>>> org.apache.spark.SparkConf$$anonfun$validateSettings$6.apply(SparkConf.scala:320)
>>>>     at scala.Option.foreach(Option.scala:236)
>>>>     at org.apache.spark.SparkConf.validateSettings(SparkConf.scala:320)
>>>>     at org.apache.spark.SparkContext.<init>(SparkContext.scala:178)
>>>>     at
>>>> org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:118)
>>>>     at
>>>> org.apache.spark.streaming.StreamingContext$$anonfun$getOrCreate$1.apply(StreamingContext.scala:561)
>>>>     at
>>>> org.apache.spark.streaming.StreamingContext$$anonfun$getOrCreate$1.apply(StreamingContext.scala:561)
>>>>     at scala.Option.map(Option.scala:145)
>>>>     at
>>>> org.apache.spark.streaming.StreamingContext$.getOrCreate(StreamingContext.scala:561)
>>>>     at
>>>> org.apache.spark.streaming.api.java.JavaStreamingContext$.getOrCreate(JavaStreamingContext.scala:566)
>>>>     at
>>>> org.apache.spark.streaming.api.java.JavaStreamingContext.getOrCreate(JavaStreamingContext.scala)
>>>>     at
>>>> com.orzota.kafka.kafka.TotalPicsWithScore.main(TotalPicsWithScore.java:159)
>>>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>     at
>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>     at
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>>>     at
>>>> org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:360)
>>>>     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:76)
>>>>     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>>>> [user@h7 ~]
>>>>
>>>> ----------------------------------------------
>>>>
>>>> *Can anyone help me with this?*
>>>>
>>>>
>>>> *Thanks*
>>>>
>>>
>>>
>>
>
