failed to compile: java.lang.NullPointerException
Hi,
We have a Spark job that reads Avro data from an S3 location, does some
processing, and writes the result back to S3. Of late it has been failing with
the exception below:
Application application_1529346471665_0020 failed 1 times due to AM Cont
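A minimal Scala sketch of the read-process-write shape described above; the
paths, the column used in the filter, and the "avro" format name are all
assumptions (Spark 2.4+ bundles the Avro source, earlier versions need the
spark-avro package):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("avro-s3-job").getOrCreate()

    // Read Avro from S3, apply a stand-in transformation, write it back.
    val df = spark.read.format("avro").load("s3a://my-bucket/input/")
    val processed = df.where(df("id").isNotNull) // placeholder for "some processing"
    processed.write.format("avro").save("s3a://my-bucket/output/")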
Hi,
I get
java.lang.NullPointerException at
org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:128)
when I try to createDataFrame using the SparkSession; see below:
SparkConf conf = new SparkConf().setMaster("local[*]").setAppName("test"); // master URL assumed; the argument is missing in the original
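For what it's worth, the usual way to get a working sessionState is to build
the session with the builder; a Scala sketch (the master URL is a placeholder,
not from the post):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .master("local[*]") // placeholder master
      .appName("test")
      .getOrCreate()

    // createDataFrame from a local Seq, just to exercise the session.
    val df = spark.createDataFrame(Seq((1, "a"), (2, "b"))).toDF("id", "value")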
Hi,
I am trying to submit a job to Spark to count the number of words in a specific
Kafka topic, but I get the exception below when I check the log:
. failed with unrecoverable exception: java.lang.NullPointerException
The command that I run is as follows:
./scripts/dm-spark-submit.sh --class
ubmit.main(SparkSubmit.scala)
Caused by: java.lang.NullPointerException
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithKeys$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNe
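For comparison, a rough Structured Streaming word count over a Kafka topic;
the broker address and topic name are placeholders, not taken from the post:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("KafkaWordCount").getOrCreate()
    import spark.implicits._

    // Read the topic as a stream and split message values into words.
    val lines = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092") // placeholder
      .option("subscribe", "my-topic")                     // placeholder
      .load()
      .selectExpr("CAST(value AS STRING)")
      .as[String]

    val counts = lines.flatMap(_.split("\\s+")).groupBy("value").count()

    counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()
      .awaitTermination()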
> (0 + 2) / 2]
> [Stage 9:=> (1 + 1) / 2]
> WARN 2016-05-02 17:23:55,240 org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 9.0 (TID 11, 10.0.6.200): ja
Maybe you were trying to embed pictures for the error and your code - but
they didn't go through.
On Mon, May 2, 2016 at 10:32 AM, meson10 wrote:
> Hi,
>
> I am trying to save an RDD to Cassandra but I am running into the following
> error:
>
> The Python code looks like this:
>
> I am usin
Hi,
I am trying to save an RDD to Cassandra but I am running into the following
error:
The Python code looks like this:
I am using DSE 4.8.6, which runs Spark 1.4.2.
I ran through a bunch of existing posts on this mailing list and have
already performed the following routines:
* Ensure that
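The Python snippet above did not come through; purely for illustration, the
equivalent call with the DataStax spark-cassandra-connector looks roughly like
this in Scala (keyspace, table, and column names are made up):

    import com.datastax.spark.connector._

    // Hypothetical keyspace/table/columns; sc is an existing SparkContext.
    val rows = sc.parallelize(Seq((1, "alice"), (2, "bob")))
    rows.saveToCassandra("my_keyspace", "users", SomeColumns("id", "name"))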
> On 11 Dec 2015, at 05:14, michael_han wrote:
>
> Hi Sarala,
> I found the reason: when Spark runs it still needs Hadoop
> support. I think it's a bug in Spark and still not fixed now ;)
>
It's related to how the Hadoop filesystem APIs are used to access pretty much
every filesys
Hi Sarala,
I found the reason: when Spark runs it still needs Hadoop
support. I think it's a bug in Spark and still not fixed now ;)
After I downloaded winutils.exe and followed the steps in the workaround
below, it works fine:
http://qnalist.com/questions/4994960/run-spark-unit-test-on-
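For anyone hitting the same thing: the usual Windows workaround is to point
hadoop.home.dir at a folder containing bin\winutils.exe before the
SparkContext starts (the path below is an example, not from the post):

    // Example path only; the folder must contain bin\winutils.exe.
    System.setProperty("hadoop.home.dir", "C:\\hadoop")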
1.5.2-hadoop2.6.0.jar
with the error as before: Spark java.lang.NullPointerException
spark-submit --master local --name "SparkTest App" --class
com.qad.SparkTest1 --jars
c:/spark-1.5.2-bin-hadoop2.6/lib/spark-assembly-1.5.2-hadoop2.6.0.jar
target/Spark-Test-1.0.jar
> I am trying to customize the Twitter Example TD did by only printing
> messages that have a GeoLocation.
>
> I am getting a NullPointerException:
>
> java.lang.NullPointerException
> at Twitter$$anonfun$1.apply(Twitter.scala:64)
> at Twitter$$anonfun$1.apply(Twitter.scala:64)
Hello!
I am trying to customize the Twitter Example TD did by only printing
messages that have a GeoLocation.
I am getting a NullPointerException:
java.lang.NullPointerException
at Twitter$$anonfun$1.apply(Twitter.scala:64)
at Twitter$$anonfun$1.apply(Twitter.scala:64)
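Line 64 itself isn't shown, but twitter4j's Status.getGeoLocation returns
null for most tweets, so the common fix is to filter the nulls out before
dereferencing; a sketch assuming stream is the DStream[Status] from
TwitterUtils.createStream:

    // Keep only geo-tagged tweets; after the filter, getGeoLocation is safe.
    val geoTagged = stream.filter(status => status.getGeoLocation != null)
    geoTagged.map { s =>
      val loc = s.getGeoLocation
      (loc.getLatitude, loc.getLongitude, s.getText)
    }.print()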
Thanks
> Best Regards
>
> On Thu, Oct 23, 2014 at 5:59 AM, arthur.hk.c...@gmail.com
> wrote:
> Hi,
>
> I got java.lang.NullPointerException. Please help!
>
>
> sqlContext.sql("select l_orderkey, l_linenumber, l_partkey, l_quantity,
> l_shipdate, L_RETURNFLAG
Usually when the SparkContext throws an NPE it means that it has been shut
down due to some earlier failure.
On Wed, Oct 22, 2014 at 5:29 PM, arthur.hk.c...@gmail.com <
arthur.hk.c...@gmail.com> wrote:
> Hi,
>
> I got java.lang.NullPointerException. Please help!
>
Not sure if this would help, but make sure the column
l_linestatus is actually present in the data.
Thanks
Best Regards
On Thu, Oct 23, 2014 at 5:59 AM, arthur.hk.c...@gmail.com <
arthur.hk.c...@gmail.com> wrote:
> Hi,
>
> I got java.lang.NullPointerException. Please help!
>
Hi,
I got java.lang.NullPointerException. Please help!
sqlContext.sql("select l_orderkey, l_linenumber, l_partkey, l_quantity,
l_shipdate, L_RETURNFLAG, L_LINESTATUS from lineitem limit
10").collect().foreach(println);
2014-10-23 08:20:12,024 INFO [sparkDriver-akka.actor.default-
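One quick way to check the suggestion above (a sketch, assuming the table is
registered as lineitem):

    // Print the schema to confirm l_linestatus is actually a column.
    sqlContext.table("lineitem").printSchema()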
I'm guessing the other result was wrong, or just never evaluated here. Because
RDD transforms are lazy, the nested expression could be written down, but it
cannot actually run: nested RDDs are not supported.
On Mon, Mar 17, 2014 at 4:01 PM, anny9699 wrote:
> Hi Andrew,
>
> Thanks for the reply. However I did almost the same thing in another
Hi Andrew,
Thanks for the reply. However I did almost the same thing in another
closure:
val simi = dataByRow.map(point => {
  val corrs = dataByRow.map(x => arrCorr(point._2, x._2))
  (point._1, corrs)
})
here dataByRow is of format RDD[(Int,Array[Double])] and arrCorr is a
function that I wrote to compu
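A common rewrite that avoids the nested RDD, sketched under the assumption
that dataByRow is small enough to collect to the driver: broadcast one copy of
the rows and compute the correlations inside a single map.

    // Ship one copy of the data to executors as a broadcast variable, so the
    // inner loop runs over a plain array instead of a nested RDD.
    val rows = dataByRow.collect()
    val rowsB = sc.broadcast(rows)

    val simi = dataByRow.map { case (i, arr) =>
      (i, rowsB.value.map { case (_, other) => arrCorr(arr, other) })
    }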
in the format RDD[(Int,Double)] and the error message is:
>
> org.apache.spark.SparkException: Job aborted: Task 14.0:8 failed more than
> 0 times; aborting job java.lang.NullPointerException
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:827)
the error message is:
org.apache.spark.SparkException: Job aborted: Task 14.0:8 failed more than 0
times; aborting job java.lang.NullPointerException
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:827)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1