Hi all:
I compiled the new Spark 1.4.0 release today, but when I run the WordCount demo it
throws a NoSuchMethodError: "java.lang.NoSuchMethodError:
com.fasterxml.jackson.module.scala.deser.BigDecimalDeserializer".
I found that the default "fasterxml.jackson.version" is 2.4.4. Is there
anything wrong with this version?
Has anyone met the same problem?
2015-06-12 23:40 GMT+08:00 Tao Li :
> Hi all:
>
> I compiled the new Spark 1.4.0 release today, but when I run the WordCount demo,
> it throws a NoSuchMethodError: "java.lang.NoSuchMethodError:
> com.fasterxml.jackson.module.scala.deser.BigDecimalD
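A NoSuchMethodError on a jackson-module-scala class is usually a classpath conflict: some other dependency pulls in a jackson-core/jackson-databind version that does not match jackson-module-scala 2.4.4. A minimal sketch, assuming an sbt-built application (the version numbers simply mirror the 2.4.4 property mentioned above), is to force every Jackson artifact to one version:

    // build.sbt: pin all Jackson modules to the version Spark 1.4.0 declares
    dependencyOverrides ++= Set(
      "com.fasterxml.jackson.core"   %  "jackson-core"         % "2.4.4",
      "com.fasterxml.jackson.core"   %  "jackson-databind"     % "2.4.4",
      "com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.4.4"
    )

For a Maven build, mvn dependency:tree shows which dependency drags in the conflicting Jackson version.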
I have a long-lived Spark application running on YARN.
On some nodes, it tries to write to the shuffle path in the shuffle map task,
but the root path /search/hadoop10/yarn_local/usercache/spark/ was deleted,
so the task fails. So every time a shuffle map task runs on such a node, it
always fails.
Nodes cloud10141049104.wd.nm.nop.sogou-op.org and
cloud101417770.wd.nm.ss.nop.sogou-op.org have failed too many times. I want to
know whether a node can be taken offline automatically when it fails too many times.
2015-07-06 12:25 GMT+08:00 Tao Li :
> I have a long-lived Spark application running on YARN.
>
> In s
Any Response?
2015-07-06 12:28 GMT+08:00 Tao Li :
>
>
> Nodes cloud10141049104.wd.nm.nop.sogou-op.org and
> cloud101417770.wd.nm.ss.nop.sogou-op.org have failed too many times. I want to
> know whether a node can be taken offline automatically when it fails too many times.
>
> 2015-07-06 12:25 GMT+08:00
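As far as I know, Spark 1.x has no switch that takes a whole node offline after repeated failures; the closest knob is the scheduler's per-task executor blacklist. A minimal sketch, assuming Spark 1.x, where spark.scheduler.executorTaskBlacklistTime (milliseconds, default 0 = disabled) stops a failed task from being retried on the same executor for the given window:

    import org.apache.spark.SparkConf

    // Avoid rescheduling a failed task on the same executor for one hour.
    // Note: this is a per-task, per-executor blacklist, not node-level offlining.
    val conf = new SparkConf()
      .setAppName("long-lived-yarn-app")
      .set("spark.scheduler.executorTaskBlacklistTime", "3600000")

Later Spark releases added a fuller blacklist (the spark.blacklist.* settings) that can exclude executors and nodes application-wide after repeated task failures.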
I am using the Spark Streaming Kafka direct approach these days. I found that
when I start the application, it always starts consuming from the latest offset.
I would like the application, on start, to consume from the offsets where the
last run with the same Kafka consumer group left off. That means I have to
maintain the offsets myself.
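A minimal sketch of that pattern, assuming the Spark 1.3+ / Kafka 0.8 direct API and some external offset store (loadOffsets/saveOffsets below are hypothetical placeholders for ZooKeeper, a database, etc., not Spark or Kafka calls):

    import kafka.common.TopicAndPartition
    import kafka.message.MessageAndMetadata
    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.{HasOffsetRanges, KafkaUtils}

    // Hypothetical offset store for the consumer group; fill in with ZK/DB/HDFS code.
    def loadOffsets(group: String): Map[TopicAndPartition, Long] = ???
    def saveOffsets(group: String, offsets: Map[TopicAndPartition, Long]): Unit = ???

    val conf = new SparkConf().setAppName("kafka-direct-resume")
    val ssc = new StreamingContext(conf, Seconds(10))
    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092,broker2:9092")

    // Start the stream from the offsets the previous run saved, not from "latest".
    val fromOffsets = loadOffsets("my-consumer-group")
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder, (String, String)](
      ssc, kafkaParams, fromOffsets,
      (mmd: MessageAndMetadata[String, String]) => (mmd.key, mmd.message))

    stream.foreachRDD { rdd =>
      // Grab the offset range of each partition in this batch...
      val ranges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      // ... process rdd here ...
      // ...then persist the end offsets so the next start resumes from them.
      saveOffsets("my-consumer-group",
        ranges.map(r => TopicAndPartition(r.topic, r.partition) -> r.untilOffset).toMap)
    }

    ssc.start()
    ssc.awaitTermination()

Checkpointing the StreamingContext is the other option, but checkpoints do not survive an application code upgrade, which is why long-lived jobs often keep offsets in their own store.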
Hi all:
I am using Spark Streaming (1.3.1) as a long-running service, and it runs out
of memory after running for 7 days.
I found that the field authorizedCommittersByStage in the
OutputCommitCoordinator class causes the OOM.
authorizedCommittersByStage is a map: the key is the StageId and the value is a
Map[Partition
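To illustrate the shape of that structure (an illustrative sketch, not the actual Spark source): the outer map is keyed by stage id and the inner map by partition, so a streaming job that creates new stages every batch keeps adding entries, and memory grows unless every stage's entry is removed when the stage ends.

    import scala.collection.mutable

    object CommitterMapSketch {
      // stageId -> (partitionId -> authorized task attempt)
      val authorizedCommittersByStage =
        mutable.Map.empty[Int, mutable.Map[Int, Long]]

      def stageStart(stageId: Int): Unit =
        authorizedCommittersByStage(stageId) = mutable.Map.empty[Int, Long]

      // If this cleanup is never reached for some stages, entries accumulate
      // for the lifetime of the driver and eventually cause the OOM described above.
      def stageEnd(stageId: Int): Unit =
        authorizedCommittersByStage.remove(stageId)
    }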
Hi all:
In my application, I start a SparkContext sc and execute some tasks on it.
(Each task is a thread that runs some transformations and actions on RDDs.)
For each task, I use "sc.setJobGroup(JOB_GROUPID, JOB_DESCRIPTION)" to
set its job group.
But when I call "sc.cancelJobGroup
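For reference, a minimal sketch of that pattern (the group ids and the work inside each thread are made up): setJobGroup stores the group in thread-local properties, so it must be called on the same thread that triggers the actions, and cancelJobGroup then cancels only the jobs submitted under that group.

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("job-group-demo"))

    // Run one "task" per thread; the job group attaches to the submitting thread.
    def runTask(groupId: String): Thread = {
      val t = new Thread(new Runnable {
        override def run(): Unit = {
          // interruptOnCancel = true lets cancellation interrupt running task threads.
          sc.setJobGroup(groupId, s"work for $groupId", interruptOnCancel = true)
          sc.parallelize(1 to 10000000).map(_ * 2).count()   // some transform + action
        }
      })
      t.start()
      t
    }

    val t1 = runTask("group-1")
    val t2 = runTask("group-2")

    // Cancels only the jobs that were submitted with setJobGroup("group-1", ...).
    sc.cancelJobGroup("group-1")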