[Spark-1.4.0] NoSuchMethodError: com.fasterxml.jackson.module.scala.deser.BigDecimalDeserializer

2015-06-12 Thread Tao Li
Hi all: I compiled the new Spark 1.4.0 release today. But when I run the WordCount demo, it throws "java.lang.NoSuchMethodError: com.fasterxml.jackson.module.scala.deser.BigDecimalDeserializer". I found the default "fasterxml.jackson.version" is 2.4.4. Is there anything wrong with th
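This error is usually a classpath conflict: a different jackson-module-scala version than the 2.4.4 that Spark 1.4.0 builds against gets pulled in transitively. As a hedged illustration only, an sbt build could pin the Jackson artifacts to the version Spark expects; the module list and version numbers below are assumptions and should be checked against your own dependency tree (e.g. mvn dependency:tree):

// build.sbt -- sketch: force the Jackson artifacts to the version Spark 1.4.0 was built with.
// Versions and modules are illustrative, not a confirmed fix.
dependencyOverrides ++= Set(
  "com.fasterxml.jackson.core"   %  "jackson-databind"     % "2.4.4",
  "com.fasterxml.jackson.core"   %  "jackson-core"         % "2.4.4",
  "com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.4.4"
)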

Re: [Spark-1.4.0] NoSuchMethodError: com.fasterxml.jackson.module.scala.deser.BigDecimalDeserializer

2015-06-12 Thread Tao Li
Has anyone met the same problem? 2015-06-12 23:40 GMT+08:00 Tao Li : > Hi all: > > I compiled the new Spark 1.4.0 release today. But when I run the WordCount demo, > it throws "java.lang.NoSuchMethodError: > com.fasterxml.jackson.module.scala.deser.BigDecimalD

Can we allow executor to exit when tasks fail too many time?

2015-07-05 Thread Tao Li
I have a long-lived Spark application running on YARN. On some nodes it tries to write to the shuffle path in the shuffle map task, but the root path /search/hadoop10/yarn_local/usercache/spark/ was deleted, so the task fails. So every time a shuffle map task runs on this node, it was alwa
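As far as I know, Spark 1.4 has no setting that makes an individual executor exit after N failed tasks; the closest related knob is the per-task retry limit, sketched below. The application name and the value chosen are illustrative:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: spark.task.maxFailures controls how many times the same task may fail
// before the whole job is aborted (default 4 in Spark 1.x). It does not take the bad
// node or executor offline by itself.
val conf = new SparkConf()
  .setAppName("long-lived-yarn-app")
  .set("spark.task.maxFailures", "8")
val sc = new SparkContext(conf)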

Re: Can we allow executor to exit when tasks fail too many time?

2015-07-05 Thread Tao Li
Nodes cloud10141049104.wd.nm.nop.sogou-op.org and cloud101417770.wd.nm.ss.nop.sogou-op.org have failed too many times; can they be taken offline automatically when they fail too many times? 2015-07-06 12:25 GMT+08:00 Tao Li : > I have a long-lived Spark application running on YARN. > > On s

Re: Can we allow executor to exit when tasks fail too many time?

2015-07-07 Thread Tao Li
Any response? 2015-07-06 12:28 GMT+08:00 Tao Li : > Nodes cloud10141049104.wd.nm.nop.sogou-op.org and > cloud101417770.wd.nm.ss.nop.sogou-op.org have failed too many times; can they be > taken offline automatically when they fail too many times? > > 2015-07-06 12:25 GMT+08:00

Need to maintain the consumer offset by myself when using spark streaming kafka direct approach?

2015-12-08 Thread Tao Li
I am using the Spark Streaming Kafka direct approach these days. I found that when I start the application, it always starts consuming from the latest offset. I would like the application, on startup, to consume from the offset where the previous run with the same Kafka consumer group left off. It means I have to maintai
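With the direct approach the consumed offsets are not tracked for the consumer group the way the receiver-based approach does, so the application has to persist them itself and pass them back in on restart. Below is a rough sketch of that pattern against the Spark 1.x Kafka 0.8 direct API; loadOffsets/saveOffsets are hypothetical placeholders for whatever store you choose (ZooKeeper, a database, etc.), and the topic, broker, and group names are made up:

import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.{HasOffsetRanges, KafkaUtils}

// Hypothetical helpers: replace with reads/writes against your own offset store.
def loadOffsets(): Map[TopicAndPartition, Long] =
  Map(TopicAndPartition("mytopic", 0) -> 0L)
def saveOffsets(offsets: Map[TopicAndPartition, Long]): Unit = ()

val ssc = new StreamingContext(new SparkConf().setAppName("kafka-direct-offsets"), Seconds(10))
val kafkaParams = Map("metadata.broker.list" -> "broker1:9092", "group.id" -> "my-group")

// Start the stream from the offsets persisted by the previous run.
val messageHandler = (mmd: MessageAndMetadata[String, String]) => (mmd.key, mmd.message)
val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder, (String, String)](
  ssc, kafkaParams, loadOffsets(), messageHandler)

stream.foreachRDD { rdd =>
  val ranges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  // ... process rdd here ...
  // Persist the end offsets after processing so the next run resumes from this point.
  saveOffsets(ranges.map(r => TopicAndPartition(r.topic, r.partition) -> r.untilOffset).toMap)
}

ssc.start()
ssc.awaitTermination()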

[DAGSchedule][OutputCommitCoordinator] OutputCommitCoordinator.authorizedCommittersByStage Map Out Of Memory

2015-04-07 Thread Tao Li
Hi all: I am using Spark Streaming (1.3.1) as a long-running service and it runs out of memory after running for 7 days. I found that the field authorizedCommittersByStage in the OutputCommitCoordinator class causes the OOM. authorizedCommittersByStage is a map whose key is StageId and whose value is Map[Partition
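For reference, the structure described above is roughly the following (a simplified sketch; the type aliases are illustrative, not copied from the Spark source):

import scala.collection.mutable

// Illustrative aliases for the sketch.
type StageId = Int
type PartitionId = Int
type TaskAttemptId = Long

// One inner map per stage. If entries are never cleaned up when a stage completes,
// a long-running streaming job keeps accumulating them until the driver OOMs.
val authorizedCommittersByStage =
  mutable.Map.empty[StageId, mutable.Map[PartitionId, TaskAttemptId]]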

[Spark Core] function SparkContext.cancelJobGroup(groupId) doesn't work

2015-01-07 Thread Tao Li
Hi all: In my application, I start a SparkContext sc and execute some tasks on it. (Each task is a thread that executes some transformations and actions on RDDs.) For each task, I use "sc.setJobGroup(JOB_GROUPID, JOB_DESCRIPTION)" to set the job group. But when I call "sc.cancelJobGroup
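Two details often matter here, sketched below with made-up names: setJobGroup stores the group id in thread-local properties, so it has to be called in the same thread that later triggers the action, and cancellation only interrupts tasks that are already running if interruptOnCancel is set to true.

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("job-group-demo").setMaster("local[2]"))

val worker = new Thread {
  override def run(): Unit = {
    // Must be set in the thread that runs the action; interruptOnCancel = true
    // lets cancelJobGroup interrupt tasks that are already executing.
    sc.setJobGroup("group-1", "slow count", interruptOnCancel = true)
    try {
      sc.parallelize(1 to 1000000, 100).map { x => Thread.sleep(1); x }.count()
    } catch {
      case e: Exception => println(s"job in group-1 was cancelled: ${e.getMessage}")
    }
  }
}
worker.start()

Thread.sleep(5000)
sc.cancelJobGroup("group-1")   // cancels every job tagged with "group-1"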