Hi,

I'm using Spark 1.4.1 and I have a simple application that creates a DStream reading data from Kafka and applies a filter transformation to it. The job is roughly equivalent to the sketch below (the broker list, topic name, batch interval, and final output action are placeholders; the real code may differ in those details):
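
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object KafkaFilterJob {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("kafka-filter-job")
    // batch interval is a placeholder
    val ssc = new StreamingContext(conf, Seconds(10))

    // broker list and topic name are placeholders
    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")
    val topics = Set("events")

    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics)

    // extract the message value, keep only non-empty records, print a sample per batch
    stream.map(_._2).filter(_.nonEmpty).print()

    ssc.start()
    ssc.awaitTermination()
  }
}

After more or less a day the application throws the following exception: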

Exception in thread "dag-scheduler-event-loop" java.lang.OutOfMemoryError: Java heap space
	at org.apache.spark.util.io.ByteArrayChunkOutputStream.allocateNewChunkIfNeeded(ByteArrayChunkOutputStream.scala:66)
	at org.apache.spark.util.io.ByteArrayChunkOutputStream.write(ByteArrayChunkOutputStream.scala:55)
	at org.xerial.snappy.SnappyOutputStream.dumpOutput(SnappyOutputStream.java:294)
	at org.xerial.snappy.SnappyOutputStream.flush(SnappyOutputStream.java:273)
	at org.xerial.snappy.SnappyOutputStream.close(SnappyOutputStream.java:324)
	at org.apache.spark.io.SnappyOutputStreamWrapper.close(CompressionCodec.scala:203)
	at com.esotericsoftware.kryo.io.Output.close(Output.java:168)
	at org.apache.spark.serializer.KryoSerializationStream.close(KryoSerializer.scala:162)
	at org.apache.spark.broadcast.TorrentBroadcast$.blockifyObject(TorrentBroadcast.scala:203)
	at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:102)
	at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:85)
	at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
	at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
	at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1291)
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:874)
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:815)
	at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:799)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1426)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

[Stage 53513:>                                                      (0 + 0) / 4]

Exception in thread "JobGenerator" java.lang.OutOfMemoryError: GC overhead limit exceeded
	at sun.net.www.protocol.jar.Handler.openConnection(Handler.java:41)
	at java.net.URL.openConnection(URL.java:972)
	at java.net.URLClassLoader.getResourceAsStream(URLClassLoader.java:237)
	at java.lang.Class.getResourceAsStream(Class.java:2223)
	at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:38)
	at org.apache.spark.util.ClosureCleaner$.getInnerClosureClasses(ClosureCleaner.scala:98)
	at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:197)
	at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:132)
	at org.apache.spark.SparkContext.clean(SparkContext.scala:1893)
	at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:294)
	at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:293)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
	at org.apache.spark.rdd.RDD.map(RDD.scala:293)
	at org.apache.spark.streaming.dstream.MappedDStream$$anonfun$compute$1.apply(MappedDStream.scala:35)
	at org.apache.spark.streaming.dstream.MappedDStream$$anonfun$compute$1.apply(MappedDStream.scala:35)
	at scala.Option.map(Option.scala:145)
	at org.apache.spark.streaming.dstream.MappedDStream.compute(MappedDStream.scala:35)
	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350)
	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:349)
	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:349)
	at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:399)
	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:344)
	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:342)
	at scala.Option.orElse(Option.scala:257)
	at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:339)
	at org.apache.spark.streaming.dstream.FilteredDStream.compute(FilteredDStream.scala:35)
	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350)
	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350)

I dumped the heap of the driver process, and it seems that 486.2 MB of the 512 MB of available memory is used by a single instance of the class org.apache.spark.deploy.yarn.history.YarnHistoryService. I'm trying to figure out how to solve the issue, but so far I haven't found a solution.

Could someone help me to sort out the issue?



Thanks



