[ https://issues.apache.org/jira/browse/KAFKA-682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546276#comment-13546276 ]

Jun Rao commented on KAFKA-682:
-------------------------------

BoundedByteBufferReceive is used for receiving client requests. Most of the 
space is likely taken by ProducerRequest. If you are sending many large 
ProducerRequests, the result in the heap dump makes sense. Do you still see 
OOME with the new JVM setting? Your heap size seems small; I would try 3-4 GB.
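For concreteness, a minimal sketch of raising the broker heap as suggested. It assumes the standard Kafka startup scripts, where bin/kafka-server-start.sh reads KAFKA_HEAP_OPTS (falling back to a small default when unset); check the variable name against your Kafka version, and treat the sizes as illustrative.

```shell
# Hedged sketch: heap sizes are illustrative, per the 3-4 GB suggestion above.
# KAFKA_HEAP_OPTS is honored by Kafka's startup scripts; verify against your version.
export KAFKA_HEAP_OPTS="-Xmx4g -Xms3g"
echo "broker will start with: $KAFKA_HEAP_OPTS"
# bin/kafka-server-start.sh config/server.properties
```

The start command is left commented so the snippet only demonstrates the setting; uncomment it to launch the broker with the larger heap.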
                
> java.lang.OutOfMemoryError: Java heap space
> -------------------------------------------
>
>                 Key: KAFKA-682
>                 URL: https://issues.apache.org/jira/browse/KAFKA-682
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 0.8
>         Environment: $ uname -a
> Linux rngadam-think 3.5.0-17-generic #28-Ubuntu SMP Tue Oct 9 19:32:08 UTC 2012 i686 i686 i686 GNU/Linux
> $ java -version
> java version "1.7.0_09"
> OpenJDK Runtime Environment (IcedTea7 2.3.3) (7u9-2.3.3-0ubuntu1~12.04.1)
> OpenJDK Server VM (build 23.2-b09, mixed mode)
>            Reporter: Ricky Ng-Adam
>         Attachments: java_pid22281.hprof.gz, java_pid22281_Leak_Suspects.zip
>
>
> git pull (commit 32dae955d5e2e2dd45bddb628cb07c874241d856)
> ...build...
> ./sbt update
> ./sbt package
> ...run...
> bin/zookeeper-server-start.sh config/zookeeper.properties
> bin/kafka-server-start.sh config/server.properties
> ...then configured fluentd with kafka plugin...
> gem install fluentd --no-ri --no-rdoc
> gem install fluent-plugin-kafka
> fluentd -c ./fluent/fluent.conf -vv
> ...then flood fluentd with messages input from syslog and output to Kafka.
> results in (after about 10000 messages of 1K each in 3s):
> [2013-01-05 02:00:52,087] ERROR Closing socket for /127.0.0.1 because of error (kafka.network.Processor)
> java.lang.OutOfMemoryError: Java heap space
>     at kafka.api.ProducerRequest$$anonfun$1$$anonfun$apply$1.apply(ProducerRequest.scala:45)
>     at kafka.api.ProducerRequest$$anonfun$1$$anonfun$apply$1.apply(ProducerRequest.scala:42)
>     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>     at scala.collection.immutable.Range$ByOne$class.foreach(Range.scala:282)
>     at scala.collection.immutable.Range$$anon$1.foreach(Range.scala:274)
>     at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
>     at scala.collection.immutable.Range.map(Range.scala:39)
>     at kafka.api.ProducerRequest$$anonfun$1.apply(ProducerRequest.scala:42)
>     at kafka.api.ProducerRequest$$anonfun$1.apply(ProducerRequest.scala:38)
>     at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:227)
>     at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:227)
>     at scala.collection.immutable.Range$ByOne$class.foreach(Range.scala:282)
>     at scala.collection.immutable.Range$$anon$1.foreach(Range.scala:274)
>     at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:227)
>     at scala.collection.immutable.Range.flatMap(Range.scala:39)
>     at kafka.api.ProducerRequest$.readFrom(ProducerRequest.scala:38)
>     at kafka.api.RequestKeys$$anonfun$1.apply(RequestKeys.scala:32)
>     at kafka.api.RequestKeys$$anonfun$1.apply(RequestKeys.scala:32)
>     at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:47)
>     at kafka.network.Processor.read(SocketServer.scala:298)
>     at kafka.network.Processor.run(SocketServer.scala:209)
>     at java.lang.Thread.run(Thread.java:722)
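Besides heap size, the allocation done by BoundedByteBufferReceive is bounded by the broker's socket.request.max.bytes setting, so capping single-request size is another lever if large ProducerRequests are the culprit. A hedged config/server.properties sketch (the value shown is the usual default, not a recommendation):

```properties
# socket.request.max.bytes caps how large a single request the broker will
# buffer; requests above this are rejected rather than allocated on the heap.
# 104857600 = 100 MB, illustrative only -- tune to your producer batch sizes.
socket.request.max.bytes=104857600
```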

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira