Can you take a look at KAFKA-5470?
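
For context on the error itself: the "Direct buffer memory" OOM is thrown by
java.nio.Bits.reserveMemory (the top frame in the trace below) when off-heap
direct-buffer allocations exceed the limit set by -XX:MaxDirectMemorySize,
independently of the -Xmx heap size. Here is a minimal, self-contained sketch
(not Kafka's code) that reproduces the same error when run with a small limit
such as -XX:MaxDirectMemorySize=64m:

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    // Direct buffers live outside the Java heap, so -Xmx does not bound them;
    // they are capped by -XX:MaxDirectMemorySize instead.
    public class DirectBufferOom {
        public static void main(String[] args) {
            List<ByteBuffer> retained = new ArrayList<>();
            try {
                while (true) {
                    // Each call reserves native memory via java.nio.Bits.reserveMemory,
                    // the same path at the top of the broker's stack trace.
                    retained.add(ByteBuffer.allocateDirect(1024 * 1024)); // 1 MiB each
                }
            } catch (OutOfMemoryError e) {
                // Prints: java.lang.OutOfMemoryError: Direct buffer memory
                System.out.println(e + " after ~" + retained.size() + " MiB of direct buffers");
            }
        }
    }

On the broker side this is the budget that your MaxDirectMemory=1G setting
controls; the trace shows the allocations coming from
sun.nio.ch.Util.getTemporaryDirectBuffer during socket reads, which is what is
exhausting that 1G.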

On Tue, Oct 17, 2017 at 2:32 AM, 杨文 <iamyo...@gmail.com> wrote:

> Hi Kafka Users,
> We are using Kafka 0.9.0.1 and frequently see the exception below, which
> causes the broker to die. We even increased MaxDirectMemory to 1G but we
> still see this.
> [2017-02-16 00:55:57,750] ERROR Processor got uncaught exception. (kafka.network.Processor)
> java.lang.OutOfMemoryError: Direct buffer memory
>         at java.nio.Bits.reserveMemory(Unknown Source)
>         at java.nio.DirectByteBuffer.<init>(Unknown Source)
>         at java.nio.ByteBuffer.allocateDirect(Unknown Source)
>         at sun.nio.ch.Util.getTemporaryDirectBuffer(Unknown Source)
>         at sun.nio.ch.IOUtil.read(Unknown Source)
>         at sun.nio.ch.SocketChannelImpl.read(Unknown Source)
>         at org.apache.kafka.common.network.PlaintextTransportLayer.read(PlaintextTransportLayer.java:108)
>         at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:97)
>         at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
>         at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:153)
>         at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:134)
>         at org.apache.kafka.common.network.Selector.poll(Selector.java:286)
>         at kafka.network.Processor.run(SocketServer.scala:413)
>         at java.lang.Thread.run(Unknown Source)
> Any pointers to what other metrics/configuration we can check to
> determine the root cause
> of this problem?
> We have 32 nodes with 2 TB each, and we are ingesting approximately 3 TB
> per day into Kafka (and reading the same amount of data back via Hadoop
> MR jobs). The replication factor is 3. Most other parameters are left at
> their defaults. The Kafka broker process has Xmx=1G and MaxDirectMemory=1G.
> Thanks,
> -Vinay
>
