Thanks Karolis.
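
For the archives: the "Map failed" OutOfMemoryError comes from
FileChannel.map() being unable to create another memory mapping. Kafka
mmaps an index file per log segment while loading logs at startup, so a
broker with many partitions/segments can exhaust the kernel's
vm.max_map_count limit (the Linux default is typically 65530). A rough
sketch of the fix on a Linux broker host; 262144 is only an example
value, size it to your segment count:

  # count the mappings the broker currently holds
  # (assumes the broker is the only java process on the host)
  sudo wc -l /proc/$(pidof java)/maps

  # check the current limit
  sysctl vm.max_map_count

  # raise it for the running kernel
  sudo sysctl -w vm.max_map_count=262144

  # persist the new limit across reboots
  echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf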

On Wed, 4 Sep, 2019, 5:57 PM Karolis Pocius,
<karolis.poc...@sentiance.com.invalid> wrote:

> I had the same issue, which was solved by increasing max_map_count:
> https://stackoverflow.com/a/43675621
>
>
> On Wed, Sep 4, 2019 at 2:59 PM SenthilKumar K <senthilec...@gmail.com>
> wrote:
>
> > Hello Experts, we have deployed a 10-node Kafka cluster in production.
> > Recently two of the nodes went down due to a network problem, and we
> > brought them back up after 24 hours. While bootstrapping the Kafka
> > service on the failed nodes, we saw the error below and the brokers
> > failed to come up.
> >
> > Kafka Version: kafka_2.11-2.2.0
> >
> > JVM Options:
> > /a/java64/jdk1.8.0/bin/java -Xmx15G -Xms10G -server -XX:+UseG1GC
> > -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35
> > -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true
> > -Xloggc:/a/opt/kafka/bin/../logs/kafkaServer-gc.log -verbose:gc
> > -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps
> > -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M
> > -Davoid_insecure_jmxremote
> >
> >
> > [2019-09-03 10:54:10,630] ERROR Error while deleting the clean shutdown file in dir /tmp/data (kafka.server.LogDirFailureChannel)
> > java.io.IOException: Map failed
> >     at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:940)
> >     at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:126)
> >     at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:53)
> >     at kafka.log.LogSegment$.open(LogSegment.scala:632)
> >     at kafka.log.Log$$anonfun$kafka$log$Log$$loadSegmentFiles$3.apply(Log.scala:467)
> >     at kafka.log.Log$$anonfun$kafka$log$Log$$loadSegmentFiles$3.apply(Log.scala:454)
> >     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
> >     at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> >     at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
> >     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
> >     at kafka.log.Log.kafka$log$Log$$loadSegmentFiles(Log.scala:454)
> >     at kafka.log.Log$$anonfun$loadSegments$1.apply$mcV$sp(Log.scala:565)
> >     at kafka.log.Log$$anonfun$loadSegments$1.apply(Log.scala:559)
> >     at kafka.log.Log$$anonfun$loadSegments$1.apply(Log.scala:559)
> >     at kafka.log.Log.retryOnOffsetOverflow(Log.scala:2024)
> >     at kafka.log.Log.loadSegments(Log.scala:559)
> >     at kafka.log.Log.<init>(Log.scala:292)
> >     at kafka.log.Log$.apply(Log.scala:2157)
> >     at kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:265)
> >     at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$11$$anonfun$apply$15$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:345)
> >     at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:63)
> >     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> >     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> >     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> >     at java.lang.Thread.run(Thread.java:748)
> > Caused by: java.lang.OutOfMemoryError: Map failed
> >     at sun.nio.ch.FileChannelImpl.map0(Native Method)
> >     at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:937)
> >     ... 25 more
> >
> > Any hints on how to solve this problem? Thanks in advance!
> >
> > --Senthil
> >
>
