Thanks, Guozhang, for the pointer to the mapped NIO issue. The problem in my case was that the disk was still out of space (I thought I had freed some up, but I actually hadn't). Curiously, I ran out of space on two occasions: in one case the error message was the clear "No space left on device", and in the other it was the cryptic InternalError I mentioned previously.
2014-11-17 20:24 GMT+03:00 Guozhang Wang <wangg...@gmail.com>:
> This is interesting as I have not seen it before. Searched a bit on the web
> and this seems promising?
>
> http://stackoverflow.com/questions/2949371/java-map-nio-nfs-issue-causing-a-vm-fault-a-fault-occurred-in-a-recent-uns
>
> Guozhang
>
> On Fri, Nov 14, 2014 at 5:38 AM, Yury Ruchin <yuri.ruc...@gmail.com> wrote:
>
> > Hello,
> >
> > I've run into an issue with Kafka 0.8.1.1 broker. The broker stopped
> > working after the disk it was writing to ran out of space. I freed up
> > some space and tried to restart the broker. It started some recovery
> > procedure, but after some short time in the logs I see the following
> > strange error message:
> >
> > FATAL kafka.server.KafkaServerStartable - Fatal error during
> > KafkaServerStable startup. Prepare to shutdown
> > java.lang.InternalError: a fault occurred in a recent unsafe memory
> > access operation in compiled Java code
> >     at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:39)
> >     at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
> >     at kafka.log.FileMessageSet$$anon$1.makeNext(FileMessageSet.scala:188)
> >     at kafka.log.FileMessageSet$$anon$1.makeNext(FileMessageSet.scala:165)
> >     at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
> >     at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
> >     at kafka.log.LogSegment.recover(LogSegment.scala:165)
> >     at kafka.log.Log.recoverLog(Log.scala:179)
> >     at kafka.log.Log.loadSegments(Log.scala:155)
> >     at kafka.log.Log.<init>(Log.scala:64)
> >     at kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$4.apply(LogManager.scala:118)
> >     at kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$4.apply(LogManager.scala:113)
> >     at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> >     at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:105)
> >     at kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:113)
> >     at kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:105)
> >     at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> >     at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
> >     at kafka.log.LogManager.loadLogs(LogManager.scala:105)
> >     at kafka.log.LogManager.<init>(LogManager.scala:57)
> >     at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:275)
> >     at kafka.server.KafkaServer.startup(KafkaServer.scala:72)
> >     at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
> >     at kafka.Kafka$.main(Kafka.scala:46)
> >     at kafka.Kafka.main(Kafka.scala)
> >
> > and then everything starts over. I've been waiting for a while, but the
> > broker keeps restarting. How can I bring it back to life?
> >
> > Thanks!
>
>
> --
> -- Guozhang
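For anyone finding this thread later: the takeaway is that an InternalError like the one above can be a disguised I/O failure on a memory-mapped file, so it is worth confirming the log directory really has free space before restarting the broker. A minimal, hedged sketch of such a check using the standard `java.nio.file.FileStore` API follows; the `DiskSpaceCheck` class name is made up here, and the directory to pass in is whatever your broker's `log.dirs` actually points at (not something this thread specifies):

```java
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DiskSpaceCheck {

    // Returns the usable space, in MiB, on the file store holding `dir`.
    static long usableMb(Path dir) throws Exception {
        FileStore store = Files.getFileStore(dir);
        return store.getUsableSpace() / (1024L * 1024L);
    }

    public static void main(String[] args) throws Exception {
        // Pass your broker's log.dirs path as the argument; "." is just a fallback.
        Path logDir = Paths.get(args.length > 0 ? args[0] : ".");
        long free = usableMb(logDir);
        System.out.println("Usable space in " + logDir.toAbsolutePath() + ": " + free + " MiB");
        if (free < 1024) { // arbitrary safety margin, not a Kafka requirement
            System.out.println("Less than 1 GiB free - log recovery may fail again.");
        }
    }
}
```

Running this before a restart would have caught the situation in this thread, where the space only appeared to have been freed.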