Hi team,

I started three Kafka brokers on the same machine, listening on ports 9092, 9093, and 9094, each with its own dedicated log and data directories.
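For reference, the per-broker setup described above might look like this in each broker's server.properties (the broker id, directory paths, and ZooKeeper address here are illustrative assumptions, not taken from my actual config):

```properties
# server-0.properties — repeat with broker.id, port, and log.dirs
# changed for the brokers on 9093 and 9094
broker.id=0
listeners=PLAINTEXT://localhost:9092
log.dirs=/tmp/kafka-logs-0
zookeeper.connect=localhost:2181
```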
I created a topic with 1 partition and a replication factor of 3. Here is the code:

NewTopic topic = new NewTopic("Queue", 1, (short) 3);
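The NewTopic line above is only a fragment; a minimal sketch of the surrounding AdminClient call might look like this (the bootstrap address list is an assumption based on the three ports mentioned above):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateQueueTopic {
    // 1 partition, replicated across all 3 brokers, as in the snippet above.
    static NewTopic buildTopic() {
        return new NewTopic("Queue", 1, (short) 3);
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Bootstrap list assumed from the three local ports; any one broker is enough.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                "localhost:9092,localhost:9093,localhost:9094");
        try (Admin admin = Admin.create(props)) {
            admin.createTopics(Collections.singletonList(buildTopic())).all().get();
        }
    }
}
```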

Then I wrote a producer using the Producer API:

producer.send(new ProducerRecord<String, String>("queue", Integer.toString(i), Integer.toString(i)));
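For completeness, a self-contained sketch of how such a producer might be wired up; the bootstrap addresses and serializer settings are assumptions based on the setup described above, not taken from my actual code:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class QueueProducer {
    // Key and value are both the loop counter, as in the snippet above.
    static ProducerRecord<String, String> buildRecord(int i) {
        return new ProducerRecord<String, String>(
                "queue", Integer.toString(i), Integer.toString(i));
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        // Bootstrap list and serializers are assumed, not from the original code.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "localhost:9092,localhost:9093,localhost:9094");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 10; i++) {
                producer.send(buildRecord(i));
            }
        }
    }
}
```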

When the message is actually sent, one of the brokers reported:

[2020-07-13 12:00:46,997] TRACE [Broker id=2] Handling LeaderAndIsr request correlationId 4 from controller 0 epoch 1 starting the become-leader transition for partition queue-0 (state.change.logger)
[2020-07-13 12:00:47,031] ERROR [Broker id=2] Error while processing LeaderAndIsr request correlationId 4 received from controller 0 epoch 1 for partition queue-0 (state.change.logger)
java.io.IOException: The requested operation cannot be performed on a file with a user-mapped section open
        at java.io.RandomAccessFile.setLength(Native Method)
        at kafka.log.AbstractIndex.$anonfun$resize$1(AbstractIndex.scala:186)
        at kafka.log.AbstractIndex.resize(AbstractIndex.scala:172)
        at kafka.log.AbstractIndex.$anonfun$trimToValidSize$1(AbstractIndex.scala:238)
        at kafka.log.AbstractIndex.trimToValidSize(AbstractIndex.scala:238)
        at kafka.log.LogSegment.recover(LogSegment.scala:380)
        at kafka.log.Log.recoverSegment(Log.scala:632)
        at kafka.log.Log.recoverLog(Log.scala:771)
        at kafka.log.Log.$anonfun$loadSegments$3(Log.scala:707)
        at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)
        at kafka.log.Log.retryOnOffsetOverflow(Log.scala:2329)
        at kafka.log.Log.loadSegments(Log.scala:707)
        at kafka.log.Log.<init>(Log.scala:297)
        at kafka.log.Log$.apply(Log.scala:2463)
        at kafka.log.LogManager.$anonfun$getOrCreateLog$1(LogManager.scala:760)
        at kafka.log.LogManager.getOrCreateLog(LogManager.scala:715)
        at kafka.cluster.Partition.createLog(Partition.scala:308)
        at kafka.cluster.Partition.createLogIfNotExists(Partition.scala:292)
        at kafka.cluster.Partition.$anonfun$makeLeader$1(Partition.scala:489)
        at kafka.cluster.Partition.makeLeader(Partition.scala:478)
        at kafka.server.ReplicaManager.$anonfun$makeLeaders$5(ReplicaManager.scala:1360)
        at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
        at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
        at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
        at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:1358)
        at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:1248)
        at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:221)
        at kafka.server.KafkaApis.handle(KafkaApis.scala:132)
        at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:70)
        at java.lang.Thread.run(Unknown Source)

Can someone help here?
