Hello,

First, let me confirm that I have read the following FAQ entry:

https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-WhydoIseeerror%22Shouldnotsetlogendoffsetonpartition%22inthebrokerlog?

I have a 3-node cluster (3 ZooKeeper nodes and 3 brokers, on 3 different
physical servers). My advertised.listeners setting is the same in all three
brokers:

advertised.listeners=PLAINTEXT://0.0.0.0:9092
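
For comparison, my understanding is that advertised.listeners is normally set
per broker to an address that the other brokers and clients can resolve,
rather than a wildcard, so a per-broker setup would look something like this
(the hostnames here are made up purely for illustration):

advertised.listeners=PLAINTEXT://kafka1.example.com:9092   (broker 1)
advertised.listeners=PLAINTEXT://kafka2.example.com:9092   (broker 2)
advertised.listeners=PLAINTEXT://kafka3.example.com:9092   (broker 3)

I used 0.0.0.0 on all three brokers instead, which may be relevant to what
follows.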

After creating a new topic, I am getting the following stack trace on one of
my brokers:

> [2017-07-21 14:44:06,112] ERROR [ReplicaFetcherThread-0-2], Error for partition [40,0] to broker 2:org.apache.kafka.common.errors.UnknownServerException: The server experienced an unexpected error when processing the request (kafka.server.ReplicaFetcherThread)
> [2017-07-21 14:44:07,128] ERROR [KafkaApi-3] Error when handling request {replica_id=3,max_wait_time=500,min_bytes=1,max_bytes=10485760,topics=[{topic=40,partitions=[{partition=0,fetch_offset=0,max_bytes=1048576}]}]} (kafka.server.KafkaApis)
> kafka.common.KafkaException: Should not set log end offset on partition 40-0's local replica 3
>         at kafka.cluster.Replica.logEndOffset_$eq(Replica.scala:88)
>         at kafka.cluster.Replica.updateLogReadResult(Replica.scala:75)
>         at kafka.cluster.Partition.updateReplicaLogReadResult(Partition.scala:238)
>         at kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:918)
>         at kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:915)
>         at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>         at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>         at kafka.server.ReplicaManager.updateFollowerLogReadResults(ReplicaManager.scala:915)
>         at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:462)
>         at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:530)
>         at kafka.server.KafkaApis.handle(KafkaApis.scala:81)
>         at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:62)
>         at java.lang.Thread.run(Thread.java:745)
> [2017-07-21 14:44:07,128] ERROR [ReplicaFetcherThread-0-2], Error for partition [40,0] to broker 2:org.apache.kafka.common.errors.UnknownServerException: The server experienced an unexpected error when processing the request (kafka.server.ReplicaFetcherThread)
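
In case the topic layout matters, I can also pull the partition assignment
with kafka-topics from my client machine; the invocation would be along these
lines (the ZooKeeper address is a placeholder for my actual quorum):

kafka-topics.bat --describe --zookeeper zk1:2181 --topic 40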


Inspecting the broker registrations with zookeeper-shell.bat (get
/brokers/ids/1, /brokers/ids/2, and /brokers/ids/3) reveals the following:

> get /brokers/ids/1
> {"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://0.0.0.0:9092"],"jmx_port":-1,"host":"0.0.0.0","timestamp":"1500646657734","port":9092,"version":4}
> cZxid = 0xe0000000f
> ctime = Fri Jul 21 14:17:37 UTC 2017
> mZxid = 0xe0000000f
> mtime = Fri Jul 21 14:17:37 UTC 2017
> pZxid = 0xe0000000f
> cversion = 0
> dataVersion = 0
> aclVersion = 0
> ephemeralOwner = 0x15d6582c70b0001
> dataLength = 184
> numChildren = 0
>
> get /brokers/ids/2
> {"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://0.0.0.0:9092"],"jmx_port":-1,"host":"0.0.0.0","timestamp":"1500646657006","port":9092,"version":4}
> cZxid = 0xe0000000b
> ctime = Fri Jul 21 14:17:37 UTC 2017
> mZxid = 0xe0000000b
> mtime = Fri Jul 21 14:17:37 UTC 2017
> pZxid = 0xe0000000b
> cversion = 0
> dataVersion = 0
> aclVersion = 0
> ephemeralOwner = 0x15d6582c70b0000
> dataLength = 184
> numChildren = 0
>
> get /brokers/ids/3
> {"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://0.0.0.0:9092"],"jmx_port":-1,"host":"0.0.0.0","timestamp":"1500646656895","port":9092,"version":4}
> cZxid = 0xe00000008
> ctime = Fri Jul 21 14:17:36 UTC 2017
> mZxid = 0xe00000008
> mtime = Fri Jul 21 14:17:36 UTC 2017
> pZxid = 0xe00000008
> cversion = 0
> dataVersion = 0
> aclVersion = 0
> ephemeralOwner = 0x35d6582c7800000
> dataLength = 184
> numChildren = 0
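
For reference, the exact invocation I used for each broker id was along these
lines (again with a placeholder for my real ZooKeeper host):

zookeeper-shell.bat zk1:2181 get /brokers/ids/1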



Unless I have misled myself, each broker instance has its default interface
bound to port 9092, so the host:port registration should be good. Does this
mean that the ZooKeeper record is somehow corrupted, or should I be looking
somewhere else?
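
If it helps, I can also double-check that binding directly on each broker
host with something like this (assuming the broker hosts run Linux):

ss -tlnp | grep 9092    # confirm the broker process is listening on 9092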

As always, any help is much appreciated.

Regards,
